Dialectical behavior therapy (DBT) is an evidence-based psychotherapy that began with efforts to treat personality disorders and interpersonal conflicts. Evidence suggests that DBT can be useful in treating mood disorders and suicidal ideation as well as for changing behavioral patterns such as self-harm and substance use. DBT evolved into a process in which the therapist and client work with acceptance and change-oriented strategies and ultimately balance and synthesize them—comparable to the philosophical dialectical process of thesis and antithesis, followed by synthesis.
This approach was developed by Marsha M. Linehan, a psychology researcher at the University of Washington. She defines it as "a synthesis or integration of opposites". DBT was designed to help people increase their emotional and cognitive regulation by learning about the triggers that lead to reactive states and by helping them assess which coping skills to apply in the sequence of events, thoughts, feelings, and behaviors so as to avoid undesired reactions. Linehan later publicly disclosed her own struggles and her belief that she has borderline personality disorder.
DBT grew out of a series of failed attempts to apply the standard cognitive behavioral therapy (CBT) protocols of the late 1970s to chronically suicidal clients. Research on its effectiveness in treating other conditions has been fruitful. DBT has been used by practitioners to treat people with depression, drug and alcohol problems, post-traumatic stress disorder (PTSD), traumatic brain injuries (TBI), binge-eating disorder, and mood disorders. Research indicates that DBT might help patients with symptoms and behaviors associated with spectrum mood disorders, including self-injury. Research also suggests its effectiveness with survivors of sexual abuse and in treating chemical dependency.
DBT combines standard cognitive-behavioral techniques for emotion regulation and reality-testing with concepts of distress tolerance, acceptance, and mindful awareness largely derived from contemplative meditative practice. DBT is based upon the biosocial theory of mental illness and is the first therapy that has been experimentally demonstrated to be generally effective in treating borderline personality disorder (BPD). The first randomized clinical trial of DBT showed reduced rates of suicidal gestures, psychiatric hospitalizations, and treatment dropouts when compared to usual treatment. A meta-analysis found that DBT reached moderate effects in individuals with BPD. DBT may not be appropriate as a universal intervention: an adapted DBT skills-training intervention delivered to adolescents in schools was shown to be harmful or to have null effects. Conclusions of iatrogenic harm are unwarranted, however, as the majority of participants did not meaningfully engage with the assigned activities, and higher engagement predicted more positive outcomes.
== Overview ==
DBT is sometimes considered a part of the "third wave" of cognitive-behavioral therapy, as DBT adapts CBT to assist patients in dealing with stress. DBT focuses on treating disorders that are characterized by impulsivity and emotional dysregulation.
DBT strives to have the patient view the therapist as an accepting ally rather than an adversary in the treatment of psychological issues: many treatments of the era left patients feeling "criticized, misunderstood, and invalidated" because of the way those methods "focused on changing cognitions and behaviors." Accordingly, the therapist aims to accept and validate the client's feelings at any given time, while nonetheless informing the client that some feelings and behaviors are maladaptive, and showing them better alternatives. In particular, DBT targets self-harm and suicide attempts by identifying the function of the behavior and finding ways to serve that function safely through DBT coping skills. DBT focuses on the client acquiring new skills and changing their behaviors, with the ultimate goal of achieving a "life worth living".
In DBT's biosocial theory of BPD, clients have a biological predisposition for emotional dysregulation, and their social environment validates maladaptive behavior.
DBT skills training alone is being used to address treatment goals in some clinical settings, and the broader goal of emotion regulation that is seen in DBT has allowed it to be used in new settings, for example, supporting parenting. There has been little study of adapting DBT to an online environment, but a review indicates that attendance is improved online, with client improvements comparable to those of the traditional format.
== Four modules ==
=== Mindfulness ===
Mindfulness is one of the core ideas behind all elements of DBT. It is considered a foundation for the other skills taught in DBT, because it helps individuals accept and tolerate the powerful emotions they may feel when challenging their habits or exposing themselves to upsetting situations.
The concept of mindfulness and the meditative exercises used to teach it are derived from traditional contemplative religious practice, though the version taught in DBT does not involve any religious or metaphysical concepts. Within DBT it is the capacity to pay attention, nonjudgmentally, to the present moment: living in the moment, experiencing one's emotions and senses fully, yet with perspective. The practice of mindfulness can also be intended to make people more aware of their environments through their five senses: touch, smell, sight, taste, and sound. Mindfulness relies heavily on the principle of acceptance, sometimes referred to as "radical acceptance". Acceptance skills rely on the patient's ability to view situations with no judgment, and to accept situations and their accompanying emotions. This causes less distress overall, which can result in reduced discomfort and symptomatology.
==== Acceptance and change ====
The first few sessions of DBT introduce the dialectic of acceptance and change. The patient must first become comfortable with the idea of therapy; once the patient and therapist have established a trusting relationship, DBT techniques can flourish. An essential part of learning acceptance is to first grasp the idea of radical acceptance: radical acceptance embraces the idea of facing situations, both positive and negative, without judgment. Acceptance also incorporates mindfulness and emotional regulation skills, which depend on the idea of radical acceptance. These skills, specifically, are what set DBT apart from other therapies.
Often, after a patient becomes familiar with the idea of acceptance, they will accompany it with change. DBT has five specific stages of change which the therapist will review with the patient: pre-contemplation, contemplation, preparation, action, and maintenance. Pre-contemplation is the first stage, in which the patient is completely unaware of their problem. In the second stage, contemplation, the patient realizes the reality of their illness: this is not an action, but a realization. It is not until the third stage, preparation, that the patient is likely to take action and prepares to move forward. This could be as simple as researching or contacting therapists. Finally, in stage four, the patient takes action and receives treatment. In the final stage, maintenance, the patient must strengthen their change in order to prevent relapse. After grasping acceptance and change, a patient can fully advance to mindfulness techniques.
There are six mindfulness skills used in DBT to bring the client closer to achieving a "wise mind", the synthesis of the rational mind and emotional mind: three "what" skills (observe, describe, participate) and three "how" skills (nonjudgmentally, one-mindfully, effectively).
=== Distress tolerance ===
The concept of distress tolerance arose from methods used in person-centered, psychodynamic, psychoanalytic, gestalt, and/or narrative therapies, along with religious and spiritual practices. Distress tolerance means learning to bear emotional discomfort skillfully, without resorting to maladaptive reactions. Healthier coping behaviors are learned, including intentional self-distraction, self-soothing, and 'radical acceptance.'
Distress tolerance skills are meant to arise naturally as a consequence of mindfulness. They have to do with the ability to accept, in a non-evaluative and nonjudgmental fashion, both oneself and the current situation. It is meant to be a non-judgmental stance, one of neither approval nor resignation. The goal is to become capable of calmly recognizing negative situations and their impact, rather than becoming overwhelmed or hiding from them. This allows individuals to make wise decisions about whether and how to take action, rather than falling into intense, desperate, and often destructive emotional reactions.
=== Emotion regulation ===
Individuals with borderline personality disorder and suicidal individuals are frequently emotionally intense and labile. They can be angry, intensely frustrated, depressed, or anxious. The theory holds that intense emotions are conditioned responses to distressing experiences, which serve as the conditioned stimuli. Emotional regulation skills are taught to help patients modify their conditioned responses.
Dialectical behavior therapy skills for emotion regulation include:
Learning how to understand and name emotions: the patient focuses on recognizing their feelings. This segment relates directly to mindfulness, which also exposes a patient to their emotions.
Identifying obstacles to changing emotions
Changing unwanted emotions: the therapist emphasizes the use of opposite-reactions, fact-checking, and problem solving to regulate emotions. While using opposite-reactions, the patient targets distressing feelings by responding with the opposite emotion.
Reducing vulnerability: the patient learns to accumulate positive emotions and to plan coping mechanisms in advance, in order to better handle difficult experiences in the future.
Increasing mindfulness of current emotions
Taking opposite action
Applying distress tolerance techniques
Managing extreme conditions: the patient focuses on applying mindfulness skills to their current emotions, in order to remain stable and alert in a crisis.
=== Interpersonal effectiveness ===
The three interpersonal skills focused on in DBT are self-respect, treating others "with care, interest, validation, and respect", and assertiveness. The dialectic involved in healthy relationships involves balancing the needs of others with the needs of the self, while maintaining one's self-respect.
== Tools ==
=== Diary cards ===
Specially formatted diary cards can be used to track relevant emotions and behaviors. Diary cards are most useful when they are filled out daily. The diary card is used to find the treatment priorities that guide the agenda of each therapy session. Both the client and therapist can use the diary card to see what has improved, gotten worse, or stayed the same.
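The structure of a diary card lends itself to a brief illustration. The following is a minimal sketch in Python of how daily entries might be recorded and summarized to surface treatment priorities for a session; the field names, 0-5 rating scale, and prioritization rule are assumptions for demonstration, not a standard DBT form.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List

@dataclass
class DiaryCardEntry:
    """One day's diary card; field names and 0-5 scales are hypothetical."""
    day: date
    urges: Dict[str, int]      # e.g. {"self_harm": 3}, rated 0-5
    emotions: Dict[str, int]   # e.g. {"shame": 4}, rated 0-5
    skills_used: List[str]     # DBT skills practiced that day

def session_priorities(week: List[DiaryCardEntry]) -> List[str]:
    """Rank targets by their peak urge rating over the week, highest first."""
    peaks: Dict[str, int] = {}
    for entry in week:
        for target, rating in entry.urges.items():
            peaks[target] = max(peaks.get(target, 0), rating)
    return sorted(peaks, key=peaks.get, reverse=True)

week = [
    DiaryCardEntry(date(2023, 5, 1), {"self_harm": 3, "substance_use": 1},
                   {"shame": 4}, ["paced breathing"]),
    DiaryCardEntry(date(2023, 5, 2), {"self_harm": 1, "substance_use": 2},
                   {"shame": 2}, ["opposite action"]),
]
print(session_priorities(week))  # ['self_harm', 'substance_use']
```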
=== Chain analysis ===
Chain analysis is a form of functional analysis of behavior with an increased focus on the sequential events that form the behavior chain. It has strong roots in behavioral psychology, in particular the applied behavior analysis concept of chaining. A growing body of research supports the use of behavior chain analysis with multiple populations.
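Because a chain analysis works through events in sequence, a small data-structure sketch can make the idea concrete. The Python below models a chain as an ordered list of links between a prompting event and the target behavior; the link categories and example content are hypothetical, not a standard clinical worksheet.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Link:
    """One step in the chain; the 'kind' categories here are illustrative."""
    kind: str          # e.g. "prompting_event", "emotion", "thought", "action"
    description: str

@dataclass
class BehaviorChain:
    target_behavior: str
    links: List[Link]
    consequences: List[str]

chain = BehaviorChain(
    target_behavior="skipped group session",
    links=[
        Link("prompting_event", "argument with roommate"),
        Link("emotion", "shame"),
        Link("thought", "'everyone will notice I'm upset'"),
        Link("action", "turned phone off"),
    ],
    consequences=["short-term relief", "missed skills practice"],
)

# Walking the chain in order highlights where an alternative skill could break it.
for i, link in enumerate(chain.links, start=1):
    print(f"{i}. {link.kind}: {link.description}")
```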
== Efficacy ==
=== Borderline personality disorder ===
DBT is the most studied therapy for the treatment of borderline personality disorder, and enough studies have been conducted to conclude that it is helpful in treating the disorder. Several studies have found neurobiological changes in individuals with BPD after DBT treatment.
=== Depression ===
A Duke University pilot study compared treatment of depression by antidepressant medication to treatment by antidepressants and dialectical behavior therapy. A total of 34 chronically depressed individuals over age 60 were treated for 28 weeks. Six months after treatment, statistically significant differences were noted in remission rates between groups, with a greater percentage of patients treated with antidepressants and dialectical behavior therapy in remission.
=== Complex post-traumatic stress disorder (CPTSD) ===
Exposure to complex trauma, or the experience of prolonged trauma with little chance of escape, can lead to the development of complex post-traumatic stress disorder (CPTSD) in an individual. The American Psychiatric Association (APA) does not recognize CPTSD as a diagnosis in the DSM-5 (the Diagnostic and Statistical Manual of Mental Disorders, the manual used by providers to diagnose, treat and discuss mental illness), though many practitioners argue that CPTSD is separate from post-traumatic stress disorder (PTSD). As of 2020, over 40 studies from 15 different countries had "consistently demonstrated the distinction between PTSD and CPTSD" and "replicated the distinct symptoms associated with each disorder", according to a 2021 literature review.
CPTSD is similar to PTSD in that its symptomatology is pervasive and includes cognitive, emotional, and biological domains, among others. CPTSD differs from PTSD in that it is believed to originate in childhood interpersonal trauma, or chronic childhood stress, and that the most common precedents are sexual traumas. Currently, the prevalence rate for CPTSD is an estimated 0.5%, while PTSD's is 1.5%. Numerous definitions for CPTSD exist. Different versions are contributed by the World Health Organization (WHO), The International Society for Traumatic Stress Studies (ISTSS), and individual clinicians and researchers.
Most definitions revolve around criteria for PTSD with the addition of several other domains. While the APA may not recognize CPTSD, the WHO has recognized this syndrome in its 11th edition of the International Classification of Diseases (ICD-11). The WHO defines CPTSD as a disorder following a single event or multiple events which cause the individual to feel stressed or trapped, characterized by low self-esteem, interpersonal deficits, and deficits in affect regulation. These deficits in affect regulation, among other symptoms, are a reason why CPTSD is sometimes compared with borderline personality disorder (BPD).
==== Similarities between CPTSD and borderline personality disorder ====
In addition to affect dysregulation, case studies reveal that patients with CPTSD can also exhibit splitting, mood swings, and fears of abandonment. Like patients with borderline personality disorder, patients with CPTSD were traumatized frequently and/or early in their development and never learned proper coping mechanisms. These individuals may use avoidance, substances, dissociation, and other maladaptive behaviors to cope. Thus, treatment for CPTSD involves stabilizing and teaching successful coping behaviors, affect regulation, and creating and maintaining interpersonal connections.
In addition to sharing symptom presentations, CPTSD and BPD can share neurophysiological similarities, for example, abnormal volume of the amygdala (emotional memory), hippocampus (memory), anterior cingulate cortex (emotion), and orbital prefrontal cortex (personality). Another shared characteristic between CPTSD and BPD is the possibility of dissociation. Further research is needed to determine the reliability of dissociation as a hallmark of CPTSD; however, it is a possible symptom. Because of the two disorders' shared symptomatology and physiological correlates, psychologists began hypothesizing that a treatment which was effective for one disorder may be effective for the other as well.
==== DBT as a treatment for CPTSD ====
DBT's use of acceptance and goal orientation as an approach to behavior change can help to instill empowerment and engage individuals in the therapeutic process. The focus on the future and change can help to prevent the individual from becoming overwhelmed by their history of trauma. This is a risk especially with CPTSD, as multiple traumas are common within this diagnosis. Generally, care providers address a client's suicidality before moving on to other aspects of treatment. Because PTSD can make an individual more likely to experience suicidal ideation, DBT can be an option to stabilize suicidality and aid in other treatment modalities.
Some critics argue that while DBT can be used to treat CPTSD, it is not significantly more effective than standard PTSD treatments. Further, this argument posits that DBT decreases self-injurious behaviors (such as cutting or burning) and increases interpersonal functioning but neglects core CPTSD symptoms such as impulsivity, cognitive schemas (repetitive, negative thoughts), and emotions such as guilt and shame. The ISTSS reports that CPTSD requires treatment which differs from typical PTSD treatment, using a multiphase model of recovery, rather than focusing on traumatic memories. The recommended multiphase model consists of establishing safety, distress tolerance, and social relations.
Because DBT has four modules which generally align with these guidelines (Mindfulness, Distress Tolerance, Affect Regulation, Interpersonal Skills), it is a treatment option. Other critiques of DBT concern the time required for the therapy to be effective. Individuals seeking DBT may not be able to commit to the required individual and group sessions, or their insurance may not cover every session.
A study co-authored by Linehan found that among women receiving outpatient care for BPD and who had attempted suicide in the previous year, 56% additionally met criteria for PTSD. Because of the correlation between borderline personality disorder traits and trauma, some settings began using DBT as a treatment for traumatic symptoms. Some providers opt to combine DBT with other PTSD interventions, such as prolonged exposure therapy (PE) (repeated, detailed description of the trauma in a psychotherapy session) or cognitive processing therapy (CPT) (psychotherapy which addresses cognitive schemas related to traumatic memories).
For example, a regimen which combined PE and DBT would include teaching mindfulness skills and distress tolerance skills, then implementing PE. The individual with the disorder would then be taught acceptance of a trauma's occurrence and how it may continue to affect them throughout their lives. Participants in clinical trials of this DBT PE regimen exhibited a decrease in symptoms, and throughout the 12-week trial, no self-injurious or suicidal behaviors were reported. Later trials similarly showed the combined regimen to be more effective than DBT alone.
Another argument which supports the use of DBT as a treatment for trauma hinges upon PTSD symptoms such as emotion dysregulation and distress. Some PTSD treatments, such as exposure therapy, may not be suitable for individuals whose distress tolerance and/or emotion regulation is low. Biosocial theory posits that emotion dysregulation is caused by an individual's heightened emotional sensitivity combined with environmental factors (such as invalidation of emotions or continued abuse/trauma) and a tendency to ruminate (repeatedly think about a negative event and how the outcome could have been changed).
An individual who has these features is likely to use maladaptive coping behaviors. DBT can be appropriate in these cases because it teaches appropriate coping skills and allows the individuals to develop some degree of self-sufficiency. The first three modules of DBT increase distress tolerance and emotion regulation skills in the individual, paving the way for work on symptoms such as intrusions, self-esteem deficiency, and interpersonal relations.
Notably, DBT has often been modified based on the population being treated. For example, in veteran populations DBT is modified to include exposure exercises and to accommodate the presence of traumatic brain injury (TBI) and insurance coverage (e.g., by shortening treatment). Populations with comorbid BPD may need to spend longer in the "Establishing Safety" phase. In adolescent populations, the skills-training aspect of DBT has elicited significant improvement in emotion regulation and in the ability to express emotion appropriately. In populations with comorbid substance use, adaptations may be made on a case-by-case basis.
For example, a provider may wish to incorporate elements of motivational interviewing (psychotherapy which uses empowerment to inspire behavior change). The degree of substance use should also be considered. For some individuals, substance use is the only coping behavior they know, and as such the provider may seek to implement skills training before targeting substance reduction. Conversely, a client's substance use may be interfering with attendance or other treatment compliance, and the provider may choose to address the substance use before implementing DBT for the trauma.
== See also ==
Acceptance and commitment therapy – Form of cognitive behavioral therapy
Behaviour therapy – Clinical psychotherapy that uses techniques derived from behaviourism and/or cognitive psychology
Cognitive emotional behavioral therapy – Mental health conditions
Mentalization-based treatment – Form of psychotherapy
Nonviolent Communication – Communication process intended to increase empathy
Rational emotive behavior therapy – Psychotherapy
Social skills – Competence facilitating interaction and communication with others
== References ==
=== Citations ===
=== General and cited sources ===
Koons, Cedar R; Robins, Clive J; Tweed, J. Lindsey; Lynch, Thomas R; Gonzalez, Alicia M; Morse, Jennifer Q; Bishop, G. Kay; Butterfield, Marian I; Bastian, Lori A (2001). "Efficacy of dialectical behavior therapy in women veterans with borderline personality disorder". Behavior Therapy. 32 (2): 371–390. CiteSeerX 10.1.1.453.1646. doi:10.1016/s0005-7894(01)80009-5.
Linehan, M.M.; Comtois, K.A.; Murray, A.M.; Brown, M.Z.; Gallop, R.J.; Heard, H.L.; Korslund, K.E.; Tutek, D.A.; Reynolds, S.K.; Lindenboim, N. (2006). "Two-year randomized controlled trial and follow-up of dialectical behavior therapy vs therapy by experts for suicidal behaviors and borderline personality disorder". Arch Gen Psychiatry. 63 (7): 757–66. doi:10.1001/archpsyc.63.7.757. PMID 16818865.
Linehan, M.M.; Dimeff, L.A.; Reynolds, S.K.; Comtois, K.A.; Welch, S.S.; Heagerty, P.; Kivlahan, D.R. (2002). "Dialectical behavior therapy versus comprehensive validation plus 12-step for the treatment of opioid dependent women meeting criteria for borderline personality disorder". Drug and Alcohol Dependence. 67 (1): 13–26. doi:10.1016/s0376-8716(02)00011-x. PMID 12062776.
Linehan, M.M.; Heard, H.L. (1993). "'Impact of treatment accessibility on clinical course of parasuicidal patients': Reply". Archives of General Psychiatry. 50 (2): 157–158. doi:10.1001/archpsyc.1993.01820140083011.
Linehan, M.M.; Schmidt, H.; Dimeff, L.A.; Craft, J.C.; Kanter, J.; Comtois, K.A. (1999). "Dialectical behavior therapy for patients with borderline personality disorder and drug-dependence". American Journal on Addictions. 8 (4): 279–292. doi:10.1080/105504999305686. PMID 10598211.
Linehan, M.M.; Tutek, D.A.; Heard, H.L.; Armstrong, H.E. (1994). "Interpersonal outcome of cognitive behavioral treatment for chronically suicidal borderline patients". American Journal of Psychiatry. 151 (12): 1771–1776. doi:10.1176/ajp.151.12.1771. PMID 7977884.
Lopez, Amy; Chessick, Cheryl A. (2013). "DBT Graduate Group Pilot Study: A Model to Generalize Skills to Create a "Life Worth Living"". Social Work in Mental Health. 11 (2): 141–153. doi:10.1080/15332985.2012.755145. S2CID 143376433.
van den Bosch, L.M.C.; Verheul, R.; Schippers, G.M.; van den Brink, W. (2002). "Dialectical Behavior Therapy of borderline patients with and without substance use problems: Implementation and long-term effects". Addictive Behaviors. 27 (6): 911–923. doi:10.1016/s0306-4603(02)00293-9. PMID 12369475.
Verheul, R.; van den Bosch, L.M.C.; Koeter, M.W.J.; de Ridder, M.A.J.; Stijnen, T.; van den Brink, W. (2003). "Dialectical behaviour therapy for women with borderline personality disorder: 12-month, randomised clinical trial in the Netherlands". British Journal of Psychiatry. 182 (2): 135–140. doi:10.1192/bjp.182.2.135. PMID 12562741.
== Further reading ==
=== Self-help ===
Galen, Gillian; Aguirre, Blaise (2021). DBT For Dummies. ISBN 978-1-119-73012-5. OCLC 1191215618.
Depressed and Anxious: The Dialectical Behavior Therapy Workbook for Overcoming Depression & Anxiety by Thomas Marra. ISBN 978-1-57224-363-7.
Dialectical Behavior Therapy Workbook: Practical DBT Exercises for Learning Mindfulness, Interpersonal Effectiveness, Emotion Regulation, & Distress Tolerance (New Harbinger Self-Help Workbook) by Matthew McKay, Jeffrey C. Wood, and Jeffrey Brantley. ISBN 978-1-57224-513-6.
Don't Let Your Emotions Run Your Life: How Dialectical Behavior Therapy Can Put You in Control (New Harbinger Self-Help Workbook) by Scott E. Spradlin. ISBN 978-1-57224-309-5.
The High Conflict Couple: A Dialectical Behavior Therapy Guide to Finding Peace, Intimacy, & Validation by Alan E. Fruzzetti. ISBN 1-57224-450-X.
== External links ==
World Dialectical Behavior Therapy Association (WDBTA)
Linehan Board of Certification (DBT-LBC)

Source: Wikipedia/Dialectical_behavior_therapy
Multimodal therapy (MMT) is an approach to psychotherapy devised by psychologist Arnold Lazarus, who originated the term behavior therapy in psychotherapy. It is based on the idea that humans are biological beings that think, feel, act, sense, imagine, and interact—and that psychological treatment should address each of these modalities. Multimodal assessment and treatment follows seven reciprocally influential dimensions of personality (or modalities) known by their acronym BASIC I.D.: behavior, affect, sensation, imagery, cognition, interpersonal relationships, and drugs/biology.
Multimodal therapy is based on the idea that the therapist must address these multiple modalities of an individual to identify and treat a mental disorder. According to MMT, each individual is affected in different ways and in different amounts by each dimension of personality, and should be treated accordingly for treatment to be successful. It sees individuals as products of interplay among genetic endowment, physical environment, and social learning history. To state that learning plays a central role in the development and resolution of our emotional problems is to communicate little. For events to connect, they must occur simultaneously or in close succession. An association may exist when the responses that one stimulus provokes are predictable and reliable, and similar to those another provokes. In this regard, classical conditioning and operant conditioning are two central concepts in MMT.
== BASIC I.D. ==
BASIC I.D. refers to the seven dimensions of personality according to Lazarus. Creating a successful treatment for a specific individual requires that the therapist consider each dimension, and the individual's deficits in each.
B represents behavior, which can be manifested through the use of inappropriate acts, habits, gestures, or the lack of appropriate behaviors.
A stands for affect, which can be seen as the level of negative feelings or emotions one experiences.
S is sensation, or the negative bodily sensations or physiological symptoms such as pain, tension, sweat, nausea, quick heartbeat, etc.
I stands for imagery, which is the existence of negative cognitive images or mental pictures.
C represents cognition or the degree of negative thoughts, attitudes, or beliefs.
The second I stands for interpersonal relationships, and refers to one's ability to form successful relationships with others. It is based on social skills and support systems.
D is for drugs and biological functions, and examines the individual's physical health, drug use, and other lifestyle choices.
Multimodal therapy addresses the fact that different people depend on, or are more influenced by, some personality dimensions more than others. Some people are prone to deal with their problems on their own, cognitively, while others are more likely to draw support from others, and still others are likely to deal with problems through physical means such as exercise or drugs. All reactions are a combination of how the seven dimensions work together in an individual. Once the source of the problem is found, treatment can focus on that specific dimension more than the others.
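As a rough illustration of how a structural profile across the seven modalities might be used, the Python sketch below ranks modalities by a problem rating; the 0-10 scale and the idea of picking the most affected modalities are assumptions for demonstration, not part of Lazarus's published assessment procedure.

```python
# Illustrative only: modality names follow the BASIC I.D. acronym, but the
# 0-10 ratings are a hypothetical scoring scheme, not Lazarus's method.
BASIC_ID = ["behavior", "affect", "sensation", "imagery",
            "cognition", "interpersonal", "drugs_biology"]

def dominant_modalities(profile: dict, top_n: int = 2) -> list:
    """Return the modalities with the highest problem ratings."""
    missing = set(BASIC_ID) - set(profile)
    if missing:
        raise ValueError(f"profile missing modalities: {missing}")
    return sorted(profile, key=profile.get, reverse=True)[:top_n]

profile = {"behavior": 4, "affect": 8, "sensation": 6, "imagery": 3,
           "cognition": 7, "interpersonal": 5, "drugs_biology": 2}
print(dominant_modalities(profile))  # ['affect', 'cognition']
```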
== Function ==
MMT begins after the patient has been assessed on their emotional responses, sensory experiences, and the manner in which they interact with the people around them, across behavior, affect, sensation, imagery, cognition, drugs/biology, and interpersonal activity. Based on this assessment, the therapist introduces the patient to the first session, during which therapist and patient create a list of problems and the treatments likely to suit the patient best. Because treatment is built around the individual case, each remedial strategy is selected for its expected effectiveness for that patient.
After the initial assessment is complete, a more detailed diagnosis is made using questionnaires. The therapist assesses both the patient's actual profile and their structural profile. This diagnosis defines the goals that the therapist and the patient want to achieve by the end of treatment. The therapist also evaluates other ways of treating the patient; relaxation tapes are often used to calm the patient. Besides psychotherapy, the therapist may include dietary measures and stress-management programs to treat the patient's associated psychiatric symptoms. The therapist's prime focus is to ease the patient's distress and meet their needs by studying their behavior and mannerisms.
With the patient's prior consent, the therapist records all sessions and provides copies of the recordings to the patient. These recordings act as a supporting resource when the therapist is evaluating the patient's behavior. MMT is a flexible mode of psychotherapy because each treatment plan is devised with all possibilities in mind. For a single patient, a session could last no more than a few hours, depending on the therapist's analysis of the patient's behavior. However, if the patient presents a condition that needs multiple treatments, sessions may extend further so that the therapist can analyze the patient in more depth.
== CBT ==
Multimodal therapy originated with cognitive behavioral therapy (CBT), which is a fusion of cognitive therapy and behavior therapy. Behavior therapy focused on the consideration of external behaviors, while cognitive therapy focused on mental aspects and internal processes; combining the two made it possible to utilize both internal and external factors of treatment simultaneously.
Arnold Lazarus added the idea that, since personality is multi-dimensional, treatment must also consider multiple dimensions of personality to be effective. His idea of MMT involves examining symptoms on each dimension of personality in order to find the right combination of therapeutic techniques to address them all. Lazarus retained the basic premises of CBT, but believed that more of the individual's specific needs and personality dimensions must be considered.
== See also ==
Common factors theory
Integrative psychotherapy
== References ==

Source: Wikipedia/Multimodal_therapy
Hypnotherapy, also known as hypnotic medicine, is the use of hypnosis in psychotherapy. Hypnotherapy is generally not considered to be based on scientific evidence, and is rarely recommended in clinical practice guidelines. However, several psychological reviews and meta-analyses suggest that hypnotherapy can be effective as an adjunctive treatment for a number of disorders, including chronic and acute pain, irritable bowel syndrome, post-traumatic stress disorder (PTSD), phobias, and some eating disorders.
== Definition ==
The United States Department of Labor's Dictionary of Occupational Titles (DOT) describes the job of the hypnotherapist: "Induces hypnotic state in client to increase motivation or alter behavior patterns: Consults with client to determine nature of problem. Prepares client to enter hypnotic state by explaining how hypnosis works and what client will experience. Tests subject to determine degree of physical and emotional suggestibility. Induces hypnotic state in client, using individualized methods and techniques of hypnosis based on interpretation of test results and analysis of client's problem. May train client in self-hypnosis conditioning."
=== Traditional ===
The form of hypnotherapy practiced by most Victorian hypnotists, including James Braid and Hippolyte Bernheim, mainly employed direct suggestion of symptom removal, with some use of therapeutic relaxation and occasionally aversion to alcohol, drugs, etc.
=== Ericksonian ===
In the 1950s, Milton H. Erickson developed a radically different approach to hypnotism, which has subsequently become known as "Ericksonian hypnotherapy" or "Neo-Ericksonian hypnotherapy." Based on his belief that dysfunctional behaviors were defined by social tension, Erickson coopted the subject's behavior to establish rapport, a strategy he termed "utilization." Once rapport was established, he made use of an informal conversational approach to direct awareness. His methods included complex language patterns and client-specific therapeutic strategies (reflecting the nature of utilization). He claimed to have developed ways to suggest behavior changes during apparently ordinary conversations.
This divergence from tradition led some, including Andre Weitzenhoffer, to dispute whether Erickson was right to label his approach "hypnosis" at all. Erickson's foundational paper, however, considers hypnosis as a mental state in which specific types of "work" may be done, rather than a technique of induction.
The founders of neuro-linguistic programming (NLP), a method somewhat similar in some regards to some versions of hypnotherapy, claimed that they had modelled the work of Erickson extensively and assimilated it into their approach. Weitzenhoffer disputed whether NLP bears any genuine resemblance to Erickson's work.
=== Solution-focused ===
In the 2000s, hypnotherapists began to combine aspects of solution-focused brief therapy (SFBT) with Ericksonian hypnotherapy to produce therapy that was goal-focused (what the client wanted to achieve) rather than the more traditional problem-focused approach (spending time discussing the issues that brought the client to seek help). A solution-focused hypnotherapy session may include techniques from NLP.
=== Cognitive/behavioral ===
Cognitive behavioral hypnotherapy (CBH) is an integrated psychological therapy employing clinical hypnosis and cognitive behavioral therapy (CBT). The use of CBT in conjunction with hypnotherapy may result in greater treatment effectiveness. A meta-analysis of eight different types of research revealed "a 70% greater improvement" for patients undergoing an integrated treatment than those using CBT only.
In 1974, Theodore X. Barber and his colleagues published a review of the research which argued, following the earlier social psychology of Theodore R. Sarbin, that hypnotism was better understood not as a "special state" but as the result of normal psychological variables, such as active imagination, expectation, appropriate attitudes, and motivation. Barber introduced the term "cognitive-behavioral" to describe the nonstate theory of hypnotism, and discussed its application to behavior therapy.
The growing application of cognitive and behavioral psychological theories and concepts to the explanation of hypnosis paved the way for closer integration of hypnotherapy with various cognitive and behavioral therapies.
Many cognitive and behavioral therapies were themselves originally influenced by older hypnotherapy techniques, e.g., the systematic desensitisation of Joseph Wolpe, the cardinal technique of early behavior therapy, was originally called "hypnotic desensitisation" and derived from the Medical Hypnosis (1948) of Lewis Wolberg.
=== Curative ===
Peter Marshall, author of A Handbook of Hypnotherapy, devised the Trance Theory of Mental Illness, which asserts that people suffering from depression, or certain other kinds of neuroses, are already living in a trance. He states that this means the hypnotherapist does not need to induce trance, but instead needs to help the patient understand this and lead them out of it.
=== Mindful ===
Mindful hypnotherapy is a therapy that incorporates mindfulness and hypnotherapy. A pilot study was conducted at Baylor University, Texas, and published in the International Journal of Clinical and Experimental Hypnosis. Gary Elkins, director of the Mind-Body Medicine Research Laboratory at Baylor University, called it "a valuable option for treating anxiety and stress reduction" and "an innovative mind-body therapy". The study showed a decrease in stress and an increase in mindfulness.
=== Relationship to scientific medicine ===
Hypnotherapy practitioners occasionally attract the attention of mainstream medicine. Attempts to instill academic rigor have been frustrated by the complexity of client suggestibility, which has social and cultural aspects, including the practitioner's reputation. Results achieved in one time and center of study have not been reliably transmitted to future generations.
In the 1700s, Anton Mesmer offered pseudoscientific justification for his practices, but a commission that included Benjamin Franklin debunked his rationalizations.
== Effectiveness ==
=== General ===
According to the Royal College of Psychiatrists, "studies have shown that hypnotherapy can help to treat a range of physical and mental health conditions" and "in many cases, hypnotherapy and other uses of suggestion can provide fast, effective treatment."
=== Menopause ===
There is evidence supporting the use of hypnotherapy in the treatment of menopause related symptoms, including hot flashes. The North American Menopause Society recommends hypnotherapy for the nonhormonal management of menopause-associated vasomotor symptoms, giving it the highest level of evidence.
=== Irritable bowel syndrome ===
The use of hypnotherapy in treating the symptoms of irritable bowel syndrome is supported by research, including randomized controlled trials. Gut-directed hypnotherapy is recommended in the treatment of irritable bowel syndrome by the American College of Gastroenterology clinical guideline for the management of IBS.
=== Childbirth ===
Hypnotherapy is often applied in the birthing process and the post-natal period, but there is insufficient evidence to determine if it alleviates pain during childbirth and no evidence that it is effective against post-natal depression.
=== Bulimia ===
Literature shows that a wide variety of hypnotic interventions have been investigated for the treatment of bulimia nervosa, with inconclusive effects. Similar studies have shown that groups with bulimia nervosa undergoing hypnotherapy had better outcomes than those receiving no treatment, placebos, or other alternative treatments.
=== Anxiety ===
Hypnotherapy has been shown to be comparable in effectiveness to other forms of therapy, such as cognitive-behavioral therapy, that utilize relaxation techniques and imagery. It has also been shown to be successful in reducing anxiety in those with dental anxiety and phobias.
=== PTSD ===
Post-traumatic stress disorder (PTSD) and its symptoms have been shown to improve with hypnotherapy, in both the long and short term. As research continues, hypnotherapy is increasingly being considered an effective intervention for those with PTSD.
=== Depression ===
Hypnotherapy is effective when used to treat long-term depressive symptoms. It is comparable to the efficacy of cognitive-behavioral therapy, and when used in tandem, efficacy seems to increase.
=== Other uses ===
Historically, hypnotism was used therapeutically by some psychiatrists in the Victorian era to treat the condition then known as hysteria.
Modern hypnotherapy has been used to treat certain habit disorders, to control irrational fears, and to treat addiction.
A 2003 meta-analysis on the efficacy of hypnotherapy concluded that "the efficacy of hypnosis is not verified for a considerable part of the spectrum of psychotherapeutic practice."
In 2007, a meta-analysis from the Cochrane Collaboration found that the therapeutic effect of hypnotherapy was "superior to that of a waiting list control or usual medical management, for abdominal pain and composite primary IBS symptoms, in the short term in patients who fail standard medical therapy", with no harmful side-effects. However, the authors noted that the quality of data available was inadequate to draw firm conclusions.
Two Cochrane reviews in 2012 concluded that there was insufficient evidence to support its efficacy in managing the pain of childbirth or post-natal depression.
A 2014 meta-analysis that focused on hypnotherapy's efficacy on irritable bowel syndrome found that it was beneficial for short-term abdominal pain and other gastrointestinal issues.
In 2016, a literature review published in La Presse Médicale found that there is not sufficient evidence to "support the efficacy of hypnosis in chronic anxiety disorders".
In 2019, a Cochrane review was unable to find evidence of a benefit of hypnosis in smoking cessation and suggested that if there is, it is small at best.
A 2019 meta-analysis of hypnosis as a treatment for anxiety found that "the average participant receiving hypnosis reduced anxiety more than about 79% of control participants," also noting that "hypnosis was more effective in reducing anxiety when combined with other psychological interventions than when used as a stand-alone treatment."
== Occupational accreditation ==
=== United States ===
The laws regarding hypnosis and hypnotherapy vary by state and municipality. Some states, like Colorado, Connecticut, and Washington, have mandatory licensing and registration requirements, while many other states have no specific regulations governing the practice of hypnotherapy.
=== United Kingdom ===
==== UK National Occupational Standards ====
In 2002, the Department for Education and Skills developed National Occupational Standards for hypnotherapy linked to National Vocational Qualifications based on the then National Qualifications Framework under the Qualifications and Curriculum Authority. NCFE, a national awarding body, issues a level four national vocational qualification diploma in hypnotherapy. Currently, AIM Awards offers a Level 3 Certificate in Hypnotherapy and Counselling Skills at level 3 of the Regulated Qualifications Framework.
==== UK Confederation of Hypnotherapy Organisations (UKCHO) ====
The regulation of the hypnotherapy profession in the UK is at present the main focus of UKCHO, a non-profit umbrella body for hypnotherapy organisations. Founded in 1998 to provide a non-political arena to discuss and implement changes to the profession of hypnotherapy, UKCHO currently represents 9 of the UK's professional hypnotherapy organisations and has developed standards of training for hypnotherapists, along with codes of conduct and practice that all UKCHO-registered hypnotherapists are governed by. As a step towards the regulation of the profession, UKCHO's website now includes a National Public Register of Hypnotherapists who have been registered by UKCHO's Member Organisations and are therefore subject to UKCHO's professional standards. Further steps to regulate the hypnotherapy profession will be taken in consultation with the Prince's Foundation for Integrated Health.
==== The National Council for Hypnotherapy (NCH) ====
The National Council for Hypnotherapy is a professional association, established in 1973 to create a national membership organisation for independent hypnotherapy practitioners.
The organisation is a not-for-profit with a board of 12-15 people composed of executives and directors, the latter usually practising hypnotherapists and trainers of hypnotherapy. The current Chair, Tracey Grist, has been in the position since 2016.
The NCH is a VO (Verifying organisation) for the CNHC, which means that NCH members meet the criteria to become CNHC registrants.
The NCH membership meets the national hypnotherapy training standards via the externally verified Hypnotherapy Practitioner Diploma (HPD) through the NCFE.
Members agree to follow the CECP, the NCH's ethical code of practice. All members are expected to be insured to practice, meet supervision requirements, and meet annual CPD expectations.
=== Australia ===
The Australian government does not regulate professional hypnotherapy and the use of the occupational titles hypnotherapist or clinical hypnotherapist.
In 1996, as a result of a three-year research project led by Lindsay B. Yeates, the Australian Hypnotherapists Association (founded in 1949), the oldest hypnotism-oriented professional organization in Australia, instituted a peer-group accreditation system for full-time Australian professional hypnotherapists, the first of its kind in the world, which "accredit[ed] specific individuals on the basis of their actual demonstrated knowledge and clinical performance; instead of approving particular 'courses' or approving particular 'teaching institutions'" (Yeates, 1996, p.iv; 1999, p.xiv). The system was further revised in 1999.
Australian hypnotism/hypnotherapy organizations (including the Australian Hypnotherapists Association) are seeking government regulation similar to other mental health professions. However, currently, hypnotherapy is not subject to government regulation through the Australian Health Practitioner Regulation Agency (AHPRA).
== See also ==
Abreaction – Psychoanalytical term
Astral projection – Interpretation of out-of-body experiences
Atavistic regression – Reversion to a more primitive mental state under hypnosis
Autogenic training – Relaxation technique
Automatic writing – Claimed psychic ability
Autosuggestion – Psychological technique related to the placebo effect
Confabulation – Recall of fabricated, misinterpreted or distorted memories
Doctor of Clinical Hypnotherapy – Unaccredited degree in hypnotherapy in the United States
False memory – Psychological occurrence
Hypnotherapy in the United Kingdom
Hypnosis – State of increased suggestibility
Hypnosurgery – Practice of using hypnosis for sedation during surgery
Hypnotic Ego-Strengthening Procedure – Hypnotherapeutic procedure
Ideomotor phenomenon – Concept in hypnosis and psychological research
Mind–body interventions – Health and fitness interventions
Nancy School of Hypnosis – French school of psychotherapy from 1866
Polygraph – Pseudoscientific device that attempts to infer lying
Psychoneuroimmunology – Area of study within psychosomatic medicine
Psychosomatic medicine – Interdisciplinary medical field exploring various influences on bodily processes
Psychotherapy – Clinically applied psychology for desired behavior change
Recovered-memory therapy – Scientifically discredited form of psychotherapy
Regression (psychology) – Mental defence mechanism in psychoanalysis
Repressed memory – Theory that memory may be stored in the unconscious mind
Royal Commission on Animal Magnetism – 1784 French scientific bodies' investigations involving systematic controlled trials
Salpêtrière School of Hypnosis – French school of psychotherapy from 1882
Scientific skepticism – Questioning of claims lacking empirical evidence
Source-monitoring error – Type of memory error
Subconscious mind – Part of the mind that is not currently of focal awareness
Suggestibility – Inclination to accept the suggestions of others
The Pregnant Man and Other Cases from a Hypnotherapist's Couch (book) – 2010 book by Deirdre Barrett
The Zoist: A Journal of Cerebral Physiology & Mesmerism, and Their Applications to Human Welfare – Academic journal devoted to pseudoscientific concepts
Unconscious mind – Mental processes not available to introspection
== References ==

Source: Wikipedia/Hypnotherapy
Common factors theory, a theory guiding some research in clinical psychology and counseling psychology, proposes that different approaches and evidence-based practices in psychotherapy and counseling share common factors that account for much of the effectiveness of a psychological treatment. This is in contrast to the view that the effectiveness of psychotherapy and counseling is best explained by specific or unique factors (notably, particular methods or procedures) that are suited to treatment of particular problems.
However, according to one review, "it is widely recognized that the debate between common and unique factors in psychotherapy represents a false dichotomy, and these factors must be integrated to maximize effectiveness." In other words, "therapists must engage in specific forms of therapy for common factors to have a medium through which to operate." Common factors is one route by which psychotherapy researchers have attempted to integrate psychotherapies.
== History ==
Saul Rosenzweig started the conversation on common factors in an article published in 1936 that discussed some psychotherapies of his time. John Dollard and Neal E. Miller's 1950 book Personality and Psychotherapy emphasized that the psychological principles and social conditions of learning are the most important common factors. Sol Garfield (who would later go on to edit many editions of the Handbook of Psychotherapy and Behavior Change with Allen Bergin) included a 10-page discussion of common factors in his 1957 textbook Introductory Clinical Psychology.
In the same year, Carl Rogers published a paper outlining what he considered to be common factors (which he called "necessary and sufficient conditions") of successful therapeutic personality change, emphasizing the therapeutic relationship factors which would become central to the theory of person-centered therapy. He proposed the following conditions necessary for therapeutic change: psychological contact between the therapist and client, incongruence in the client, genuineness in the therapist, unconditional positive regard and empathic understanding from the therapist, and the client's perception of the therapist's unconditional positive regard and empathic understanding.
In 1961, Jerome Frank published Persuasion and Healing, a book entirely devoted to examining the common factors among psychotherapies and related healing approaches. Frank emphasized the importance of the expectation of help (a component of the placebo effect), the therapeutic relationship, a rationale or conceptual scheme that explains the given symptoms and prescribes a given ritual or procedure for resolving them, and the active participation of both patient and therapist in carrying out that ritual or procedure.
After Lester Luborsky and colleagues published a literature review of empirical studies of psychotherapy outcomes in 1975, the idea that all psychotherapies are effective became known as the Dodo bird verdict, referring to a scene from Alice's Adventures in Wonderland quoted by Rosenzweig in his 1936 article; in that scene, after the characters race and everyone wins, the Dodo bird says, "everybody has won, and all must have prizes." Luborsky's research was an attempt (and not the first attempt, nor the last one) to disprove Hans Eysenck's 1952 study on the efficacy of psychotherapy; Eysenck found that psychotherapy generally did not seem to lead to improved patient outcomes. A number of studies after 1975 presented more evidence in support of the general efficacy of psychotherapy, but the question of how common and specific factors could enhance or thwart therapy effectiveness in particular cases continued to fuel theoretical and empirical research over the following decades.
The landmark 1982 book Converging Themes in Psychotherapy gathered a number of chapters by different authors promoting common factors, including an introduction by Marvin R. Goldfried and Wendy Padawer, a reprint of Rosenzweig's 1936 article, and further chapters (some of them reprints) by John Dollard and Neal E. Miller, Franz Alexander, Jerome Frank, Arnold Lazarus, Hans Herrman Strupp, Sol Garfield, John Paul Brady, Judd Marmor, Paul L. Wachtel, Abraham Maslow, Arnold P. Goldstein, Anthony Ryle, and others. The chapter by Goldfried and Padawer distinguished between three levels of intervention in therapy:
theories of change (therapists' theories about how change occurs);
principles or strategies of change;
therapy techniques (interventions that therapists suppose will be effective).
Goldfried and Padawer argued that while therapists may talk about their theories using very different jargon, there is more commonality among skilled therapists at the (intermediate) level of principles or strategies. Goldfried and Padawer's emphasis on principles or strategies of change was an important contribution to common factors theory because they clearly showed how principles or strategies can be considered common factors (they are shared by therapists who may espouse different theories of change) and specific factors (they are manifested in particular ways within different approaches) at the same time. Around the same time, James O. Prochaska and colleagues, who were developing the transtheoretical model of change, proposed ten "processes of change" that categorized "multiple techniques, methods, and interventions traditionally associated with disparate theoretical orientations," and they stated that their processes of change corresponded to Goldfried and Padawer's level of common principles of change.
In 1986, David Orlinsky and Kenneth Howard presented their generic model of psychotherapy, which proposed that five process variables are active in any psychotherapy: the therapeutic contract, therapeutic interventions, the therapeutic bond between therapist and patient, the patient's and therapist's states of self-relatedness, and therapeutic realization.
In 1990, Lisa Grencavage and John C. Norcross reviewed accounts of common factors in 50 publications, with 89 common factors in all, from which Grencavage and Norcross selected the 35 most common factors and grouped them into five areas: client characteristics, therapist qualities, change processes, treatment structure, and therapeutic relationship. In the same year, Larry E. Beutler and colleagues published their systematic treatment selection model, which attempted to integrate common and specific factors into a single model that therapists could use to guide treatment, considering variables of patient dimensions, environments, settings, therapist dimensions, and treatment types. Beutler and colleagues would later describe their approach as "identifying common and differential principles of change".
In 1992, Michael J. Lambert summarized psychotherapy outcome research and grouped the factors of successful therapy into four areas, ordered by hypothesized percent of change in clients as a function of therapeutic factors: first, extratherapeutic change (40%), those factors that are qualities of the client or qualities of his or her environment and that aid in recovery regardless of his or her participation in therapy; second, common factors (30%) that are found in a variety of therapy approaches, such as empathy and the therapeutic relationship; third, expectancy (15%), the portion of improvement that results from the client's expectation of help or belief in the rationale or effectiveness of therapy; fourth, techniques (15%), those factors unique to specific therapies and tailored to treatment of specific problems. Lambert's research later inspired a book on common factors theory in the practice of therapy titled The Heart and Soul of Change.
In the mid-1990s, as managed care in mental health services became more widespread in the United States, more researchers began to investigate the efficacy of psychotherapy in terms of empirically supported treatments (ESTs) for particular problems, emphasizing randomized controlled trials as the gold standard of empirical support for a treatment. In 1995, the American Psychological Association's Division 12 (clinical psychology) formed a task force that developed lists of empirically supported treatments for particular problems such as agoraphobia, blood-injection-injury type phobia, generalized anxiety disorder, obsessive–compulsive disorder, panic disorder, etc. In 2001, Bruce Wampold published The Great Psychotherapy Debate, a book that criticized what he considered to be an overemphasis on ESTs for particular problems, and he called for continued research in common factors theory.
In the 2000s, more research began to be published on common factors in couples therapy and family therapy.
In 2014, a series of ten articles on common factors theory was published in the APA journal Psychotherapy. The articles emphasized the compatibility between ESTs and common factors theory, highlighted the importance of multiple variables in psychotherapy effectiveness, called for more empirical research on common factors (especially client and therapist variables), and argued that individual therapists can do much to improve the quality of therapy by rigorously using feedback measures (during treatment) and outcome measures (after termination of treatment). The article by Stefan G. Hofmann and David H. Barlow, two prominent researchers in cognitive behavioral therapy, pointed out how their recent shift in emphasis from distinct procedures for different diagnoses to a transdiagnostic approach was increasingly similar to common factors theory.
== Models ==
There are many models of common factors in successful psychotherapy process and outcome. Already in 1990, Grencavage and Norcross identified 89 common factors in a literature review, which showed the diversity of models of common factors. To be useful for purposes of psychotherapy practice and training, most models reduce the number of common factors to a handful, typically around five. Frank listed six common factors in 1971 and explained their interaction. Goldfried and Padawer listed five common strategies or principles in 1982: corrective experiences and new behaviors, feedback from the therapist to the client promoting new understanding in the client, expectation that psychotherapy will be helpful, establishment of the desired therapeutic relationship, and ongoing reality testing by the client. Grencavage and Norcross grouped common factors into five areas in 1990. Lambert formulated four groups of therapeutic factors in 1992. Joel Weinberger and Cristina Rasco listed five common factors in 2007 and reviewed the empirical support for each factor: the therapeutic relationship, expectations of treatment effectiveness, confronting or facing the problem (exposure), mastery or control experiences, and patients' attributions of successful outcome to internal or external causes.
Terence Tracy and colleagues modified the common factors of Grencavage and Norcross, and used them to develop a questionnaire which they provided to 16 board certified psychologists and 5 experienced psychotherapy researchers; then they analyzed the responses and published the results in 2003. Their multidimensional scaling analysis represented the results on a two-dimensional graph, with one dimension representing hot processing versus cool processing (roughly, closeness and emotional experience versus technical information and persuasion) and the other dimension representing therapeutic activity. Their cluster analysis represented the results as three clusters: the first related to bond (roughly, therapeutic alliance), the second related to information (roughly, the meanings communicated between therapist and client), and the third related to role (roughly, a logical structure so that clients can make sense of the therapy process).
In addition to these models that incorporate multiple common factors, a number of theorists have proposed and investigated single common factors, common principles, and common mechanisms of change, such as learning. In one example, at least three independent groups have converged on the conclusion that a wide variety of different psychotherapies can be integrated via their common ability to trigger the neurobiological mechanism of memory reconsolidation. For further examples, see § Further reading, below.
== Empirical research ==
While many models of common factors have been proposed, they have not all received the same amount of empirical research. There is general consensus on the importance of a good therapeutic relationship in all forms of psychotherapy and counseling.
A review of common factors research in 2008 suggested that 30% to 70% of the variance in therapy outcome was due to common factors. A summary of research in 2014 suggested that 11.5% of variance in therapy outcome was due to the common factor of goal consensus/collaboration, 9% was due to empathy, 7.5% was due to therapeutic alliance, 6.3% was due to positive regard/affirmation, 5.7% was due to congruence/genuineness, and 5% was due to therapist factors. In contrast, treatment method accounted for roughly 1% of outcome variance.
Alan E. Kazdin has argued that psychotherapy researchers must not only find statistical evidence that certain factors contribute to successful outcomes; they must also be able to formulate evidence-based explanations for how and why those factors contribute to successful outcomes, that is, the mechanisms through which successful psychotherapy leads to change. Common factors theory has been dominated by research on psychotherapy process and outcome variables, and there is a need for further work explaining the mechanisms of psychotherapy common factors in terms of emerging theoretical and empirical research in the neurosciences and social sciences, just as earlier works (such as Dollard and Miller's Personality and Psychotherapy or Frank's Persuasion and Healing) explained psychotherapy common factors in terms of the sciences of their time.
One frontier for future research on common factors is automated computational analysis of clinical big data.
== Criticisms ==
There are several criticisms of common factors theory, for example:
that common factors theory dismisses the need for specific therapeutic techniques or procedures,
that common factors are nothing more than a good therapeutic relationship, and
that common factors theory is not scientific.
Some common factors theorists have argued against these criticisms. They state that:
the criticisms are based on a limited knowledge of the common factors literature,
a thorough review of the literature shows that a coherent treatment procedure is a crucial medium for the common factors to operate,
most models of common factors define interactions between multiple variables (including but not limited to therapeutic relationship variables), and
some models of common factors provide evidence-based explanations for the mechanisms of the proposed common factors.
== Notes ==
== References ==
== Further reading ==
=== Sources emphasizing learning as a common factor ===
=== Sources emphasizing other common factors ===
=== Sources emphasizing specific or unique factors === | Wikipedia/Common_factors_theory |
The biomedical model of medicine is the medical model used in most Western healthcare settings, and is built on the perception that a state of health is defined purely by the absence of illness. The biomedical model contrasts with sociological theories of care.
== History ==
Forms of the biomedical model have existed since before 400 BC, with Hippocrates advocating for physical etiologies of illness. Despite this, the model did not become the dominant view of health until the nineteenth century, during the Scientific Revolution.
== Criticism ==
Criticism of the model generally centres on its assumption that health is independent of the social environment in which it occurs and can be defined one way across all populations. The model is also criticised for its view of the health system as socially and politically neutral, rather than as a source of social and political power or as embedded in the structure of society.
== Alternative models ==
The biopsychosocial model is offered as an alternative.
== Features ==
In their book Society, Culture and Health: an Introduction to Sociology for Nurses, health sociologists Karen Willis and Shandell Elmer outline the following 'features' of the biomedical model's approach to illness and health:
doctrine of specific aetiology: that all illness and disease is attributable to a specific, physiological dysfunction
body as a machine: that the body is formed of machinery to be fixed by medical doctors
mind-body distinction: that the mind and body are separate entities that do not interrelate
reductionism
narrow definition of health: that a state of health is always the absence of a definable illness
individualistic: that sources of ill health are always in the individual, and not in the environment in which ill health occurs
treatment versus prevention: that the focus of health is on diagnosis and treatment of illness, not prevention
treatment imperative: that medicine can 'fix the broken machinery' of ill-health
neutral scientific process: that health care systems and agents of health are socially and culturally detached
== See also ==
Biopsychosocial model
Medical model
Medical model of disability
Social model of disability
Trauma model of mental disorders
== References == | Wikipedia/Biomedical_model |
Performance science is the multidisciplinary study of human performance. It draws together methodologies across numerous scientific disciplines, including those of biomechanics, economics, physiology, psychology, and sociology, to understand the fundamental skills, mechanisms, and outcomes of performance activities and experiences. It carries implications for various domains of skilled human activity, often performed under extreme stress and/or under the scrutiny of audiences or evaluators. These include performances across the arts, sport, education, and business, particularly those occupations involving the delivery of highly trained skills such as in surgery and management.
== Centers of research and teaching ==
USC Performance Science Institute, University of Southern California
711th Human Performance Wing, Wright-Patterson Air Force Base
Centre for Human Performance Sciences, Stellenbosch University
Centre for Performance Science, a partnership of the Royal College of Music and Imperial College London
Human Performance Science Research Group, University of Edinburgh
Performance and Science Working Group, Theatre and Performance Research Association
Performance Science Unit, Sports Institute for Northern Ireland
== See also ==
Environmental psychology
Industrial and organizational psychology
Military psychology
Music psychology
Sport psychology
== References ==
== External links ==
International Symposium on Performance Science
Frontiers in Psychology: Performance Science (journal) | Wikipedia/Performance_science |
Functional analytic psychotherapy (FAP) is a psychotherapeutic approach based on clinical behavior analysis (CBA) that focuses on the therapeutic relationship as a means to maximize client change. Specifically, FAP suggests that in-session contingent responding to client target behaviors leads to significant therapeutic improvements.
FAP was first conceptualized in the 1980s by psychologists Robert Kohlenberg and Mavis Tsai who, after noticing a clinically significant association between client outcomes and the quality of the therapeutic relationship, set out to develop a theoretical model of behavioral psychotherapy based on these concepts. Behavioral principles (e.g., reinforcement, generalization) form the basis of FAP. (See § The five rules below.)
FAP is an idiographic (as opposed to nomothetic) approach to psychotherapy. This means that FAP therapists focus on the function of a client's behavior instead of the form. The aim is to change a broad class of behaviors that might look different on the surface but all serve the same function. It is idiographic in that the client and therapist work together to form a unique clinical formulation of the client's therapeutic goals, rather than one therapeutic target for every client who enters therapy.
== Basics ==
FAP posits that client behaviors that occur in their out-of-session interpersonal relationships (i.e. in the "real world") will, if clients are given a therapeutic relationship of sufficiently high quality, occur in the therapy session as well. Based on these in-session behaviors, FAP therapists, in collaboration with their client, develop a case formulation that includes classes of behaviors (based on their function not their form) that the client wishes to increase and decrease.
In-session occurrence of a client's problematic behavior is called clinically relevant behavior 1 (CRB1). In-session occurrence of improvements is called clinically relevant behavior 2 (CRB2). The goal of FAP therapy is to decrease the frequency of CRB1s and increase the frequency of CRB2s.
The FAP therapist evokes (i.e. sets the context for) CRB1s and in response gradually shapes CRB2s.
=== The five rules ===
"The five rules" operationalize the FAP therapist's behavior with respect to this goal. It is important to note that the five rules are not rules in the traditional sense of the word, but instead a set of guidelines for the FAP therapist.
Watch for CRBs – Therapists focus their attention on the occurrence of CRBs that are in-session problems (CRB1s) and improvements (CRB2s).
Evoke CRBs – Therapists set a context that evokes the client's CRBs.
Reinforce CRB2s naturally – Therapists reinforce the occurrence of CRB2s (in-session improvements), increasing the probability that these behaviors will occur more frequently.
Observe therapist impact in relation to client CRBs – Therapists assess the degree to which they actually reinforced behavioral improvements by noting the client's subsequent behavior after Rule 3. This is similar to the behavior-analytic concept of performing a functional analysis.
Provide functional interpretations and generalize – Therapists work with the client to generalize in-session behavioral improvements to the client's out-of-session relationships. This can include, but is not limited to, providing homework assignments.
=== The ACL model ===
Researchers at the Center for the Science of Social Connection at the University of Washington are developing a model of social connection that they believe is relevant to FAP. This model – called the ACL model – delineates behaviors relevant to social connection based on decades of scientific research.
Awareness (A) behaviors include paying attention to your own and the other's needs and values within an interpersonal relationship.
Courage (C) behaviors include experiencing emotion in the presence of another person, asking for what you need, and sharing deep, vulnerable experiences with another person in the service of improving the relationship.
Love (L) behaviors involve responding to another's courage behaviors with attunement to what that person needs in the moment. These include providing safety and acceptance in response to a client's vulnerability.
FAP has the potential to target awareness, courage, and love behaviors as they occur in session as described by the five rules above. More research is needed to confirm the utility of the ACL model.
== Research support ==
Radical behaviorism and the field of clinical behavior analysis have strong scientific support. Additionally, researchers have conducted a number of case studies, component process analyses, a study with non-randomized design on FAP-enhanced cognitive therapy for depression, and a randomized controlled trial on FAP-enhanced acceptance and commitment therapy for smoking cessation.
== Third generation behavior therapy ==
FAP belongs to a group of therapies referred to as third-generation behavior therapies (or third-wave behavior therapies) that includes dialectical behavior therapy (DBT), acceptance and commitment therapy (ACT), behavioral activation (BA), and integrative behavioral couples therapy (IBCT).
== Criticism ==
FAP has been criticized for "being ahead of the data", i.e. not yet having enough empirical support to justify its widespread use. Challenges encountered by FAP researchers are widely discussed.
The use of the ACL model has also been criticized on the grounds that it detracts from the idiographic nature of FAP.
== Professional organizations ==
Association for Contextual Behavioral Science (ACBS) – Founded in 2005 (incorporated in 2006), the Association for Contextual Behavioral Science (ACBS) is dedicated to the advancement of functional contextual cognitive and behavioral science and practice so as to alleviate human suffering and advance human well-being.
The Association for Behavior Analysis International (ABAI) has a special interest group for practitioner issues, behavioral counseling, and clinical behavior analysis. ABAI has larger special interest groups for behavioral medicine. ABAI serves as the core intellectual home for behavior analysts.
The Association for Behavioral and Cognitive Therapies (ABCT) also has an interest group in behavior analysis, which focuses on clinical behavior analysis. In addition, the Association for Behavioral and Cognitive Therapies has a special interest group in addictions.
Doctoral level behavior analysts who are psychologists belong to the American Psychological Association's Division 25 (behavior analysis). APA offers a diplomate in behavioral psychology.
The World Association for Behavior Analysis offers a certification for clinical behavior analysis which covers functional analytic psychotherapy.
== References ==
== External links ==
Kohlenberg & Tsai's FAP website
Center for the Science of Social Connection | Wikipedia/Functional_analytic_psychotherapy |
Narrative therapy (or narrative practice) is a form of psychotherapy that seeks to help patients identify their values and the skills associated with them. It provides the patient with knowledge of their ability to embody these values so they can effectively confront current and future problems. The therapist seeks to help the patient co-author a new narrative about themselves by investigating the history of those values. Narrative therapy is a social justice approach to therapeutic conversations, seeking to challenge dominant discourses that shape people's lives in destructive ways. While narrative work is typically located within the field of family therapy, many authors and practitioners report using these ideas and practices in community work, schools and higher education. Narrative therapy has come to be associated with collaborative as well as person-centered therapy.
== History ==
Narrative therapy was developed during the 1970s and 1980s, largely by Australian social worker Michael White and David Epston of New Zealand, and was influenced by philosophers, psychologists, and sociologists such as Michel Foucault, Jerome Bruner, and Lev Semyonovich Vygotsky.
== Conversation maps ==
=== Re-authoring identity ===
The narrative therapist focuses upon assisting people to create stories about themselves, about their identities, that are helpful to them. This work of "re-authoring identity" helps people identify their values, along with the skills and knowledge needed to live out those values, by way of the therapist's skilled use of listening and questioning. Through the process of identifying the history of values in people's lives, the therapist and client are able to co-author a new story about the person.
The story people tell about themselves and that is told about them is important in this approach, which asserts that the story of a person's identity may determine what they think is possible for themselves. The narrative process allows people to identify what values are important to them and how they might use their own skills and knowledge to live these values.
Narrative therapy focuses on "unique outcomes" (a term of Erving Goffman), or moments that contradict a client's personal "problem-saturated" narrative. Unique outcomes work by revealing a person's strengths, agency, and emotional vitality that are hidden behind a person's personal problem-focused narratives. Unique outcomes can help to reveal entryways to more positive alternative narratives that clients are encouraged to adapt.
=== Externalizing conversations ===
The concept of identity is important in narrative therapy. The approach aims not to conflate people's identities with the problems they may face or the mistakes they have made. Rather, the approach seeks to avoid modernist, essentialist notions of the self that lead people to believe there is a biologically determined "true self" or "true nature". Instead, identity, seen as primarily social, can be changed according to the choices people make.
To separate people's identities from the problems they face, narrative therapy employs externalizing conversations. The process of externalization allows people to consider their relationships with problems. Narrative therapy allows people to become separated from their "internalized" understandings or ideas of a problem by looking at the problem from a social context, engaging in the construction and performance of preferred identities, and externalizing a person's strengths or positive attributes.
An externalizing emphasis involves naming a problem so that a person can assess the problem's effects in their life, can analyze how the problem operates or works in their life, and in the end can choose their relationship to the problem.
=== "Statement of Position Map" ===
In a narrative approach, the therapist aims to adopt a collaborative therapeutic posture rather than imposing ideas on people by giving them advice. Michael White developed a conversation map called a "Statement of Position Map" designed to elicit the client's own evaluation of the problems and developments in their lives. Both the therapist and the client are seen as having valuable information relevant to the process and the content of the therapeutic conversation. By adopting a posture of curiosity and collaboration, the therapist aims to give the implicit message to people that they already have knowledge and skills to solve the problems they face. When people develop solutions to their own problems on the basis of their own values, they may become much more committed to implementing these solutions.
=== Re-membering practice ===
Narrative therapy holds that identities are social achievements; the practice of re-membering draws closer those who support a person's preferred story about themselves and disengages those who do not.
=== Absent but implicit ===
The concept of "absent but implicit" refers to the discernment people must make between their expressed experiences and other experiences that they had in the past and have already assigned meaning to. The concept is used to discover stories of oneself that lie underneath the problem narrative being provided. Inspired by the work of Jacques Derrida, Michael White became curious about the values implicit in people's pain, their sense of failure, and their actions. Often, people only feel pain or failure when their values are abridged, or when their relationships and lives are not as they should be. Furthermore, there are often stalled initiatives that people take in life that are also guided by implicit values.
=== Outsider witnesses map ===
In this particular narrative practice, people meet, listen, and respond to the preferred accounts of others' lives. This is referred to as "outsider witness practice" in narrative therapy. Often the witnesses are friends of the consulting person or past clients of the therapist who have their own knowledge and experience of the problem at hand. During the first interview, between therapist and consulting person, the outsider listens without comment.
Then the therapist interviews them with the instructions not to critique or evaluate or make a proclamation about what they have just heard, but instead to simply say what phrase or image stood out for them, followed by any resonances between their life struggles and those just witnessed. Lastly, the outsider is asked in what ways they may feel a shift in how they experience themselves from when they first entered the room.
Next, in similar fashion, the therapist turns to the consulting person, who has been listening all the while, and interviews them about what images or phrases stood out in the conversation just heard and what resonances have struck a chord within them.
In the end, an outsider witness conversation is often rewarding for witnesses. But for the consulting person the outcomes are remarkable: they learn they are not the only one with this problem, and they acquire new images and knowledge about it and their chosen alternate direction in life. The main aim of narrative therapy is to help clients to create new, positive stories that they can use to re-author their lives. Narrative therapy helps to separate and externalize people's problems so they can become empowered and retake control of their lives in positive, meaningful ways.
== Therapeutic documents ==
Narrative therapy embodies a strong appreciation for the creation and use of documents, as when a person and a counsellor co-author "A Graduation from the Blues Certificate", for example. In some instances, case notes are created collaboratively with clients to provide documentation as well as markers of progress.
== Social-political therapeutic approach ==
Narrative therapy maintains a strong awareness of the impact of power relations in therapeutic conversations, with a commitment to checking back with the client about the effects of therapeutic styles in order to mitigate the possible negative effects of invisible assumptions or preferences held by the therapist. There is also an awareness of how social narratives such as femininity and masculinity can be corrupted and negatively influence people's identities.
=== Eating disorders ===
Narrative therapy has made numerous contributions to the field of eating disorders. David Epston, Stephen Madigan and Catrina Brown have made the most significant contribution to bringing a depathologizing approach to this issue.
=== Men and domestic violence ===
Narrative therapy has also been applied to work with men who abuse their female partners. Alan Jenkins and Tod Augusta-Scott have been the most prolific in this field. They integrated a social-political analysis of the violence, while at the same time engaging men in a respectful, collaborative manner.
=== Community work ===
Narrative therapy has also been used in a variety of community settings. In particular, an exercise called "Tree of Life" has been used to mobilize communities to act according to their own values.
== Criticisms ==
There have been several formal criticisms of narrative therapy over what are viewed as its theoretical and methodological inconsistencies, among various other concerns.
Narrative therapy has been criticised as holding to a social constructionist belief that there are no absolute truths, only socially sanctioned points of view, and that narrative therapists simply privilege their clients' concerns over and above "dominating" cultural narratives.
Several critics have posed concerns that narrative therapy has made gurus of its leaders, particularly in the light that its leading proponents tend to be overly harsh about most other kinds of therapy.
Narrative therapy is also criticized for the lack of clinical and empirical studies to validate its many claims. Etchison and Kleist (2000) stated that narrative therapy's focus on qualitative outcomes is not congruent with the larger body of quantitative research and findings that the majority of respected empirical studies employ today. This has led to a lack of research material that can support its claims of efficacy.
== See also ==
== References ==
== External links == | Wikipedia/Narrative_therapy |
The Association for the Advancement of Psychotherapy (AAP) is a professional organization created to advance methods of psychotherapy among members of the medical profession and to familiarize members with progress in the field. Organized in 1939, Association for the Advancement of Psychotherapy publishes the quarterly American Journal of Psychotherapy.
Their 1954 annual meeting was the site of an emerging debate between Harry Benjamin and AAP co-founder Emil Gutheil over the causes and treatment of transsexualism, with Gutheil advocating an environmental cause and therapy to dissuade pursuit of medical options.
In 2001 the organization subsumed the Journal of Psychotherapy Practice and Research into its journal. The organization is currently based at the Albert Einstein College of Medicine in New York, New York.
== References ==
== External links ==
American Journal of Psychotherapy | Wikipedia/Association_for_the_Advancement_of_Psychotherapy |
Brief psychotherapy (also brief therapy, planned short-term therapy) is an umbrella term for a variety of approaches to short-term, solution-oriented psychotherapy.
== Overview ==
Brief therapy differs from other schools of therapy in that it emphasizes (1) a focus on a specific problem and (2) direct intervention. In brief therapy, the therapist takes responsibility for working more pro-actively with the client in order to treat clinical and subjective conditions faster. It also emphasizes precise observation, utilization of natural resources, and a temporary suspension of disbelief to consider new perspectives and multiple viewpoints.
Rather than the formal analysis of historical causes of distress, the primary approach of brief therapy is to help the client to view the present from a wider context and to utilize more functional understandings (not necessarily at a conscious level). By becoming aware of these new understandings, successful clients will de facto undergo spontaneous and generative change.
Brief therapy is often highly strategic, exploratory, and solution-based rather than problem-oriented. It is less concerned with how a problem arose than with the current factors sustaining it and preventing change. Brief therapists do not adhere to one "correct" approach, but rather accept that, since there are many possible paths, any of them, alone or in combination, may or may not turn out to be ultimately beneficial.
== Founding proponents ==
Milton Erickson was a practitioner of brief therapy, using clinical hypnosis as his primary tool. To a great extent, he developed this approach himself. It was popularized by Jay Haley in the book Uncommon Therapy: The Psychiatric Techniques of Milton H. Erickson, M.D.
The analogy Erickson uses is that of a person who wants to change the course of a river. If he opposes the river by trying to block it, the river will merely go over and around him. But if he accepts the force of the river and diverts it in a new direction, the force of the river will cut a new channel.
Richard Bandler, the co-founder of neuro-linguistic programming, is another firm proponent of brief therapy. After many years of studying Erickson's therapeutic work, he wrote:
It's easier to cure a phobia in ten minutes than in five years ... I didn't realize that the speed with which you do things makes them last ... I taught people the phobia cure. They'd do part of it one week, part of it the next, and part of it the week after. Then they'd come to me and say "It doesn't work!" If, however, you do it in five minutes, and repeat it till it happens very fast, the brain understands. That's part of how the brain learns ... I discovered that the human mind does not learn slowly. It learns quickly. I didn't know that.
== Notable therapists ==
Nicholas Cummings (brief therapy, focused therapy)
Milton H. Erickson (hypnotherapy, strategic therapy, brief therapy)
Giorgio Nardone (brief therapy, strategic therapy)
Steve de Shazer (solution focused brief therapy)
Paul Watzlawick (brief therapy, systems theory)
== See also ==
List of counseling topics
Mental Research Institute – one of the founding clinics of brief therapy and home of a number of the notable therapists mentioned above
Solution focused brief therapy
== References ==
== External links ==
Brief+Psychotherapy at the U.S. National Library of Medicine Medical Subject Headings (MeSH) | Wikipedia/Brief_psychotherapy |
Homework in psychotherapy is sometimes assigned to patients as part of their treatment. In this context, homework assignments are introduced to practice skills taught in therapy, encourage patients to apply the skills they learned in therapy to real life situations, and to improve on specific problems encountered in treatment. For example, a patient with deficits in social skills may learn and rehearse proper social skills in one treatment session, then be asked to complete homework assignments before the next session that apply those newly learned skills (e.g., going to a social engagement or greeting five people each day).
Homework is most often used in cognitive behavioral therapy (CBT) for the treatment of mood and anxiety disorders, although other theoretical frameworks may also incorporate homework. Some of the types of homework used in CBT include thought records and behavioral experiments. Patients using thought records are instructed to write down negative cognitions on the thought record form and weigh the evidence both for and against the negative thoughts, with the goal being to come up with new, balanced thoughts in the process. Behavioral experiments are used as homework to help patients test out thoughts and beliefs directly. Studies have shown that homework completion and accuracy predict favorable outcomes in psychotherapy and may help patients stay in remission. However, some therapists are concerned that assigning homework makes therapy too formal and reduces the impact of the individual sessions.
== Approaches ==
Most of the literature published on homework in psychotherapy to date focuses on homework use during CBT, which involves changing patients' thoughts and behaviors to reduce the symptoms of the mental disorders from which they are suffering. A variety of homework assignments exist in CBT. These tasks can range from scheduling a daily exercise routine to practicing progressive muscle relaxation five times a day to monitoring and recording one's negative automatic thoughts throughout the day. In practice, these homework assignments are meant to help patients lift their mood, practice and master skills they developed in therapy, and progressively improve between treatment sessions. Research has found that homework compliance positively predicts successful outcomes in therapy, and therapists are now looking for better ways to implement homework, so that more individuals may receive its benefits.
CBT is not the only type of therapy to incorporate homework. Although each therapist makes his or her own choices regarding homework assignments, some of the other therapies that may assign homework include exposure therapy, psychodynamic therapy, and problem solving therapy. Homework can also be assigned even if therapists are not physically present with the patients being treated. Such cases include therapy delivered over the phone, over video, or over the Internet. Treatment of some disorders, such as major depression, may also be done without therapists at all. Although the efficacy of this self-help-like treatment is still under scrutiny, preliminary data suggest that completion of homework is one factor predicting positive treatment outcomes for patients who receive treatment over the Internet.
== Thought records ==
Thought records (or thought diaries) are among the most commonly used cognitive assignments in CBT. They allow patients in various situations to closely examine "hot thoughts" and cognitive distortions and, after having done so, arrive at a newly synthesized alternative thought that more closely fits the situation. Many thought records accomplish this task by having patients list out in order: the situation they are in; the emotions they are feeling and with what intensity those emotions are felt; what thoughts they are having and what the "hot thought" is; evidence for the hot thought; evidence against the "hot thought"; balanced alternative thoughts; and the emotions they feel after having completed the thought record and the intensity of those emotions.
=== Example ===
Jane has social anxiety disorder and was just told at work that she would be giving a presentation in front of an audience of 200 people the following week. This produces a large amount of anxiety for Jane, and she starts filling out a thought record to try to calm herself down. To begin, she fills in the column about the situation she is in with: "I was told that I am going to give a speech in front of a large audience next week." In the next column, Jane writes what emotions she is feeling and with what intensity she is feeling them: "Anxious – 100. Afraid – 90. Sad – 40." She then starts identifying some thoughts that immediately ran through her head when she heard that she would be giving the presentation: "Oh no, I'm going to mess up and choke. Everyone will laugh at me. My boss will fire me. I will never be able to hold a job at this rate. I'm worthless and a failure." Jane identifies "I'm worthless and a failure" as the hot thought, the thought that invokes the greatest amount of negative emotion in her situation.
After that, Jane starts writing in the next column the pieces of evidence that support the hot thought: "I've done terribly on presentations in the past. I remember one time in high school when I had to give a speech in front of my class and I ended up crying in front of everyone instead. I got a C on that speech and barely scraped by in the class. My high school friends and I don't talk as much anymore. They must be starting to get sick of me too. My co-workers don't try to talk to me either." Jane jots down in the next column pieces of evidence against her hot thought: "I think my boss might have meant well when he gave me this presentation assignment. I did one of these presentations on a smaller scale last week and I think I did just fine. Almost everyone who was there even came up to me and told me so afterwards. I think that those audience members do care about me and would be willing to support me if I asked. Also, I'm filling out this thought record just like my therapist told me to. I think that's what she would have wanted from me."
In the next column, Jane writes down her alternative thought: "The presentation ahead may be scary and making me feel anxious, but I think I can handle it as long as I know that there are people who support me." After that, Jane writes the emotions she is now feeling and their intensities: "Anxious – 50. Afraid – 40. Sad – 10. Relieved – 50."
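The column structure that this example walks through can also be expressed as a simple data record. The following is a minimal sketch in Python; the class name, field names, and defaults are illustrative assumptions rather than part of any standard CBT worksheet.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical, minimal representation of a CBT thought record.
# Field names are illustrative; real worksheets vary by clinician and protocol.
@dataclass
class ThoughtRecord:
    situation: str                        # what happened
    emotions_before: Dict[str, int]       # emotion -> intensity (0-100)
    automatic_thoughts: List[str]         # thoughts that ran through the mind
    hot_thought: str                      # the thought carrying the most negative emotion
    evidence_for: List[str]               # evidence supporting the hot thought
    evidence_against: List[str]           # evidence against the hot thought
    balanced_thought: str = ""            # newly synthesized alternative thought
    emotions_after: Dict[str, int] = field(default_factory=dict)

# Jane's entries from the example above, captured as one record.
record = ThoughtRecord(
    situation="Told I will give a presentation to 200 people next week",
    emotions_before={"anxious": 100, "afraid": 90, "sad": 40},
    automatic_thoughts=["I'm going to mess up and choke", "Everyone will laugh at me"],
    hot_thought="I'm worthless and a failure",
    evidence_for=["I've done terribly on presentations in the past"],
    evidence_against=["A smaller presentation last week went well"],
    balanced_thought="The presentation is scary, but I can handle it with support",
    emotions_after={"anxious": 50, "afraid": 40, "sad": 10, "relieved": 50},
)
print(record.balanced_thought)
```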
=== Efficacy ===
Both the quality and quantity of thought records completed during therapy have been found to be predictive of treatment outcomes for patients with depression and/or an anxiety disorder. Furthermore, Rees, McEvoy, & Nathan (2005) found that accuracy ratings of patients' thought records mid-treatment were positively correlated with post-treatment outcomes, and that doing homework in CBT was overall preferable to not doing homework in CBT. Completing thought records accurately may also be indicative of overall skill gain in treatment; Neimeyer and Feixas (1990) found that patients with depression who completed thought records accurately were less likely to relapse six months after treatment termination. The researchers hypothesized that this was because the patients who completed thought records accurately had acquired the skills taught in CBT, and that these skills served as valuable coping strategies when the patients were faced with future stressors and needed to act as their own therapists.
== Behavioral experiments ==
Behavioral experiments are collaborative endeavors in which therapists and patients work together to identify a potentially negative or harmful belief, then to either confirm or disprove it by designing an experiment that tests the belief. Like thought records, they are most often used in CBT.
=== Example ===
Patients with panic disorder tend to interpret normal bodily sensations as signs of impending catastrophe. An individual with panic disorder may then believe that hyperventilation is a sign of an upcoming heart attack. A therapist who identifies this maladaptive thought can then work with the patient to test the belief with a behavioral experiment. To begin, the therapist and the patient would agree on a thought to test. In this case, it might be something like, "When I start hyperventilating, I will have a heart attack."
Then, the therapist may start giving suggestions on how to test the belief. She may suggest, "Why don't you try hyperventilating? If you show signs of having a heart attack, I have training in CPR and I'll be able to help you while waiting for the authorities." After some initial apprehension, the patient may agree with the experiment and start breathing in a hyperventilating pattern while the therapist watches. Since the patient with panic disorder most likely will not have a heart attack while hyperventilating, he will be less likely to believe in the original thought, even though he may have been scared of testing the belief at first.
=== Efficacy ===
Relative to thought records, behavioral experiments are thought to be better at changing an individual's beliefs and behaviors. To test this hypothesis, researchers conducted an experiment comparing the degree of belief and behavioral change in participants who were given either a thought record or a behavioral experiment intervention. Specifically, this study tested participants who endorsed the commonly held belief, "If I don't wash my hands after going to the restroom, I'll get sick." Participants in the thought record condition were given a "normal" thought record not unlike the one described in the "Thought Record" section of this article and asked to come up with evidence for and against the following belief: "Not washing your hands after going to the toilet will make you ill." After this, they were asked to reflect on their own experiences of washing or not washing their hands after going to the toilet and to come up with a balanced alternative belief.
In the behavioral experiment condition, participants worked with the experimenter to come up with a study to test the validity of the same belief used in the thought record condition. For example, one study could involve having the participant void without washing her hands afterwards to see if she would become ill. The participant was encouraged to concretely define how she would tell whether she became ill or not (e.g., check for fever, coughing, aches, or other common symptoms of illness) and to test her belief as thoroughly as possible (e.g., if the participant believed she was more likely to get ill after touching the toilet seat and not washing her hands, she was encouraged to test this hypothesis as well).
The researchers found that, compared to a no-treatment control, both thought records and behavioral experiments were effective in reducing the belief that not washing one's hands after going to the toilet would make oneself ill. However, behavioral experiments were found to be able to change the individuals' beliefs immediately following the intervention, while thought records demonstrated this ability to change belief only at follow-up one week after the intervention. On the other hand, the researchers found that neither thought records nor behavioral experiments were effective at reducing how often individuals actually washed their hands after using the toilet, even if they no longer believed that they would become ill for not washing their hands. Since the sample being studied was drawn from a normal population (as opposed to the population of individuals seeking treatment for psychological disorders), this lack of an effect on behavior may be due to the possibility that the people being studied were not under any motivation to actually change their behavior.
== Problems and uncertainties ==
Homework is generally associated with improved patient outcomes, but it is still uncertain what other factors may moderate or mediate the effects that homework has on how much patients improve. That is, some researchers have hypothesized that patients who are more motivated to complete homework are also more likely to improve; other researchers have suggested that only individuals with less severe psychopathologies are even capable of completing homework, so it would be effective only for a subset of individuals. To test these possibilities, Burns and Spengler (2000) used structural equation modeling to estimate the causal relations between homework compliance and depressive symptomatology before and after psychotherapy. These researchers found that "the data were consistent with the hypothesis that HW compliance had a causal effect on changes in depression, and the magnitude of this effect was large" (p. 46). Still, there may exist factors that improve homework compliance during therapy, such as general therapist competency and therapists' reviewing homework completed since the previous session.
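To make the statistical question concrete, the sketch below simulates data and regresses symptom improvement on homework compliance while adjusting for baseline severity. It is only a simplified stand-in for the structural equation modeling described above; the variable names, effect sizes, and data are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200

# Simulated (not real) patient data.
compliance = rng.uniform(0.0, 1.0, n)      # proportion of assigned homework completed
baseline = rng.normal(25, 6, n)            # pre-treatment depression score
# Assumed effect, for the simulation only: higher compliance yields larger improvement.
post = baseline - 10 * compliance + rng.normal(0, 4, n)

improvement = baseline - post
X = sm.add_constant(np.column_stack([compliance, baseline]))
fit = sm.OLS(improvement, X).fit()

# The coefficient on compliance estimates its association with improvement,
# holding baseline severity constant (here it should recover roughly 10).
print(fit.params)
```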
The types of homework used in psychotherapy are not limited to thought records and behavioral experiments, which tend to be relatively structured in their implementation. In fact, even though researchers have found that psychotherapy with homework is generally more effective than psychotherapy without homework, there have not been many efforts to research if specific types of homework are better at effecting positive treatment outcomes than others, or if certain environments help promote the positive effects of homework. For example, Helbig-Lang and colleagues found that, in an environment where systematic homework assignment procedures were rare but where overall homework compliance remained high, homework compliance was not positively related to treatment outcomes. Another group of researchers looked at patients with depression who were in remission and undergoing maintenance therapy and found that homework compliance did not correlate with treatment outcomes in this sample, either. More research can help elucidate the relations among the types of homework used in psychotherapy, the environments in which they are incorporated, and treatment outcomes for patients with the various disorders for which the homework is being assigned.
== Future directions ==
Both clinicians and patients encounter difficulties in incorporating and complying with homework procedures throughout a treatment. Factors that have been found to be associated with homework compliance during treatment include having the therapist set concrete goals for completing the homework and involving the patient in discussions surrounding the assigned homework. If homework compliance is as important to treatment outcomes as most research suggests, however, then there is room for improvement, and future studies could focus on how to improve compliance more effectively.
Like the psychotherapies in which they are incorporated, homework may not be effective at helping all people with all different kinds of psychological disorders. It is thus important to research for which disorders and in which general situations homework would enhance a therapy. This would ostensibly help patients being treated for psychological disorders receive more individualized care and support, and hopefully improve overall treatment outcomes for all disorders.
An example of a specific situation in which homework may be helpful is the mitigation of safety-seeking behaviors with behavioral experiments. Safety seeking behaviors are undertaken by individuals to prevent anticipated future catastrophes, but may end up being more harmful for these individuals in the long run. For example, a patient with panic disorder may avoid exercising because he believes that breathing heavily will make him have a panic attack. Because of the apparently preventative function of safety seeking behaviors, people who carry out these behaviors are unlikely to test their actual effectiveness in preventing catastrophes. So, designing behavioral experiments in therapy to test these behaviors could potentially be a helpful means for reducing their occurrence.
== See also ==
Cognitive restructuring
== References == | Wikipedia/Homework_in_psychotherapy |
The hedonic music consumption model was created by music researchers Kathleen Lacher and Richard Mizerski in 1994. Their goal was to use this model to examine the responses that listening to rock music creates, and to find out whether these responses influenced the listener's intention to later purchase the music. The article begins with a discussion of why the issue of music consumption is important. Music is then explored as an aesthetic product, prior to a discussion of what hedonic consumption is, as well as its origins, and concludes with an in-depth look at the model itself.
== Music consumption ==
Music is consumed in a variety of ways, through the radio, television, and internet, as well as through concerts and performances.
North and Hargreaves have suggested "record buying is perhaps the ultimate behavioural measure of musical preference, involving the purchaser's time, effort, and money" (p. 282). The study of music purchase and consumption was quite limited as of the early 1990s, as only a few academic studies had investigated this topic at that point in time.
The impact of illegal music downloads and file sharing has again brought the issue of music purchases to the forefront. Much popular press and current academic work is being devoted to this topic. The latest report by the International Federation of the Phonographic Industry (IFPI) mentions four academic studies in particular (e.g., Zenter, 2003; Liebowitz, 2006; Michael, 2006; Rob & Waldfogel, 2006) that have examined file-sharing as having a negative impact on the record industry. Not all academics believe this to be true, however, as the Oberholzer & Strumpf (2004) study brought a different viewpoint to this topic in an article published in the prestigious Journal of Political Economy. These Harvard Business School professors received considerable media attention with the study's conclusion that "the empirical evidence on sales displacement is mixed…the papers using actual file-sharing data, suggest that piracy and music sales are largely unrelated" (pp. 24–25).
== Music and hedonic consumption ==
Hedonic consumption was first introduced as an alternative to the traditional consumer behavior model by Hirschman and Holbrook in the early 1980s. Conventional consumer research has traditionally used the black box approach and has historically analyzed the consumption activities of product categories such as packaged goods and major consumer durables. Hedonic consumption focuses on products such as the arts, music, and cultural events such as rock concerts, fashion shows and films. These "experience" types of products tend to involve individual preferences that can generate certain emotions, feelings and behaviors.
Music is an aesthetic product as it often provides an emotional or spiritually moving experience specific to an individual. Holbrook also states that music is appreciated primarily for its internal essence as opposed to being viewed strictly as an objective product. However, consumer research literature focusing on these types of products often uses the terms "aesthetic" and "hedonic" almost interchangeably. Charters (2006) points out that "hedonic consumption is essentially about pleasure" (p. 240), and that pleasure is but one aspect of the overall aesthetic experience. He also notes that "popular culture works may have 'layers of meaning' for consumers", as they may carry symbolic significance or emotional meaning of some kind for people. From a retailing standpoint, aesthetic products are characterized as having a wide range of product offerings in the marketplace.
As Hirschman and Holbrook point out, hedonic consumption expands upon the traditional definition of consumer behavior through its inclusion of "the multi-sensory, fantasy and emotive aspects of one's experience with products…including tastes, sounds, scents, tactile impressions and visual images" (p. 92). These types of experience-oriented products often involve "fun, amusement, fantasy, arousal, sensory stimulation, and enjoyment" (p. 37). In addition, hedonic consumption often uses ethnic background, social class and gender to help determine the different consumer emotions and fantasies around a product.
== The origins of hedonic consumption ==
The study of hedonic consumption as an academic field began in the late 1970s. It originated out of different behavioral science fields, including sociology, philosophy, psycholinguistics, and psychology. Hirschman and Holbrook considered key contributions to come from two fields of prior academic research. The first was the motivation research of the 1950s, "which focused on the emotional aspects of products and fantasies that the products could arouse and/or fulfill" (p. 93). Ernest Dichter was a key figure in this field of research, which was popular from the 1950s to the 1970s. One of the major shortcomings of the early motivation research was the fact that many of its clinical studies lacked rigor and validity.
Hedonic consumption also owes a big debt to the academic field of product symbolism research. One of the important contributors to this academic field includes Sidney J. Levy, who is now the Coca-Cola Distinguished Professor of Marketing at Eller College of Management at the University of Arizona. He wrote the groundbreaking article "Symbols for Sale" which first appeared in the July–August 1959 edition of the Harvard Business Review.
Levy observed "consumers buy products and brands not only for so-called functional reasons but for the various ‘symbolic meanings' that their consumption provides" (p. 198). Another early Levy article "Symbolism and Life Style" was originally published in the December 1963 American Marketing Association's Toward Scientific Marketing Winter Conference Proceedings. In this article, Levy expanded upon his earlier writings by urging marketers to consider "the sum of an individual's consumption of symbolic goods and services as a 'lifestyle'" (p. 199). Levy was interested in the development of a taxonomy that would allow marketers to be able to think more systematically about how to meaningfully fulfill the needs of their individual consumers.
== Explanation ==
Hirschman and Holbrook's research in the hedonic consumption field first led Kathleen Lacher to begin to explore music as a hedonic consumption product in the late 1980s. Her goal was to try to understand the factors behind why people bought music. In 1994, she joined forces with Richard Mizerski to conduct an experiment that explored music purchase intentions. These professors proposed a theoretical model they named the "model of music consumption and purchase intention". Their basic premise was that people buy music because of the "experience that the music creates by itself or because music can enhance other experiences, whether it is an individual or shared experiences with others" (p. 367).
== Model building blocks ==
Lacher and Mizerski based their experiments on previous music research conducted in the fields of music education and psychology. They tested four "responses" to music to see which, if any, had an effect on music purchase intentions.
Emotion – or the feelings people experience when hearing music, which usually range across a scale extending from rage to love. Emotion is also considered to be one of the primary factors in music appreciation as well as a potential factor in the purchasing process.
Sensory – music often evokes a raw physical urge to move or sway to the music. Yingling describes this primal process as "an awareness to the listener's need to either move physically towards or away" from the music source.
Imaginal – This response often involves "images, memories or situations that music evokes" (p. 109). The imaginal response also tends to invoke memories of past events, or to imagine events that could take place in the future.
Analytical – This response often involves pre-conceived expectations about the music itself. Listeners tend to separate music by identifying the technical aspects of the music (e.g., tempo, dynamics, etc.); its type, through music genre (e.g., rock, folk, etc.); and intrinsic qualities, through other personalized listener factors (e.g., contemporary, religious, etc.).
Next, these first four inputs link together along a pathway towards music purchase using four additional factors:
Experiential response – or the experience that one has in becoming absorbed or involved in the music.
The overall affective behavior – This is more than just emotion, which "tends to be characterized by short duration intense reactionary episodes attributed to a specific cause". Lacher and Mizerski considered the affective domain to capture the various interactions among basic emotions, emotional patterns, moods and motivations, and the previously identified analytical and imaginal responses.
Reexperience the music – This revolves around the listener's need to want to listen to the music again. The researchers considered this to be a key factor of music purchase. In other words, the listener is motivated enough to purchase the product in order to control the type of music played, as well as where, when, and with whom the music is experienced in the future.
Purchase intention is the ultimate outcome of the hedonic music consumption model purchase. Lacher and Mizerski hypothesized that each of the first four inputs could also lead directly to any of the factors in the second cluster as well.
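The hypothesized structure can be read as a small directed graph: each of the four listening responses may feed the downstream factors, which in turn lead toward purchase intention. The following Python sketch is purely illustrative and is not drawn from Lacher and Mizerski's published specification; the node names and the ordering of the downstream chain are assumptions based on the description above.

```python
# Illustrative sketch only: node names and the ordering of the second
# cluster are assumptions inferred from the prose, not the original paper.

RESPONSES = ["emotional", "sensory", "imaginal", "analytical"]

# Each initial response is hypothesized to link directly to every factor
# in the second cluster; the second cluster then chains toward purchase.
hypothesized_paths = {
    response: ["experiential_response", "overall_affect",
               "reexperience_need", "purchase_intention"]
    for response in RESPONSES
}
hypothesized_paths.update({
    "experiential_response": ["overall_affect", "reexperience_need", "purchase_intention"],
    "overall_affect": ["reexperience_need", "purchase_intention"],
    "reexperience_need": ["purchase_intention"],
    "purchase_intention": [],  # terminal outcome of the model
})

if __name__ == "__main__":
    for source, targets in hypothesized_paths.items():
        print(f"{source} -> {', '.join(targets) if targets else '(outcome)'}")
```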
== The revised significant-paths-only hedonic music consumption model ==
Lacher and Mizerski conducted experiments with college students to determine whether the directly linked relationships they had proposed between these eight factors held in ultimately predicting rock music purchases. They chose this music genre because previous research suggested most recorded music was bought by people between the ages of 10 and 25, and because this was the preferred genre of the students in the test groups.
Lacher and Mizerski found during the course of the tests that their hypotheses were not as consistent as they had thought, and in some cases the relationships were not direct. Still, they were able to draw some general conclusions about why people buy music. They revised their original model to create a significant-paths-only hedonic music consumption model detailing the revised relationships between the factors. The emotional response was broken down to include six different music dimensions that ranged from calm to exuberant. In short, their end goal was to "foster a systematic system of why people purchase music so we can better understand this important marketplace behavior" (p. 375). One of the ways Lacher and Mizerski believed this hedonic music consumption model might prove helpful for future research was in explaining the consumption of other "hedonic" products such as books, movies, plays, paintings, and sports events (p. 377).
== See also ==
Hedonism
== References == | Wikipedia/Hedonic_music_consumption_model |
Occupational therapy (OT), also known as ergotherapy, is a healthcare profession. Ergotherapy is derived from the Greek ergon, which is associated with working, acting, and being active. Occupational therapy is based on the assumption that engaging in meaningful activities, also referred to as occupations, is a basic human need and that purposeful activity has a health-promoting and therapeutic effect. Occupational science, the study of humans as 'doers' or 'occupational beings', was developed by interdisciplinary scholars, including occupational therapists, in the 1980s.
The World Federation of Occupational Therapists (WFOT) defines occupational therapy as ‘a client-centred health profession concerned with promoting health and wellbeing through occupation. The primary goal of occupational therapy is to enable people to participate in the activities of everyday life. Occupational therapists achieve this outcome by working with people and communities to enhance their ability to engage in the occupations they want to, need to, or are expected to do, or by modifying the occupation or the environment to better support their occupational engagement'.
Many of the Member Organisations of WFOT have agreed a national definition of occupational therapy. In New Zealand occupational therapy is translated into Maori as 'whakaora ngangahau'. 'Whakaora' means ‘to restore to health' and 'ngangahau' is an adjective meaning 'active, spirited, zealous'.
Education programmes leading to entry to practice as an occupational therapist can be at diploma, baccalaureate, bachelors, masters or doctoral level. Information about entry-level education programmes, currently or previously approved by WFOT, is available on the WFOT website.
Occupational therapy is an allied health profession. In England, allied health professions (AHPs) are the third largest clinical workforce in health and care. Fifteen professions, with 352,593 registrants, are regulated by the Health and Care Professions Council in the United Kingdom.
== History ==
The earliest evidence of using occupations as a method of therapy can be found in ancient times. In c. 100 BCE, Greek physician Asclepiades treated patients with mental illness humanely, using therapeutic baths, massage, exercise, and music. Later, the Roman Celsus prescribed music, travel, conversation and exercise to his patients. However, by medieval times the use of these interventions with people with mental illness was rare, if not nonexistent.
=== Moral treatment and graded activity ===
In late 18th-century Europe, doctors such as Philippe Pinel and Johann Christian Reil reformed the mental asylum system. Their institutions used rigorous work and leisure activities. This became part of what was known as moral treatment. Although it was thriving in Europe, interest in the reform movement fluctuated in the United States throughout the 19th century.
In the late 19th and early 20th centuries, the establishment of public health measures to control infectious diseases included the building of fever hospitals. Patients with tuberculosis were recommended to have a regime of prolonged bed rest followed by a gradual increase in exercise.
This was a time in which the rising incidence of disability related to industrial accidents, tuberculosis, and mental illness brought about an increasing social awareness of the issues involved.
The Arts and Crafts movement that took place between 1860 and 1910 also impacted occupational therapy. The movement emerged against the monotony and lost autonomy of factory work in the developed world. Arts and crafts were used to promote learning through doing, provided a creative outlet, and served as a way to avoid boredom during long hospital stays.
From the late 1870s, Scottish tuberculosis doctor Robert William Philip prescribed graded activity, from complete rest through gentle exercise and eventually to activities such as digging, sawing, carpentry and window cleaning. During this period a farm colony near Edinburgh and a village settlement near Papworth in England were established, both of which aimed to employ people in appropriate long-term work prior to their return to open employment.
=== Development into a health profession ===
In the United States, the health profession of occupational therapy was conceived in the early 1910s as a reflection of the Progressive Era. Early professionals merged highly valued ideals, such as a strong work ethic and the importance of crafting with one's own hands, with scientific and medical principles.
American social worker Eleanor Clarke Slagle (1870-1942) is considered to be the "mother" of occupational therapy. Slagle proposed habit training as a primary occupational therapy model of treatment. Based on the philosophy that engagement in meaningful routines shape a person's wellbeing, habit training focused on creating structure and balance between work, rest and leisure. In 1912, she became director of a department of occupational therapy at The Henry Phipps Psychiatric Clinic in Baltimore.
=== World War I ===
In 1915, Slagle worked at the first occupational therapy training program, the Henry B. Favill School of Occupations at Hull House in Chicago.
British-Canadian teacher and architect Thomas B. Kidner was appointed vocational secretary of the Canadian Military Hospitals Commission in January 1916. He was given the duty of preparing soldiers returning from World War I to resume their former vocational duties, or of retraining soldiers no longer able to perform their previous duties. He developed a program that engaged soldiers recovering from wartime injuries or tuberculosis in occupations even while they were still bedridden. Once the soldiers were sufficiently recovered they would work in a curative workshop and eventually progress to an industrial workshop before being placed in an appropriate work setting. He used occupations (daily activities) as a medium for manual training and for helping injured individuals return to productive duties such as work.
The entry of the United States into World War I in April 1917 was a crucial event in the history of the profession. Up until this time, occupational therapy was not formalised into a profession. U.S. involvement in the war led to an escalating number of injured and disabled soldiers, which presented a daunting challenge to those in command.
The inaugural meeting of the National Society for the Promotion of Occupational Therapy (NSPOT) was held in Clifton Springs, New York, 15–17 March 1917. The meeting was attended by six founders: George Edward Barton, William Rush Dunton, Eleanor Clarke Slagle, Thomas B Kidner, Susan Cox Johnson and Isabel Gladwin Newton Barton. Susan E. Tracy and Herbert James Hall did not attend but are considered near founders of the Society.
The military enlisted the assistance of NSPOT to recruit and train over 1,200 "reconstruction aides" to help with the rehabilitation of those wounded in the war.
Dunton's 1918 article "The Principles of Occupational Therapy" appeared in the journal Public Health, and laid the foundation for the textbook he published in 1919 entitled Reconstruction Therapy.
Dunton struggled with "the cumbersomeness of the term occupational therapy", as he thought it lacked the "exactness of meaning which is possessed by scientific terms". Other titles such as "work-cure", "ergo therapy" (ergo being the Greek root for "work"), and "creative occupations" were discussed as substitutes, but ultimately, none possessed the broad meaning that the practice of occupational therapy demanded in order to capture the many forms of treatment that existed from the beginning. NSPOT formally adopted the name "occupational therapy" for the field in 1921.
=== Inter-war period ===
There was a struggle to keep people in the profession during the post-war years. Emphasis shifted from the altruistic war-time mentality to the financial, professional, and personal satisfaction that comes with being a therapist. To make the profession more appealing, practice was standardized, as was the curriculum. Entry and exit criteria were established, and the American Occupational Therapy Association advocated for steady employment, decent wages, and fair working conditions. Via these methods, occupational therapy sought and obtained medical legitimacy in the 1920s.
The emergence of occupational therapy challenged the views of mainstream scientific medicine. Instead of focusing purely on the medical model, occupational therapists argued that a complex combination of social, economic, and biological reasons cause dysfunction. Principles and techniques were borrowed from many disciplines—including but not limited to physical therapy, nursing, psychiatry, rehabilitation, self-help, orthopedics, and social work—to enrich the profession's scope.
The 1920s and 1930s were a time of establishing standards of education and laying the foundation of the profession and its organization. Eleanor Clarke Slagle proposed a 12-month course of training in 1922, and these standards were adopted in 1923. In 1928, William Rush Dunton published another textbook, Prescribing Occupational Therapy. Educational standards were expanded to a total training time of 18 months in 1930 to place the requirements for professional entry on par with those of other professions. By the early 1930s, AOTA had established educational guidelines and accreditation procedures.
Margaret Barr Fulton became the first US-qualified occupational therapist to work in the United Kingdom in 1925. She qualified at the Philadelphia School in the United States and was appointed to the Aberdeen Royal Hospital for mental patients, where she worked until her retirement in 1963. US-style OT was introduced into England by Dr Elizabeth Casson, who had visited similar establishments in America. (Casson had also earlier worked under the transformative English social reformer Octavia Hill.) In 1929 she established her own residential clinic in Bristol, Dorset House, for "women with mental disorders", and worked as its medical director. It was here in 1930 that she founded the first school of occupational therapy in the UK.
The Scottish Association of Occupational Therapists was founded in 1932. The profession was served in the rest of the UK by the Association of Occupational Therapists from 1936. (The two merged in 1974 to form what is today the Royal College of Occupational Therapists.)
=== World War II ===
With the US entry into World War II and the ensuing skyrocketing demand for occupational therapists to treat those injured in the war, the field of occupational therapy underwent dramatic growth and change. Occupational therapists needed to be skilled not only in the use of constructive activities such as crafts, but also increasingly in the use of activities of daily living.
The body that is now Occupational Therapy Australia began in 1944.
=== Post-World War II ===
Another textbook was published in the United States for occupational therapy in 1947, edited by Helen S. Willard and Clare S. Spackman. The profession continued to grow and redefine itself in the 1950s. In 1954, AOTA created the Eleanor Clarke Slagle Lectureship Award in its namesake's honor. Each year, this award recognizes a member of AOTA "who has creatively contributed to the development of the body of knowledge of the profession through research, education, or clinical practice." The profession also began to assess the potential for the use of trained assistants in the attempt to address the ongoing shortage of qualified therapists, and educational standards for occupational therapy assistants were implemented in 1960.
The 1960s and 1970s were a time of ongoing change and growth for the profession as it struggled to incorporate new knowledge and cope with the recent and rapid growth of the profession in the previous decades. New developments in the areas of neurobehavioral research led to new conceptualizations and new treatment approaches, possibly the most groundbreaking being the sensory integrative approach developed by A. Jean Ayres.
The profession has continued to grow and expand its scope and settings of practice. Occupational science, the study of occupation, was founded in 1989 by Elizabeth Yerxa at the University of Southern California as an academic discipline to provide foundational research on occupation to support and advance the practice of occupation-based occupational therapy, as well as offer a basic science to study topics surrounding "occupation".
In addition, occupational therapy practitioners' roles have expanded to include political advocacy (from a grassroots base to higher legislation); for example, in 2010 the Patient Protection and Affordable Care Act (PL 111-148) included a habilitation clause that was passed in large part due to AOTA's political efforts. Furthermore, occupational therapy practitioners have been striving personally and professionally toward concepts of occupational justice and other human rights issues that have both local and global impacts. The World Federation of Occupational Therapists' Resource Centre has many position statements on occupational therapy's roles regarding participation in human rights issues.
In 2021, U.S. News & World Report ranked occupational therapy as #19 of their list of '100 Best Jobs'.
== Practice frameworks ==
An occupational therapist works systematically with a client through a sequence of actions called an "occupational therapy process." There are several versions of this process. All practice frameworks include the components of evaluation (or assessment), intervention, and outcomes. This process provides a framework through which occupational therapists assist and contribute to promoting health and ensures structure and consistency among therapists.
=== Occupational Therapy Practice Framework (OTPF, United States) ===
The Occupational Therapy Practice Framework (OTPF) is the core framework guiding occupational therapy practice in the United States. The OTPF is divided into two sections: domain and process. The domain includes the environment and client factors, such as the individual's motivation, health status, and ability to perform occupational tasks. The domain looks at the contextual picture to help the occupational therapist understand how to diagnose and treat the patient. The process is the set of actions taken by the therapist to implement a plan and strategy to treat the patient.
=== Canadian Practice Process Framework ===
The Canadian Model of Client-Centered Enablement (CMCE) embraces occupational enablement as the core competency of occupational therapy, and the Canadian Practice Process Framework (CPPF) as the core process of occupational enablement in Canada. The CPPF has eight action points and three contextual elements; the action points include: set the stage, evaluate, agree on objectives and plan, implement the plan, monitor/modify, and evaluate the outcome. A central element of this process model is the focus on identifying both client and therapist strengths and resources prior to developing the outcomes and action plan.
=== International Classification of Functioning, Disability and Health (ICF) ===
The International Classification of Functioning, Disability and Health (ICF) is the World Health Organisation's framework to measure health and ability by illustrating how these components impact one's function. This relates very closely to the Occupational Therapy Practice Framework, as it is stated that "the profession's core beliefs are in the positive relationship between occupation and health and its view of people as occupational beings". The ICF is built into the 2nd edition of the practice framework. Activities and participation examples from the ICF overlap Areas of Occupation, Performance Skills, and Performance Patterns in the framework. The ICF also includes contextual factors (environmental and personal factors) that relate to the framework's context. In addition, body functions and structures classified within the ICF help describe the client factors described in the Occupational Therapy Practice Framework. Further exploration of the relationship between occupational therapy and the components of the ICIDH-2 (revision of the original International Classification of Impairments, Disabilities, and Handicaps (ICIDH), which later became the ICF) was conducted by McLaughlin Gray.
It is noted in the literature that occupational therapists should use specific occupational therapy vocabulary along with the ICF in order to ensure correct communication about specific concepts. The ICF might lack certain categories to describe what occupational therapists need to communicate to clients and colleagues. It also may not be possible to exactly match the connotations of the ICF categories to occupational therapy terms. The ICF is not an assessment and specialized occupational therapy terminology should not be replaced with ICF terminology. The ICF is an overarching framework for current therapy practices.
== Occupations ==
According to the American Occupational Therapy Association's (AOTA) Occupational Therapy Practice Framework: Domain and Process, 4th Edition (OTPF-4), occupations are defined as "everyday activities that people do as individuals, and families, and with communities to occupy time and bring meaning and purpose to life. Occupations include things people need to, want to and are expected to do". Occupations are central to a client's (person's, group's, or population's) health, identity, and sense of competence and have particular meaning and value to that client. Occupations include activities of daily living (ADLs), instrumental activities of daily living (IADLs), education, work, play, leisure, social participation, rest and sleep.
== Practice settings ==
According to the 2019 Salary and Workforce Survey by the American Occupational Therapy Association, occupational therapists work in a wide variety of practice settings, including: hospitals (28.6%), schools (18.8%), long-term care facilities/skilled nursing facilities (14.5%), free-standing outpatient (13.3%), home health (7.3%), academia (6.9%), early intervention (4.4%), mental health (2.2%), community (2.4%), and other (1.6%). According to the AOTA, the most common primary work setting for occupational therapists is in hospitals. Also according to the survey, 46% of occupational therapists work in urban areas, 39% work in suburban areas, and the remaining 15% work in rural areas.
The Canadian Institute for Health Information (CIHI) found that as of 2020 nearly half (46.1%) of occupational therapists worked in hospitals, 43.2% worked in community health, 3.6% worked in long-term care (LTC) and 7.1% worked in "other", including government, industry, manufacturing, and commercial settings. The CIHI also found that 68% of occupational therapists in Canada work in urban settings and only 3.7% work in rural settings.
== Areas of practice in the United States ==
=== Children and youth ===
Occupational therapists work with infants, toddlers, children, youth, and their families in a variety of settings, including schools, clinics, homes, hospitals, and the community. Evaluation assesses the child's ability to engage in daily, meaningful occupations; the underlying skills (or performance components), which may be physical, cognitive, or emotional in nature; and the fit between the client's skills and the environments and contexts in which the client functions. OT intervention involves evaluating a young person's occupational performance in areas such as feeding, playing, socializing in ways that align with their neurodiversity, daily living skills, and attending school. In planning treatment, occupational therapists work in collaboration with the children and teens themselves, parents, caregivers, and teachers in order to develop functional goals within a variety of occupations meaningful to the young client.
Early intervention addresses the daily functioning of a child between birth and three years old. OTs who practice in early intervention support a family's ability to care for their child with special needs and promote the child's function and participation in the most natural environment. Each child is required to have an Individualized Family Service Plan (IFSP) that focuses on the family's goals for the child. An OT may serve as the family's service coordinator and facilitate the team process for creating an IFSP for each eligible child.
Objectives that an occupational therapist addresses with children and youth may take a variety of forms. Examples are as follows:
Providing rehabilitation activities to children with neuromuscular disabilities such as cerebral palsy
Supporting self-regulation within neurodivergent children whose neurobiology does not align with the sensory environment or the contexts in which they function
Facilitating coping skills for a child with generalized anxiety disorder.
Consulting with teachers, psychologists, social workers, parents/caregivers, and other professionals who work with children regarding modifications, accommodations and supports in a variety of areas, such as sensory processing, motor planning, visual processing, and executive function skills.
Providing individualized treatment for sensory processing differences.
Providing splinting and caregiver education in a hospital burn unit.
Instructing caregivers in regard to mealtime intervention for autistic children who have feeding challenges.
Facilitating handwriting development through providing intervention to develop fine motor and writing readiness skills in school-aged children.
In the United States, pediatric occupational therapists work in the school setting as a "related service" for children with an Individual Education Plan (IEP). Every student who receives special education and related services in the public school system is required by law to have an IEP, which is a very individualized plan designed for each specific student (U.S. Department of Education, 2007). Related services are "developmental, corrective, and other supportive services as are required to assist a child with a disability to benefit from special education," and include a variety of professions such as speech–language pathology and audiology services, interpreting services, psychological services, and physical and occupational therapy.
As a related service, occupational therapists work with children with varying disabilities to address those skills needed to access the special education program and support academic achievement and social participation throughout the school day (AOTA, n.d.-b). In doing so, occupational therapists help children fulfill their role as students and prepare them to transition to post-secondary education, career and community integration (AOTA, n.d.-b).
Occupational therapists have specific knowledge to increase participation in school routines throughout the day, including:
Modifying the school environment to allow physical access for children with disabilities
Providing assistive technology to support student success
Helping to plan instructional activities for implementation in the classroom
Supporting the needs of students with significant challenges, such as helping to determine methods for alternate assessment of learning
Helping students develop the skills necessary to transition to post-high school employment, independent living or further education (AOTA).
Other settings, such as homes, hospitals, and the community are important environments where occupational therapists work with children and teens to promote their independence in meaningful, daily activities. Outpatient clinics offer a growing OT intervention referred to as "Sensory Integration Treatment". This therapy, provided by experienced and knowledgeable pediatric occupational therapists, was originally developed by A. Jean Ayres, an occupational therapist. Sensory integration therapy is an evidence-based practice which enables children to better process and integrate sensory input from the child's body and from the environment, thus improving his or her emotional regulation, ability to learn, behavior, and functional participation in meaningful daily activities.
Recognition of occupational therapy programs and services for children and youth is increasing worldwide. Occupational therapy for both children and adults is now recognized by the United Nations as a human right which is linked to the social determinants of health. As of 2018, there are over 500,000 occupational therapists working worldwide (many of whom work with children) and 778 academic institutions providing occupational therapy instruction.
=== Health and wellness ===
According to the American Occupational Therapy Association's (AOTA) Occupational Therapy Practice Framework, 3rd Edition, the domain of occupational therapy is described as "Achieving health, well-being, and participation in life through engagement in occupation". Occupational therapy practitioners have a distinct value in their ability to utilize daily occupations to achieve optimal health and well-being. By examining an individual's roles, routines, environment, and occupations, occupational therapists can identify the barriers in achieving overall health, well-being and participation.
Occupational therapy practitioners can intervene at primary, secondary and tertiary levels of intervention to promote health and wellness. Health and wellness can be addressed in all practice settings to prevent disease and injury and to help those with chronic diseases adopt healthy lifestyle practices. Two of the occupational therapy programs that have emerged targeting health and wellness are the Lifestyle Redesign Program and the REAL Diabetes Program.
Occupational therapy interventions for health and wellness vary in each setting:
==== School ====
Occupational therapy practitioners target school-wide advocacy for health and wellness, including bullying prevention, backpack awareness, recess promotion, school lunches, and PE inclusion. They also work extensively with students with learning disabilities, such as those on the autism spectrum.
A study conducted in Switzerland showed that a large majority of occupational therapists collaborate with schools, half of them providing direct services within mainstream school settings. The results also show that services were mainly provided to children with medical diagnoses, focusing on the school environment rather than the child's disability.
==== Outpatient ====
Occupational therapy practitioners conduct 1:1 treatment sessions and group interventions to address: leisure, health literacy and education, modified physical activity, stress/anger management, healthy meal preparation, and medication management.
==== Acute care ====
Occupational therapy practitioners in acute care assess whether a patient has the cognitive, emotional and physical ability as well as the social supports needed to live independently and care for themselves after discharge from the hospital. Occupational therapists are uniquely positioned to support patients in acute care as they focus on both clinical and social determinants of health.
Services delivered by occupational therapists in acute care include:
Direct rehabilitation interventions, individually or in group settings to address physical, emotional and cognitive skills that are required for the patient to perform self-care and other important activities.
Caregiver training to assist patients after discharge.
Recommendations for adaptive equipment for increased safety and independence with activities of daily living (e.g. aids for getting dressed, shower chairs for bathing, and medication organizers for self-administering medications).
Home safety assessments to suggest modifications for improved safety and function after discharge.
Occupational therapists use a variety of models, including the Model of Human Occupation, the Person-Environment-Occupation (PEO) model, and the Canadian Occupational Performance Model, to adopt a client-centered approach to discharge planning. Hospital spending on occupational therapy services in acute care was found to be the single most significant spending category in reducing the risk of readmission to the hospital for heart failure, pneumonia, and acute myocardial infarction.
==== Community-based ====
Occupational therapy practitioners develop and implement community wide programs to assist in prevention of diseases and encourage healthy lifestyles by: conducting education classes for prevention, facilitating gardening, offering ergonomic assessments, and offering pleasurable leisure and physical activity programs.
=== Mental health ===
Occupational therapy's foundation in mental health is deeply rooted in the moral treatment movement, which sought to replace the harsh treatment of mental disorders with the establishment of healthy routines and engagement in meaningful activities. This movement significantly influenced the development of occupational therapy, particularly through the contributions of early 20th-century practitioners and theorists like Adolph Meyer, who emphasized a holistic approach to mental health care (Christiansen & Haertl, 2014).
According to the American Occupational Therapy Association (AOTA), occupational therapy is based on the principle that "active engagement in occupation promotes, facilitates, supports, and maintains health and participation" (AOTA, 2017). Occupations refer to individuals' activities to structure their time and provide meaning. The primary goals of occupational therapy include promoting physical and mental health and well-being and establishing, restoring, maintaining, and improving function and quality of life for individuals at risk of or affected by physical or mental health disorders (AOTA, 2017).
==== Education and professional qualifications ====
Occupational therapists require a master's degree or clinical doctorate, while occupational therapy assistants need at least an associate's degree. Their education encompasses extensive mental health-related topics, including biological, physical, social, and behavioral sciences, and supervised clinical experiences culminating in full-time internships. Both must pass national examinations and meet state licensure requirements. Occupational therapists apply mental and physical health knowledge, focusing on participation and occupation, using performance-based assessments to understand the relationship between occupational participation and well-being. Their education covers various aspects of mental health, including neurophysiological changes, human development, historical and contemporary perspectives on mental health, and current diagnostic criteria. This comprehensive training prepares occupational therapy practitioners to address the complex interplay of client variables, activity demands, and environmental factors in promoting health and managing health challenges (Bazyk & Downing, 2017).
==== Occupational therapy's role in mental health practice ====
Occupational therapy practitioners play a critical role in mental health by using therapeutic activities to promote mental health and support full participation in life for individuals at risk of or experiencing psychiatric, behavioral, and substance use disorders. They work across the lifespan and in various settings, including homes, schools, workplaces, community environments, hospitals, outpatient clinics, and residential facilities (AOTA, 2017). Occupational therapists and occupational therapy assistants assume diverse roles, such as case managers, care coordinators, group facilitators, community mental health providers, consultants, program developers, and advocates. Their interventions aim to facilitate engagement in meaningful occupations, enhance role performance, and improve overall well-being. This involves analyzing, adapting, and modifying tasks and environments to support clients' goals and optimal engagement in daily activities (AOTA, 2017).
Occupational therapy practitioners utilize clinical reasoning, informed by various theoretical perspectives and evidence-based approaches, to guide evaluation and intervention. They are skilled in analyzing the complex interplay among client variables, activity demands, and the environments where participation occurs. For individuals experiencing mental health issues, the ability to participate actively in occupations may be hindered. For example, an individual diagnosed with depression or anxiety may experience interruptions in sleep, difficulty completing self-care tasks, decreased motivation to participate in leisure activities, decreased concentration for school or job-related work, and avoidance of social interactions.
Occupational therapy utilizes the public health approach to mental health (WHO, 2001) which emphasizes the promotion of mental health as well as the prevention of, and intervention for, mental illness. This model highlights the distinct value of occupational therapists in mental health promotion, prevention, and intensive interventions across the lifespan (Miles et al., 2010). Below are the three major levels of service:
==== Tier 3: intensive interventions ====
Intensive interventions are provided for individuals with identified mental, emotional, or behavioral disorders that limit daily functioning, interpersonal relationships, feelings of emotional well-being, and the ability to cope with challenges in daily life. Occupational therapy practitioners are committed to the recovery model which focuses on enabling persons with mental health challenges through a client-centered process to live a meaningful life in the community and reach their potential (Champagne & Gray, 2011).
The focus of intensive interventions (direct–individual or group, consultation) is engagement in occupation to foster recovery or "reclaiming mental health" resulting in optimal levels of community participation, daily functioning, and quality of life; functional assessment and intervention (skills training, accommodations, compensatory strategies) (Brown, 2012); identification and implementation of healthy habits, rituals, and routines to support wellness.
==== Tier 2: targeted services ====
Targeted services are designed to prevent mental health problems in persons who are at risk of developing mental health challenges, such as those who have emotional experiences (e.g., trauma, abuse), situational stressors (e.g., physical disability, bullying, social isolation, obesity) or genetic factors (e.g., family history of mental illness). Occupational therapy practitioners are committed to early identification of and intervention for mental health challenges in all settings.
The focus of targeted services (small groups, consultation, accommodations, education) is engagement in occupations to promote mental health and diminish early symptoms; small, therapeutic groups (Olson, 2011); and environmental modifications to enhance participation (e.g., creating sensory-friendly classroom, home, or work environments).
==== Tier 1: universal services ====
Universal services are provided to all individuals with or without mental health or behavioral problems, including those with disabilities and illnesses (Barry & Jenkins, 2007). Occupational therapy services focus on mental health promotion and prevention for all: encouraging participation in health-promoting occupations (e.g., enjoyable activities, healthy eating, exercise, adequate sleep); fostering self-regulation and coping strategies (e.g., mindfulness, yoga); promoting mental health literacy (e.g., knowing how to take care of one's mental health and what to do when experiencing symptoms associated with ill mental health). Occupational therapy practitioners develop universal programs and embed strategies to promote mental health and well-being in a variety of settings, from schools to the workplace.
The focus of universal services (individual, group, school-wide, employee/organizational level) is universal programs to help all individuals successfully participate in occupations that promote positive mental health (Bazyk, 2011); educational and coaching strategies with a wide range of relevant stakeholders focusing on mental health promotion and prevention; the development of coping strategies and resilience; environmental modifications and supports to foster participation in health-promoting occupations.
=== Productive aging ===
Occupational therapists work with older adults to maintain independence, participate in meaningful activities, and live fulfilling lives. Some examples of areas that occupational therapists address with older adults are driving, aging in place, low vision, and dementia or Alzheimer's disease (AD). When addressing driving, driver evaluations are administered to determine if drivers are safe behind the wheel. To enable independence of older adults at home, occupational therapists perform falls risk assessments, assess clients functioning in their homes, and recommend specific home modifications. When addressing low vision, occupational therapists modify tasks and the environment. While working with individuals with AD, occupational therapists focus on maintaining quality of life, ensuring safety, and promoting independence.
=== Geriatrics/productive aging ===
Occupational therapists address all aspects of aging, from health promotion to treatment of various disease processes. The goal of occupational therapy for older adults is to ensure that they can maintain independence and to reduce the health care costs associated with hospitalization and institutionalization. In the community, occupational therapists can assess an older adult's ability to drive and whether they are safe to do so. If an individual is found not safe to drive, the occupational therapist can assist with finding alternative transit options.
Occupational therapists also work with older adults in their homes as part of home care. In the home, an occupational therapist can work on such things as fall prevention, maximizing independence with activities of daily living, ensuring safety, and enabling the person to stay in the home for as long as they want. An occupational therapist can also recommend home modifications to ensure safety in the home. Many older adults have chronic conditions such as diabetes, arthritis, and cardiopulmonary conditions; occupational therapists can help manage these conditions by offering education on energy conservation strategies or coping strategies.
Occupational therapists work with older adults not only in their homes but also in hospitals, nursing homes and post-acute rehabilitation. In nursing homes, the role of the occupational therapist includes working with clients and caregivers on education for safe care, modifying the environment, addressing positioning needs and enhancing IADL skills, to name a few. In post-acute rehabilitation, occupational therapists work with clients to get them back home and to their prior level of function after a hospitalization for an illness or accident.
Occupational therapists also play a unique role for those with dementia. The therapist may assist with modifying the environment to ensure safety as the disease progresses, along with caregiver education to prevent burnout. Occupational therapists also play a role in palliative and hospice care. The goal at this stage of life is to ensure that the roles and occupations that the individual finds meaningful continue to be meaningful. If the person is no longer able to perform these activities, the occupational therapist can offer new ways to complete these tasks while taking into consideration the environment along with psychosocial and physical needs. Occupational therapists work with older adults not only in traditional settings but also in senior centres and assisted living facilities (ALFs).
=== Visual impairment ===
Visual impairment is one of the top 10 disabilities among American adults. Occupational therapists work with other professions, such as optometrists, ophthalmologists, and certified low vision therapists, to maximize the independence of persons with a visual impairment by using their remaining vision as efficiently as possible. AOTA's promotional goal of "Living Life to Its Fullest" speaks to learning about who people are and what they want to do, particularly when promoting participation in meaningful activities regardless of a visual impairment. Populations that may benefit from occupational therapy include older adults, persons with traumatic brain injury, adults with the potential to return to driving, and children with visual impairments. Visual impairments addressed by occupational therapists may be characterized as one of two types: low vision or a neurological visual impairment. An example of a neurological impairment is a cortical visual impairment (CVI), which is defined as "...abnormal or inefficient vision resulting from a problem or disorder affecting the parts of brain that provide sight". The following section discusses the role of occupational therapy when working with the visually impaired.
Occupational therapy for older adults with low vision includes task analysis, environmental evaluation, and modification of tasks or the environment as needed. Many occupational therapy practitioners work closely with optometrists and ophthalmologists to address visual deficits in acuity, visual field, and eye movement in people with traumatic brain injury, including providing education on compensatory strategies to complete daily tasks safely and efficiently. Adults with a stable visual impairment may benefit from occupational therapy for the provision of a driving assessment and an evaluation of the potential to return to driving. Lastly, occupational therapy practitioners enable children with visual impairments to complete self care tasks and participate in classroom activities using compensatory strategies.
=== Adult rehabilitation ===
Occupational therapists address the need for rehabilitation following an injury or impairment. When planning treatment, occupational therapists address the physical, cognitive, psychosocial, and environmental needs involved in adult populations across a variety of settings.
Occupational therapy in adult rehabilitation may take a variety of forms:
Working with adults with autism at day rehabilitation programs to promote successful relationships and community participation through instruction on social skills
Increasing the quality of life for an individual with cancer by engaging them in occupations that are meaningful, providing anxiety and stress reduction methods, and suggesting fatigue management strategies
Coaching individuals with hand amputations on how to put on and take off a myoelectrically controlled limb, as well as training for functional use of the limb
Pressure sore prevention for those with sensation loss such as in spinal cord injuries.
Using and implementing new technology such as speech-to-text software and Nintendo Wii video games
Communicating via telehealth methods as a service delivery model for clients who live in rural areas
Working with adults who have had a stroke to regain their activities of daily living
=== Assistive technology ===
Occupational therapy practitioners, or occupational therapists (OTs), are uniquely poised to educate, recommend, and promote the use of assistive technology to improve the quality of life for their clients. OTs are able to understand the unique needs of the individual with regard to occupational performance and have a strong background in activity analysis, which they use to help clients achieve their goals. Thus, the use of varied and diverse assistive technology is strongly supported within occupational therapy practice models.
=== Travel occupational therapy ===
Because of the rising need for occupational therapy practitioners in the U.S., many facilities are opting for travel occupational therapy practitioners, who are willing to travel, often out of state, to work temporarily in a facility. Assignments can range from 8 weeks to 9 months but typically last 13–26 weeks. Travel therapists work in many different settings, but the highest need for therapists is in home health and skilled nursing facility settings. There are no further educational requirements needed to be a travel occupational therapy practitioner; however, there may be different state licensure guidelines and practice acts that must be followed. According to Zip Recruiter, as of July 2019, the national average salary for a full-time travel therapist is $86,475, with a range of $62,500 to $100,000 across the United States. Most commonly (43%), travel occupational therapists enter the industry between the ages of 21 and 30.
=== Occupational justice ===
The practice area of occupational justice relates to the "benefits, privileges and harms associated with participation in occupations" and the effects related to access or denial of opportunities to participate in occupations. This theory brings attention to the relationship between occupations, health, well-being, and quality of life. Occupational justice can be approached individually and collectively; the individual path includes disease, disability, and functional restrictions, while the collective path includes public health, gender and sexual identity, social inclusion, migration, and environment. The skills of occupational therapy practitioners enable them to serve as advocates for systemic change, impacting institutions, policy, individuals, communities, and entire populations. Examples of populations that experience occupational injustice include refugees, prisoners, homeless persons, survivors of natural disasters, individuals at the end of their life, people with disabilities, elderly people living in residential homes, individuals experiencing poverty, children, immigrants, and LGBTQI+ individuals.
For example, the role of an occupational therapist working to promote occupational justice may include:
Analyzing tasks and modifying activities and environments to minimize barriers to participation in meaningful activities of daily living.
Addressing physical and mental aspects that may hinder a person's functional ability.
Providing intervention that is relevant to the client, family, and social context.
Contributing to global health by advocating for individuals with disabilities to participate in meaningful activities on a global level. Occupational therapists are involved with the World Health Organization (WHO), non-governmental organizations, community groups, and policymaking to influence the health and well-being of individuals with disabilities worldwide.
Occupational therapy practitioners' role in occupational justice is not only to align with perceptions of procedural and social justice but also to advocate for the inherent need for meaningful occupation and the way it promotes a just society, well-being, and quality of life among people relevant to their context. Clinicians are encouraged to consider occupational justice in their everyday practice to promote the intention of helping people participate in tasks that they want and need to do.
=== Occupational injustice ===
In contrast, occupational injustice relates to conditions wherein people are deprived of, excluded from, or denied opportunities that are meaningful to them. Types of occupational injustices, and examples within OT practice, include:
Occupational deprivation: The exclusion from meaningful occupations due to external factors that are beyond the person's control. For example, a person with difficulties with functional mobility may find it challenging to reintegrate into the community due to transportation barriers.
OTs can help in raising awareness and bringing communities together to reduce occupational deprivation
OTs can recommend the removal of environmental barriers to facilitate occupation, whilst designing programs that enable engagement.
Advocating by providing information to policymakers to prevent possible unintended occupational deprivation and to increase social cohesion and inclusion
Occupational apartheid: The exclusion of a person in chosen occupations due to personal characteristics such as age, gender, race, nationality, or socioeconomic status. An example can be seen in children with developmental disabilities from low socioeconomic backgrounds whose families would opt out of therapy due to financial constraints.
OTs providing interventions within a segregated population must focus on increasing occupational engagement through large-scale environmental modification and occupational exploration.
OTs can address occupational engagement through group and individual skill-building opportunities, as well as community-based experiences that explore free and local resources
Occupational marginalization: Relates to how implicit norms of behavior or societal expectations prevent a person from engaging in a chosen occupation. As an example, a child with physical impairments may only be offered table-top leisure activities instead of sports as an extracurricular activity due to the functional limitations caused by his physical impairments.
OTs can design, develop, and/or provide programs that mitigate the negative impacts of occupational marginalization and enhance optimal levels of performance and wellbeing that enable participation
Occupational imbalance: The limited participation in a meaningful occupation brought about by another role in a different occupation. This can be seen in the situation of a caregiver of a person with a disability who also has to fulfill other roles such as being a parent to other children, a student, or a worker.
OTs can advocate for fostering supportive environments for participation in occupations that promote individuals' well-being and for building healthy public policy
Occupational alienation: The imposition of an occupation that does not hold meaning for that person. In the OT profession, this manifests in the provision of rote activities that do not relate to the client's goals or interests.
OTs can develop individualized activities tailored to the interests of the individual to maximize their potential.
OTs can design, develop and promote programs that can be inclusive and provide a variety of choices that the individual can engage in.
Within occupational therapy practice, injustice may ensue in situations wherein professional dominance, standardized treatments, laws, and political conditions negatively affect clients' occupational engagement. Awareness of these injustices enables therapists to reflect on their own practice and consider ways of approaching their clients' problems while promoting occupational justice.
=== Community-based therapy ===
As occupational therapy (OT) has grown and developed, community-based practice has blossomed from an emerging area of practice into a fundamental part of occupational therapy practice (Scaffa & Reitz, 2013). Community-based practice allows OTs to work with clients and other stakeholders such as families, schools, employers, agencies, service providers, stores, day treatment and day care programs, and others who may influence the degree of success the client will have in participating. It also allows the therapist to see what is actually happening in context and to design interventions relevant to what might support the client in participating and what is impeding him or her from participating. Community-based practice crosses all of the categories within which OTs practice, from physical to cognitive and mental health to spiritual; all types of clients may be seen in community-based settings. The role of the OT also may vary, from advocate to consultant, direct care provider to program designer, adjunctive services to therapeutic leader.
=== Nature-based therapy ===
Nature-based interventions and outdoor activities may be incorporated into occupational therapy practice as they can provide therapeutic benefits in various ways. Examples include therapeutic gardening, animal-assisted therapy (AAT), and adventure therapy.
For instance, parents reported improvement in the emotional regulation and social engagement of their children with autism spectrum disorder (ASD) in a study of parental perceptions regarding the outcomes of AAT conducted with trained dogs. They also observed reductions in problematic behaviors. A source cited in the study found similar results with AAT employing horses and llamas.
Gardening in a group setting may serve as a complementary intervention in stroke rehabilitation; in addition to being mentally restful and conducive to social connection, it helps patients master skills and can remind them of experiences from their past. Royal Rehab's Productive Garden Project in Australia, managed by a horticultural therapist, allows patients and practitioners to participate in meaningful activity outside the usual healthcare settings. Thus, tending a garden helps facilitate experiential activities, perhaps attaining a better balance between clinical and real-life pursuits during rehabilitation, in lieu of mainly relying on clinical interventions.
For adults with acquired brain injury, nature-based therapy has been found to improve motor abilities, cognitive function, and general quality of life. Contributing to a theoretical understanding of such successes in nature-based approaches are: nature's positive impact on problem solving and the refocusing of attention; an innate human connection with, and positive response to, the natural world; an increased sense of well-being when in contact with nature; and the emotional, nonverbal, and cognitive aspects of human-environment interaction.
== Education ==
Worldwide, there is a range of qualifications required to practice as an occupational therapist or occupational therapy assistant. Depending on the country and expected level of practice, degree options include associate degree, bachelor's degree, entry-level master's degree, post-professional master's degree, entry-level doctorate (OTD), post-professional doctorate (DrOT or OTD), Doctor of Clinical Science in OT (CScD), Doctor of Philosophy in Occupational Therapy (PhD), and combined OTD/PhD degrees.
Both occupational therapist and occupational therapy assistant roles exist internationally. Currently in the United States, dual points of entry exist for both OT and OTA programs. For OT, that is entry-level Master's or entry-level Doctorate. For OTA, that is associate degree or bachelor's degree.
The World Federation of Occupational Therapists (WFOT) has minimum standards for the education of OTs, which were revised in 2016. All educational programs around the world need to meet these minimum standards. These standards are subsumed by and can be supplemented with academic standards set by a country's national accreditation organization. As part of the minimum standards, all programs must have a curriculum that includes practice placements (fieldwork). Examples of fieldwork settings include: acute care, inpatient hospital, outpatient hospital, skilled nursing facilities, schools, group homes, early intervention, home health, and community settings.
The profession of occupational therapy is based on a wide theoretical and evidence-based background. The OT curriculum focuses on the theoretical basis of occupation through multiple facets of science, including occupational science, anatomy, physiology, biomechanics, and neurology. This scientific foundation is integrated with knowledge from psychology, sociology, and other disciplines.
In the United States, Canada, and other countries around the world, there is a licensure requirement. In order to obtain an OT or OTA license, one must graduate from an accredited program, complete fieldwork requirements, and pass a national certification examination.
== Philosophical underpinnings ==
The philosophy of occupational therapy has evolved over the history of the profession. The philosophy articulated by the founders owed much to the ideals of romanticism, pragmatism and humanism, which are collectively considered the fundamental ideologies of the past century.
One of the most widely cited early papers about the philosophy of occupational therapy was presented by Adolf Meyer, a psychiatrist who had emigrated to the United States from Switzerland in the late 19th century and who was invited to present his views to a gathering of the new Occupational Therapy Society in 1922. At the time, Dr. Meyer was one of the leading psychiatrists in the United States and head of the new psychiatry department and Phipps Clinic at Johns Hopkins University in Baltimore, Maryland.
William Rush Dunton, a supporter of the National Society for the Promotion of Occupational Therapy, now the American Occupational Therapy Association, sought to promote the ideas that occupation is a basic human need, and that occupation is therapeutic. From his statements came some of the basic assumptions of occupational therapy, which include:
Occupation has a positive effect on health and well-being.
Occupation creates structure and organizes time.
Occupation brings meaning to life, culturally and personally.
Occupations are individual. People value different occupations.
These assumptions have been developed over time and are the basis of the values that underpin the Codes of Ethics issued by the national associations. The relevance of occupation to health and well-being remains the central theme.
In the 1950s, criticism from medicine and the multitude of disabled World War II veterans resulted in the emergence of a more reductionistic philosophy. While this approach led to developments in technical knowledge about occupational performance, clinicians became increasingly disillusioned and re-considered these beliefs. As a result, client centeredness and occupation have re-emerged as dominant themes in the profession. Over the past century, the underlying philosophy of occupational therapy has evolved from being a diversion from illness, to treatment, to enablement through meaningful occupation.
Three commonly mentioned philosophical precepts of occupational therapy are that occupation is necessary for health, that its theories are based on holism, and that its central components are people, their occupations (activities), and the environments in which those activities take place. However, there have been some dissenting voices. Mocellin, in particular, advocated abandoning the notion of health through occupation, which he proclaimed obsolete in the modern world, and questioned the appropriateness of advocating holism when practice rarely supports it. Some values formulated by the American Occupational Therapy Association have also been critiqued as therapist-centric and as failing to reflect the modern reality of multicultural practice.
In recent times occupational therapy practitioners have challenged themselves to think more broadly about the potential scope of the profession, and expanded it to include working with groups experiencing occupational injustice stemming from sources other than disability. Examples of new and emerging practice areas would include therapists working with refugees, children experiencing obesity, and people experiencing homelessness.
== Theoretical frameworks ==
A distinguishing facet of occupational therapy is that therapists often espouse the use of theoretical frameworks to frame their practice. Many have argued, however, that the use of theory complicates everyday clinical care and is not necessary to provide patient-driven care.
Note that terminology differs between scholars. An incomplete list of theoretical bases for framing a human and their occupations includes the following:
=== Generic models ===
"Generic models" is the overarching title given to a collation of compatible knowledge, research, and theories that forms conceptual practice. More generally they are defined as "those aspects which influence our perceptions, decisions and practice".
The Person Environment Occupation Performance (PEOP) model was originally published in 1991 (Charles Christiansen & M. Carolyn Baum) and describes an individual's performance based on four elements: environment, person, performance, and occupation. The model focuses on the interplay of these components and how this interaction works to inhibit or promote successful engagement in occupation.
Occupation-focused practice models
Occupational Therapy Intervention Process Model (OTIPM) (Anne Fisher and others)
Occupational Performance Process Model (OPPM)
Model of Human Occupation (MOHO) (Gary Kielhofner and others)
MOHO was first published in 1980. It explains how people select, organise and undertake occupations within their environment. The model is supported with evidence generated over thirty years and has been successfully applied throughout the world.
Canadian Model of Occupational Performance and Engagement (CMOP-E)
This framework originated in 1997 with the Canadian Association of Occupational Therapists (CAOT) as the Canadian Model of Occupational Performance (CMOP). It was expanded in 2007 by Polatajko, Townsend and Craik to add engagement. The framework holds that three components, the person, the environment, and occupation, are interrelated; engagement was added as a broader concept encompassing occupational performance. A visual model depicts the person at the center as a triangle whose three points represent cognitive, affective, and physical components, with spirituality at its core. The person triangle is surrounded by an outer ring symbolizing the environmental context and an inner ring symbolizing the occupational context.
Occupational Performances Model – Australia (OPM-A) (Chris Chapparo & Judy Ranka)
The OPM(A) was conceptualized in 1986 with its current form launched in 2006. The OPM(A) illustrates the complexity of occupational performance, the scope of occupational therapy practice, and provides a framework for occupational therapy education.
Kawa (River) Model (Michael Iwama)
Biopsychosocial models
Engel's biopsychosocial model takes into account how disease and illness can be affected by social, environmental, psychological, and bodily functions. The biopsychosocial model is unique in that it treats the client's subjective experience and the client-provider relationship as factors in wellness. The model also accounts for cultural diversity, as different countries have different societal norms and beliefs. It is a multifactorial, multidimensional model for understanding not only the cause of disease but also for supporting a person-centered approach in which the provider takes a more participatory and reflective role.
Other models which incorporate biological (body and brain), psychological (mind), and social (relational, attachment) elements influencing human health include interpersonal neurobiology (IPNB), polyvagal theory (PVT), and the dynamic-maturational model of attachment and adaptation (DMM). The latter two in particular provide detail about the source, mechanism, and function of somatic symptoms. Kasia Kozlowska describes how she uses these models to better connect with clients and to understand complex human illness, and how she includes occupational therapists as part of a team to address functional somatic symptoms. Her research indicates that children with functional neurological disorders (FND) use higher, or more challenging, DMM self-protective attachment strategies to cope with their family environments, and examines how those strategies affect functional somatic symptoms.
Pamela Meredith and colleagues have been exploring the relationship between the attachment system and psychological and neurobiological systems with implications for how occupational therapists can improve their approach and techniques. They have found correlations between attachment and adult sensory processing, distress, and pain perception. In a literature review, Meredith identified a number of ways that occupational therapists can effectively apply an attachment perspective, sometimes uniquely.
=== Frames of reference ===
Frames of reference are an additional knowledge base for the occupational therapist to develop their treatment or assessment of a patient or client group. Though there are conceptual models (listed above) that allow the therapist to conceptualise the occupational roles of the patient, it is often important to use further reference to embed clinical reasoning. Therefore, many occupational therapists will use additional frames of reference to both assess and then develop therapy goals for their patients or service users.
Biomechanical frame of reference
The biomechanical frame of reference is primarily concerned with motion during occupation. It is used with individuals who experience limitations in movement, inadequate muscle strength or loss of endurance in occupations. The frame of reference was not originally compiled by occupational therapists, and therapists should translate it to the occupational therapy perspective, to avoid the risk of movement or exercise becoming the main focus.
Rehabilitative (compensatory)
Neurofunctional (Gordon Muir Giles and Clark-Wilson)
Dynamic systems theory
Client-centered frame of reference
This frame of reference is developed from the work of Carl Rogers. It views the client as the center of all therapeutic activity, and the client's needs and goals direct the delivery of the occupational therapy process.
Cognitive-behavioural frame of reference
Ecology of human performance model
The recovery model
Sensory integration
The sensory integration framework is commonly implemented in clinical, community, and school-based occupational therapy practice. It is most frequently used with children with developmental delays and developmental disabilities such as autism spectrum disorder, sensory processing disorder, and dyspraxia. Core features of sensory integration in treatment include providing opportunities for the client to experience and integrate feedback using multiple sensory systems, providing therapeutic challenges to the client's skills, integrating the client's interests into therapy, organizing the environment to support the client's engagement, facilitating a physically safe and emotionally supportive environment, modifying activities to support the client's strengths and weaknesses, and creating sensory opportunities within the context of play to develop intrinsic motivation. While sensory integration is traditionally implemented in pediatric practice, there is emerging evidence for the benefits of sensory integration strategies for adults.
== See also ==
Busy work
Occupational apartheid
Occupational therapy and substance use disorder
Occupational therapy in the management of cerebral palsy
Occupational therapy in Greece
Occupational therapy in the United Kingdom
== References ==
== External links ==
World Federation of Occupational Therapists | Wikipedia/Occupational_therapy |
Aquatic therapy refers to treatments and exercises performed in water for relaxation, fitness, physical rehabilitation, and other therapeutic benefit. Typically a qualified aquatic therapist gives constant attendance to a person receiving treatment in a heated therapy pool. Aquatic therapy techniques include Ai Chi, Aqua Running, Bad Ragaz Ring Method, Burdenko Method, Halliwick, Watsu, and other aquatic bodywork forms. Therapeutic applications include neurological disorders, spine pain, musculoskeletal pain, postoperative orthopedic rehabilitation, pediatric disabilities, pressure ulcers, and disease conditions such as osteoporosis. Aquatic physical therapy is also beneficial for older adults for fall prevention, balance improvement, and gait training.
== Overview ==
Aquatic therapy refers to water-based treatments or exercises of therapeutic intent, in particular for relaxation, fitness, and physical rehabilitation. Treatments and exercises are performed while floating, partially submerged, or fully submerged in water. Many aquatic therapy procedures require constant attendance by a trained therapist, and are performed in a specialized temperature-controlled pool. Rehabilitation commonly focuses on improving the physical function associated with illness, injury, or disability.
Aquatic therapy encompasses a broad set of approaches and techniques, including aquatic exercise, physical therapy, aquatic bodywork, and other movement-based therapy in water (hydrokinesiotherapy). Treatment may be passive, involving a therapist or giver and a patient or receiver, or active, involving self-generated body positions, movement, or exercise. Examples include Halliwick Aquatic Therapy, Bad Ragaz Ring Method, Watsu, and Ai chi.
For orthopedic rehabilitation, aquatic therapy is considered to be synonymous with therapeutic aquatic exercise, aqua therapy, aquatic rehabilitation, water therapy, and pool therapy. Aquatic therapy can support restoration of function for many areas of orthopedics, including sports medicine, work conditioning, joint arthroplasty, and back rehabilitation programs. A strong aquatic component is especially beneficial for therapy programs where limited or non-weight bearing is desirable and where normal functioning is limited by inflammation, pain, guarding, muscle spasm, and limited range of motion (ROM). Water provides a controllable environment for reeducation of weak muscles and skill development for neurological and neuromuscular impairment, acute orthopedic or neuromuscular injury, rheumatological disease, or recovery from recent surgery.
Various properties of water contribute to therapeutic effects, including the ability to use water for resistance in place of gravity or weights; thermal stability that permits maintenance of near-constant temperature; hydrostatic pressure that supports and stabilizes, and that influences heart and lung function; buoyancy that permits flotation and reduces the effects of gravity; and turbulence and wave propagation that allow gentle manipulation and movement.
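To illustrate the buoyancy effect mentioned above, a standard physics relation (Archimedes' principle, offered here as general background rather than a result drawn from the aquatic therapy sources) shows how immersion reduces the load placed on joints:

F_b = \rho_{\text{water}} \, g \, V_{\text{submerged}}, \qquad W_{\text{apparent}} = mg - F_b

where \rho_{\text{water}} is the density of water, V_{\text{submerged}} is the immersed body volume, m is body mass, and W_{\text{apparent}} is the effective weight the limbs and spine must support. Because average body density is close to that of water, deeper immersion drives W_{\text{apparent}} toward zero, which is why partially or fully submerged exercise permits the limited or non-weight-bearing conditions described above.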
== History ==
The use of water for therapeutic purposes dates back to 2400 B.C. in the form of hydrotherapy, with records suggesting that ancient Egyptian, Assyrian, and Mohammedan cultures utilized mineral waters thought to have curative properties, a practice that continued through the 18th century.
In 1911, Dr. Charles Leroy Lowman began to use therapeutic tubs to treat cerebral palsy and spastic patients in California at Orthopedic Hospital in Los Angeles. Lowman was inspired after a visit to Spaulding School for Crippled Children in Chicago, where wooden exercise tanks were used by paralyzed patients. The invention of the Hubbard Tank, developed by Leroy Hubbard, launched the evolution of modern aquatic therapy and the development of modern techniques including the Halliwick Concept and the Bad Ragaz Ring Method (BRRM). Throughout the 1930s, research and literature on aquatic exercise, pool treatment, and spa therapy began to appear in professional journals. Dr. Charles Leroy Lowman's Technique of Underwater Gymnastics: A Study in Practical Application, published in 1937, introduced underwater exercises that were used to help restore muscle function lost by bodily deformities. The National Foundation for Infantile Paralysis began utilizing corrective swimming pools and Lowman's techniques for treatment of poliomyelitis in the 1950s.
The American Physical Therapy Association (APTA) recognized the aquatic therapy section within the APTA in 1992, following a vote by the APTA House of Delegates in Denver, Colorado, after lobbying efforts begun in 1989 and spearheaded by Judy Cirullo and Richard C. Ruoti.
== Techniques ==
Techniques for aquatic therapy include the following:
Ai Chi: Ai Chi, developed in 1993 by Jun Konno, uses diaphragmatic breathing and active progressive resistance training in water to relax and strengthen the body, based on elements of qigong and tai chi.
Aqua running: Aqua running (Deep Water Running or Aquajogging) is a form of cardiovascular conditioning, involving running or jogging in water, useful for injured athletes and those who desire a low-impact aerobic workout. Aqua running is performed in deep water using a floatation device (vest or belt) to support the head above water.
Bad Ragaz Ring Method: The Bad Ragaz Ring Method (BRRM) focuses on rehabilitation of neuromuscular function using patterns of therapist-assisted exercise performed while the patient lies horizontal in water, with support provided by rings or floats around the neck, arms, pelvis, and knees. BRRM is an aquatic version of Proprioceptive Neuromuscular Facilitation (PNF) developed by physiotherapists at Bad Ragaz, Switzerland, as a synthesis of aquatic exercises designed by a German physician in the 1930s and land-based PNF developed by American physiotherapists in the 1950s and 1960s.
Burdenko Method: The Burdenko Method, originally developed by Soviet professor of sports medicine Igor Burdenko, is an integrated land-water therapy approach that develops balance, coordination, flexibility, endurance, speed, and strength using the same methods as professional athletes. The water-based therapy uses buoyant equipment to challenge the center of buoyancy in vertical positions, exercising with movement in multiple directions, and at multiple speeds ranging from slow to fast.
Halliwick Concept: The Halliwick Concept, originally developed by fluid mechanics engineer James McMillan in the late 1940s and 1950s at the Halliwick School for Girls with Disabilities in London, focuses on biophysical principles of motor control in water, in particular developing sense of balance (equilibrioception) and core stability. The Halliwick Ten-Point-Program implements the concept in a progressive program of mental adjustment, disengagement, and development of motor control, with an emphasis on rotational control, and applies the program to teach physically disabled people balance control, swimming, and independence. Halliwick Aquatic Therapy (also known as Water Specific Therapy, WST), implements the concept in patient-specific aquatic therapy.
Watsu: Watsu is a form of aquatic bodywork, originally developed in the early 1980s by Harold Dull at Harbin Hot Springs, California, in which an aquatic therapist continuously supports and guides the person receiving treatment through a series of flowing movements and stretches that induce deep relaxation and provide therapeutic benefit. In the late 1980s and early 1990s physiotherapists began to use Watsu for a wide range of orthopedic and neurologic conditions, and to adapt the techniques for use with injury and disability.
== Applications and effectiveness ==
Applications of aquatic therapy include neurological disorders, spine pain, musculoskeletal pain, postoperative orthopedic rehabilitation, pediatric disabilities, pressure ulcers, and other disease conditions, such as osteoporosis.
A 2006 systematic review of effects of aquatic interventions in children with neuromotor impairments found "substantial lack of evidence-based research evaluating the specific effects of aquatic interventions in this population".
For musculoskeletal rehabilitation, aquatic therapy is typically used to treat acute injuries as well as subjective pain of chronic conditions, such as arthritis. Water immersion has compressive effects and reflexively regulates blood vessel tone. Muscle blood flow increases by about 225% during immersion, as increased cardiac output is distributed to skin and muscle tissue. Flotation counteracts the effects of gravitational force on joints, creating a low-impact environment for joints to perform within. The temperature changes, increase in systolic blood pressure to the extremities, and overall increase in ambulation are factors that help immersion to alleviate pain. Aquatic therapy helps with pain and stiffness, and it can also improve quality of life, tone the muscles of the body, and assist with movement in the knees, hips, and back.
Research also indicates that aquatic therapy is helpful to those who suffer from chronic back pain, particularly in the lower back. According to Visual Analogue Scale data, people who used aquatic therapy to relieve lower back pain experienced a decrease in pain and an overall better quality of life. Protocols using a combination of strengthening, flexibility, and balance exercises resulted in the greatest improvements in Childhood Health Assessment Questionnaire (CHAQ) scores, whereas aerobic exercise did not result in greater improvements in CHAQ scores compared to a comparison group performing Qigong.
Aquatic therapy can also benefit postural stability, meaning it can help to strengthen balance functions, especially in people who have neurological disorders. For people diagnosed with Parkinson's disease, aquatic exercise has been shown to be more beneficial than land-based exercise for two important outcome measures: the Berg Balance Scale and Falls Efficacy Scale scores showed significant improvement with aquatic exercise compared with land-based exercise. These results suggest that aquatic exercise can be especially helpful for Parkinson's disease patients with specific balance disorders and fear of falling. The same concept can be applied to older adults in need of fall prevention therapy: the water provides a safe environment that helps eliminate the fear of falling and aids in strengthening balance.
For osteoarthritis (OA), aquatic therapy has been shown to decrease pain, increase mobility, and decrease stiffness. According to the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and Visual Analogue Scale (VAS), pain associated with OA has significantly decreased in individuals who performed aquatic therapy. Additionally, due to the nature of aquatic therapy, the temperature and buoyancy of the water have a therapeutic effect on osteoarthritic joints. This improves joint mobility and decreases stiffness.
Aquatic therapy in warm water has been shown to have a positive effect on the aerobic capacity of people with fibromyalgia. It is still inconclusive whether land-based physical therapy is better than aquatic therapy; however, aquatic therapy has been demonstrated to be as effective as land-based therapy. There are advantageous outcomes for patients with fibromyalgia resulting from aquatic therapy, such as a decrease in the articular load placed on the body during movement.
Currently, there is no standardized aquatic therapy protocol for people post stroke; however, it is safe to conclude that aquatic therapy can be more effective than land-based therapy for improving balance and mobility. There is insufficient evidence regarding improvements in the functional independence of people post stroke.
From a cardiopulmonary standpoint, aquatic therapy is often used because its effects mirror land-based effects but at lower speeds. During immersion, blood is displaced upwards into the heart and there is an increase in pulse pressure due to increased cardiac filling. Cardiac volume increases 27-30%. Oxygen consumption increases with exercise, and heart rate increases at higher water temperatures and decreases at lower temperatures. However, immersion can worsen symptoms in cases of valvular insufficiency because of this increase in cardiac and stroke volume. The aquatic environment is also not recommended for those who experience severe or uncontrolled heart failure.
Aquatic therapy can be used for younger populations and in pediatric settings. Aquatic therapy improves the trunk structures involved in gross motor function, and the role of physical therapists is early intervention to improve children's physical, mental, and social recovery. Different interventions and activity sequences can be implemented using aquatic therapy to improve specific functions or address specific disabilities in children. Studies show that aquatic therapy improves motor symptoms and increases physical activity levels (which can be maintained over a long period of time) in children with developmental or motor disabilities. It also has a positive influence on social interactions and behaviors, and on participation, in children with neurological disorders.
Aquatic therapy is also beneficial for people with spinal cord injuries or disorders, promoting both physical and psychosocial benefits. In one study, underwater treadmill training improved lower extremity strength, balance, and gait in people with partial damage to the spinal cord; respiratory function also improved with underwater treadmill training in these individuals. Knowledge of how to apply aquatic therapy to people with spinal cord injuries or disorders is important because access to aquatic therapy is limited in this population, even though there is evidence of significant improvement in many body systems and in overall function with aquatic therapy.
Multiple sclerosis (MS) is a disabling disease that affects the central nervous system. MS targets the protective sheath (myelin) that covers the nerves; because myelin enables nerve communication, its destruction results in poor communication between the brain and the body. Those with MS experience neurological damage that affects physical, cognitive, psychological, and emotional functioning, as well as quality of life. Aquatic therapy offers benefits for this population. By utilizing the physical properties of water such as buoyancy, turbulence, hydrostatic pressure, and resistance, MS patients can work on balance and coordination, abilities that are often compromised as the disease progresses. The viscosity, or thickness, of water allows MS patients to take their time with their movements, resulting in slower, more careful movement. Aquatic therapy also offers the benefit of requiring active use of the muscles to maintain stabilization in the water. Finally, the temperature of the water can create a comfortable environment: patients with MS are often sensitive to increases in body temperature, and some authors have recommended that water temperature be below 85 °F (29 °C) for MS patients. For exercise programs, a temperature range of 83 to 85 °F (28 to 29 °C) is recommended for low-repetition and low-resistance exercises. Exercising in cooler water has a cooling effect that helps maintain a more optimal core temperature, ultimately increasing the ability to perform exercises effectively.
Exercise has been shown to decrease the number of osteoporotic fractures in postmenopausal adults. However, the risk of falling, along with the intense weight-bearing (WB) and dynamic resistance exercises recommended to improve bone mineral density (BMD), often conflicts with what many older and vulnerable individuals are able or willing to do. Research shows that the properties of water used during aquatic therapy, such as buoyancy and water resistance, have produced statistically significant improvements in the BMD of patients' lumbar spine (LS) and proximal femoral neck (FN), the most important sites for osteoporotic fractures. Due to its safety, aquatic therapy is recommended for individuals unable, unmotivated, or afraid to perform intense land exercises. Further research is needed to determine the effects of specific aquatic exercise parameters, such as intensity, frequency, and duration, on BMD in order to provide effective aquatic program recommendations. Additionally, land-based exercises involving balance and coordination can be a challenge for older adults, as many have a fear of falling, but performing these exercises in water is a promising alternative. Movement is easier in water, and research indicates that aquatic physical therapy promotes confidence, motor dexterity, range of motion, and center of mass displacement.
== Professional training and certification ==
Aquatic therapy is performed by diverse professionals with specific training and certification requirements. An aquatic therapy specialization is an add-on certification for healthcare providers, mainly including physical therapists and athletic trainers.
For medical purposes, aquatic therapy, as defined by the American Medical Association (AMA), can be performed by various legally regulated healthcare professionals whose scopes of practice permit them to offer such services and who are permitted to use AMA Current Procedural Terminology (CPT) codes. Currently, aquatic therapy certification is provided by the Aquatic Therapy and Rehab Institute (ATRI), which aims to further education for therapists and healthcare professionals working in aquatic environments. The ATRI prerequisites for certification include 15 hours of Aquatic Therapy, Rehab and/or Aquatic Therapeutic Exercise education, which can be completed hands-on or online. After completing the prerequisites, those pursuing certification can take the Aquatic Therapy & Rehab Institute's Aquatic Therapeutic Exercise Certification exam.
== References == | Wikipedia/Aquatic_therapy |
The World Federation of Music Therapy (WFMT) is an international, non-profit music therapy corporation, headquartered in North Carolina in the USA. It aims to promote global awareness of both the scientific and artistic nature of the profession and advocates for the recognition of music therapy as an evidence-based profession.
== History ==
In 2010, the WFMT celebrated its 25th anniversary. The seeds for the development of the World Federation of Music Therapy were sown during the second World Congress of Music Therapy in Buenos Aires in 1976, when a group of American, European, and South American music therapists met and started to develop a plan for unity and standards in the international arena of music therapy. Among the ten founding members were Rolando Benenzon (Argentina), Giovanna Mutti (Italy), Jacques Jost (France), Barbara Hesser (USA), Amelia Oldfield (UK), Ruth Bright (Australia), Heinrich Otto Moll (Germany), Rafael Colon (Puerto Rico), Clementina Nastari (Brazil), and Tadeusz Natanson (Poland). The Federation was formally established during the 5th World Congress of Music Therapy in Genoa, Italy, in 1985. Anecdotal reports about the beginnings of the WFMT, as well as the founders' motivations, can be found in the online journal Voices.
== WFMT leadership ==
WFMT leadership consists of 21 individuals serving in a volunteer capacity as officers, commissioners, or regional liaisons. The officers are the President, Past President, Secretary, Treasurer, and the Executive Assistant. The commissioners oversee committees related to one of eight areas: education, clinical practice, global crisis intervention, publications, research and ethics, public relations, diversity, equity and inclusion (DEI), and world congress organization. Regional liaisons reside in the eight WFMT global regions they represent and collect and disseminate information related to developments in the profession. The Assembly of Student Delegates consists of student leaders representing each of WFMT's global regions and is facilitated by the Executive Assistant.
== World Congresses of Music Therapy ==
The World Congress of Music Therapy is held every three years in a different country. Music therapy professionals and experts in related fields from around the world gather at the congress to share ideas, experiences, trends, and research outcomes. Formats include symposia, panels, and roundtables. The World Congress of Music Therapy is hosted by a WFMT organizational member in conjunction with a local host.
Previous congress locations include:
1. Paris, France (1974)
2. Buenos Aires, Argentina (1976)
3. San Juan, Puerto Rico (1981)
4. Paris, France (1983)
5. Genoa, Italy (1985) - Founding of the World Federation of Music Therapy (WFMT)
6. Rio de Janeiro, Brazil (1990)
7. Vitoria, Spain (1993)
8. Hamburg, Germany (1996)
9. Washington, D.C., USA (1999)
10. Oxford, England (2002)
11. Brisbane, Australia (2005)
12. Buenos Aires, Argentina (2008)
13. Seoul, South Korea (2011)
14. Vienna and Krems, Austria (2014)
15. Tsukuba, Japan (2017)
16. Pretoria, South Africa (2020)
17. Vancouver, Canada (2023)
== Awards ==
The federation makes a number of awards. Its highest award is the Lifetime Achievement Award, which has been awarded to Daphne Rickson in 2023, Clive Robbins in 2020, Barbara Wheeler in 2017, Ruth Bright in 2014, David Aldridge in 2011 and Rolando Benenzon in 2008.
== External links ==
WFMT (official webpage)
== References == | Wikipedia/World_Federation_of_Music_Therapy |
The temporal dynamics of music and language describes how the brain coordinates its different regions to process musical and vocal sounds. Both music and language feature rhythmic and melodic structure. Both employ a finite set of basic elements (such as tones or words) that are combined in ordered ways to create complete musical or lingual ideas.
== Neuroanatomy of language and music ==
Key areas of the brain are used in both music processing and language processing, such as Broca's area, which is devoted to language production and comprehension. Patients with lesions, or damage, in Broca's area often exhibit poor grammar, slow speech production, and poor sentence comprehension. The inferior frontal gyrus is a gyrus of the frontal lobe that is involved in timing events and reading comprehension, particularly the comprehension of verbs. Wernicke's area is located on the posterior section of the superior temporal gyrus and is important for understanding vocabulary and written language.
The primary auditory cortex is located on the temporal lobe of the cerebral cortex. This region is important in music processing and plays an important role in determining the pitch and volume of a sound. Brain damage to this region often results in a loss of the ability to hear any sounds at all. The frontal cortex has been found to be involved in processing melodies and harmonies of music. For example, when a patient is asked to tap out a beat or try to reproduce a tone, this region is very active on fMRI and PET scans. The cerebellum is the "mini" brain at the rear of the skull. Similar to the frontal cortex, brain imaging studies suggest that the cerebellum is involved in processing melodies and determining tempos. The medial prefrontal cortex along with the primary auditory cortex has also been implicated in tonality, or determining pitch and volume.
In addition to the specific regions mentioned above, many "information switch points" are active in language and music processing. These regions are believed to act as transmission routes that relay neural impulses, allowing the regions above to communicate and process information correctly. These structures include the thalamus and the basal ganglia.
Some of the above-mentioned areas have been shown to be active in both music and language processing through PET and fMRI studies. These areas include the primary motor cortex, Broca's area, the cerebellum, and the primary auditory cortices.
== Imaging the brain in action ==
The imaging techniques best suited for studying temporal dynamics provide information in real time. The methods most utilized in this research are functional magnetic resonance imaging (fMRI) and positron emission tomography (PET).
Positron emission tomography involves injecting a short-lived radioactive tracer isotope into the blood. When the radioisotope decays, it emits positrons which are detected by the machine's sensors. The isotope is chemically incorporated into a biologically active molecule, such as glucose, which powers metabolic activity. Whenever brain activity occurs in a given area, these molecules are recruited to the area. Once the concentration of the biologically active molecule, and of its radioactive "dye", rises enough, the scanner can detect it. About one second elapses from when brain activity begins to when the activity is detected by the PET device, because it takes time for the dye to reach concentrations that can be detected.
Functional magnetic resonance imaging (fMRI) is a form of the traditional MRI imaging device that allows brain activity to be observed in real time. An fMRI device works by detecting changes in neural blood flow that are associated with brain activity. fMRI devices use a strong, static magnetic field to align the nuclei of atoms within the brain. An additional magnetic field, often called the gradient field, is then applied to elevate the nuclei to a higher energy state. When the gradient field is removed, the nuclei revert to their original state and emit energy. The emitted energy is detected by the fMRI machine and used to form an image. When neurons become active, blood flow to those regions increases, and this oxygen-rich blood displaces oxygen-depleted blood. Hemoglobin molecules in the oxygen-carrying red blood cells have different magnetic properties depending on whether they are oxygenated. By focusing detection on the magnetic disturbances created by hemoglobin, the activity of neurons can be mapped in near real time. Few other techniques allow researchers to study temporal dynamics in real time.
Another important tool for analyzing temporal dynamics is magnetoencephalography (MEG). It is used to map brain activity by detecting and recording the magnetic fields produced by electrical currents generated by neural activity. The device uses a large array of superconducting quantum interference devices (SQUIDs) to detect magnetic activity. Because the magnetic fields generated by the human brain are so small, the entire device must be placed in a specially designed room built to shield it from external magnetic fields.
== Other research methods ==
Another common method for studying brain activity during language and music processing is transcranial magnetic stimulation (TMS). TMS uses induction to create weak electrical currents within the brain by applying a rapidly changing magnetic field. These currents depolarize or hyperpolarize neurons, which can produce or inhibit activity in different regions. The effect of these disruptions on function can be used to assess brain interconnections.
== Recent research ==
Many aspects of language and musical melodies are processed by the same brain areas. In 2006, Brown, Martinez, and Parsons found that listening to a melody or a sentence resulted in activation of many of the same areas, including the primary motor cortex, the supplementary motor area, Broca's area, the anterior insula, the primary auditory cortex, the thalamus, the basal ganglia, and the cerebellum.
A 2008 study by Koelsch, Sallat, and Friederici found that language impairment may also affect the ability to process music. Children with specific language impairment (SLI) were not as proficient at matching tones to one another or at keeping tempo with a simple metronome as children with no language disabilities. This highlights the fact that neurological disorders that affect language may also affect musical processing ability.
Walsh, Stewart, and Frith in 2001 investigated which regions process melodies and language by asking subjects to create a melody on a simple keyboard or to write a poem. They applied TMS to the locations where musical and lingual data are processed. The research found that TMS applied to the left frontal lobe affected the ability to write or produce language material, while TMS applied to the auditory areas and Broca's area most inhibited the subjects' ability to play musical melodies. This suggests that some differences exist between music and language creation.
== Developmental aspects ==
The basic elements of musical and lingual processing appear to be present at birth. For example, a 2011 French study that monitored fetal heartbeats found that past the age of 28 weeks, fetuses respond to changes in musical pitch and tempo. Baseline heart rates were determined by two hours of monitoring before any stimulus. Descending and ascending frequencies at different tempos were played near the womb. The study also investigated fetal responses to lingual patterns, such as playing a sound clip of different syllables, but found no response to the different lingual stimuli. Heart rates increased in response to high-pitched loud sounds compared to low-pitched soft sounds. This suggests that the basic elements of sound processing, such as discerning pitch, tempo, and loudness, are present at birth, while the processes that discern speech patterns develop after birth.
A 2010 study researched the development of lingual skills in children with speech difficulties. It found that musical stimulation improved the outcome of traditional speech therapy. Children aged 3.5 to 6 years old were separated into two groups. One group heard lyric-free music at each speech therapy session while the other group was given traditional speech therapy. The study found that both phonological capacity and the children's ability to understand speech increased faster in the group that was exposed to regular musical stimulation.
== Applications in rehabilitation ==
Recent studies have found that the effects of music on the brain are beneficial to individuals with brain disorders. Stegemöller discusses the underlying principles of music therapy as increased dopamine, neural synchrony, and a clear signal, all of which are important features of normal brain functioning. This combination of effects induces neuroplasticity in the brain, which is suggested to increase an individual's potential for learning and adaptation. Existing literature examines the effect of music therapy on those with Parkinson's disease, Huntington's disease, and dementia, among others.
=== Parkinson's disease ===
Individuals with Parkinson's disease experience gait and postural disorders caused by decreased dopamine in the brain. One of the hallmarks of this disease is a shuffling gait, in which the individual leans forward while walking and progressively increases speed, which can result in a fall or contact with a wall. Parkinson's patients also have difficulty changing direction when walking. The increase in dopamine associated with music therapy may therefore ease parkinsonian symptoms. These effects were observed in Ghai's study of various auditory feedback cues, in which patients with Parkinson's disease experienced increased walking speed and stride length as well as decreased cadence.
=== Huntington's disease ===
Huntington's disease affects a person's motor, cognitive, and psychiatric functions, which severely affects quality of life. Patients with Huntington's disease most commonly experience chorea, lack of impulse control, social withdrawal, and apathy. Schwarz et al. conducted a review of the published literature concerning the effects of music and dance therapy on patients with Huntington's disease. The fact that music is able to enhance cognitive and motor abilities for activities other than music-related ones suggests that music may be beneficial to patients with this disease. Although studies concerning the effects of music on physiologic functions are essentially inconclusive, studies find that music therapy enhances patient participation and long-term engagement in therapy, which are important in achieving the maximum potential of a patient's abilities.
=== Dementia ===
Individuals with dementia caused by Alzheimer's disease almost always become animated immediately when hearing a familiar song. Särkämö et al. discuss the effects of music, identified through a systematic literature review, in those with this disease. Experimental studies on music and dementia find that although higher-level auditory functions such as melodic contour perception and auditory analysis are diminished in these individuals, they retain basic auditory awareness involving pitch, timbre, and rhythm. Music-induced emotions and memories were also found to be preserved even in patients suffering from severe dementia. Studies demonstrate beneficial effects of music on agitation, anxiety, and social behaviors and interactions. Music also affects cognitive tasks such as episodic memory and verbal fluency. Experimental studies found that singing enhanced memory storage, verbal working memory, remote episodic memory, and executive functions in individuals in this population.
== References == | Wikipedia/Temporal_dynamics_of_music_and_language |
Acceptance and commitment therapy (ACT, typically pronounced as the word "act") is a form of psychotherapy, as well as a branch of clinical behavior analysis. It is an empirically-based psychological intervention that uses acceptance and mindfulness strategies along with commitment and behavior-change strategies to increase psychological flexibility.
This approach was first called comprehensive distancing. Steven C. Hayes developed it around 1982 to integrate features of cognitive therapy and behavior analysis, especially behavior analytic data on the often negative effects of verbal rules and how they might be ameliorated.
ACT protocols vary with the target behavior and the setting. For example, in behavioral health, a brief version of ACT is focused acceptance and commitment therapy (FACT).
The goal of ACT is not to eliminate difficult feelings but to be present with what life brings and to "move toward valued behavior". Acceptance and commitment therapy invites people to open up to unpleasant feelings, not to overreact to them, and not to avoid situations that cause them.
Its therapeutic effect aims to be a positive spiral, in which more understanding of one's emotions leads to a better understanding of the truth. In ACT, "truth" is measured through the concept of "workability", or what works to take another step toward what matters (e.g., values, meaning).
== Technique ==
=== Basics ===
ACT is developed within a pragmatic philosophy, functional contextualism. ACT is based on relational frame theory (RFT), a comprehensive theory of language and cognition that is derived from behavior analysis. Both ACT and RFT are based on B. F. Skinner's philosophy of radical behaviorism.
ACT differs from some kinds of cognitive behavioral therapy (CBT) in that, rather than try to teach people to control their thoughts, feelings, sensations, memories, and other private events, ACT teaches them to "just notice", accept, and embrace their private events, especially previously unwanted ones. ACT helps the individual get in contact with a transcendent sense of self, "self-as-context"—the one who is always there observing and experiencing and yet distinct from one's thoughts, feelings, sensations, and memories. ACT tries to help the individual clarify values and then use them as the basis for action, bringing more vitality and meaning to life in the process, while increasing psychological flexibility.
While Western psychology has typically operated under the "healthy normality" assumption, which states that humans naturally are psychologically healthy, ACT assumes that the psychological processes of a normal human mind are often destructive. The core conception of ACT is that psychological suffering is usually caused by experiential avoidance, cognitive entanglement, and resulting psychological rigidity that leads to a failure to take needed behavioral steps in accord with core values. As a simple way to summarize the model, ACT views the core of many problems to be due to the concepts represented in the acronym, FEAR:
Fusion with your thoughts
Evaluation of experience
Avoidance of your experience
Reason-giving for your behavior
And the healthy alternative is to ACT:
Accept your thoughts and emotions
Choose a valued direction
Take action
=== Core principles ===
ACT commonly employs six core principles to help clients develop psychological flexibility:
Cognitive defusion: Learning methods to reduce the tendency to reify thoughts, images, emotions, and memories.
Acceptance: Allowing unwanted private experiences (thoughts, feelings and urges) to come and go without struggling with them.
Contact with the present moment: Awareness of the here and now, experienced with openness, interest, and receptiveness. (e.g., mindfulness)
The observing self: Accessing a transcendent sense of self, a continuity of consciousness which is unchanging.
Values: Discovering what is most important to oneself.
Committed action: Setting goals according to values and carrying them out responsibly, in the service of a meaningful life.
Correlational evidence has found that absence of psychological flexibility predicts many forms of psychopathology. A 2005 meta-analysis showed that the six ACT principles, on average, account for 16–29% of the variance in psychopathology (general mental health, depression, anxiety) at baseline, depending on the measure, using correlational methods. A 2012 meta-analysis of 68 laboratory-based studies on ACT components has also provided support for the link between psychological flexibility concepts and specific components.
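For readers unfamiliar with "variance accounted for", a rough translation into correlation terms may help. This is a general statistical relationship rather than a figure reported by the meta-analysis itself, and it assumes the simple bivariate case in which the proportion of variance explained equals the squared correlation:

\text{variance explained} = r^2, \qquad \sqrt{0.16} \approx 0.40, \quad \sqrt{0.29} \approx 0.54

Under that assumption, the reported 16–29% figures correspond to correlations of roughly 0.40 to 0.54 in magnitude between measures of the ACT processes and measures of psychopathology, conventionally described as moderate associations.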
== Research ==
The website of the Association for Contextual Behavioral Science states that there were over 1,300 randomized controlled trials (RCTs) of ACT, over 550 meta-analyses/systematic reviews, and 88 mediational studies of the ACT literature as of January 2025.
Organizations that have stated that acceptance and commitment therapy is empirically supported in certain areas or as a whole according to their standards include (as of March 2022):
Society of Clinical Psychology (American Psychological Association/APA Division 12)
World Health Organization
UK National Institute for Health and Care Excellence
Australian Psychological Society
Netherlands Institute of Psychologists: Sections of Neuropsychology and Rehabilitation
Netherlands National Institute for Public Health and the Environment (RIVM)
Sweden Association of Physiotherapists
SAMHSA's National Registry of Evidence-based Programs and Practices
California Evidence-Based Clearinghouse for Child Welfare
U.S. Department of Veterans Affairs/Department of Defense
US Department of Justice - Office of Justice Programs
Washington State Institute for Public Policy
American Headache Society
=== History ===
In 2006, only about 30 randomized clinical trials and controlled time series evaluating ACT were known, in 2011 the number had doubled to more than 60 ACT randomized controlled trials, and in 2023 there were more than 1,000 randomized controlled trials of ACT worldwide. A 2008 meta-analysis concluded that the evidence was still too limited for ACT to be considered a supported treatment. A 2009 meta-analysis found that ACT was more effective than placebo and "treatment as usual" for most problems. A 2012 meta-analysis was more positive and reported that ACT outperformed CBT, except for treating depression and anxiety. A 2015 review found that ACT was better than placebo and typical treatment for anxiety disorders, depression, and addiction. Its effectiveness was similar to traditional treatments like cognitive behavioral therapy (CBT). The authors also noted that research methodologies had improved since the studies described in the 2008 meta-analysis.
In 2020, a review of meta-analyses examined 20 meta-analyses that included 133 studies and 12,477 participants. The authors concluded ACT is efficacious for all conditions examined, including anxiety, depression, substance use, pain, and transdiagnostic groups. Results also showed that ACT was generally superior to inactive controls, treatment as usual, and most active intervention conditions.
In 2020–2021, after three RCTs of ACT by the World Health Organization (WHO), WHO released an ACT-based self-help course Self-Help Plus (SH+) for "groups of up to 30 people who have lived through or are living through adversity". As of July 2023, there are six RCTs of Self-Help Plus.
In 2022, a systematic review of meta-analyses about interventions for depressive symptoms in people living with chronic pain concluded "Acceptance and commitment therapy for general chronic pain, and fluoxetine and web-based psychotherapy for fibromyalgia showed the most robust effects and can be prioritized for implementation in clinical practice".
== Professional organizations ==
The Association for Contextual Behavioral Science is committed to research and development in the area of ACT, RFT, and contextual behavioral science more generally. As of 2023 it had over 8,000 members worldwide, about half outside of the United States. It holds annual "world conference" meetings each summer, with the location alternating between North America, Europe, and South America.
The Association for Behavior Analysis International (ABAI) has a special interest group for practitioner issues, behavioral counseling, and clinical behavior analysis ABA:I. ABAI has larger special interest groups for autism and behavioral medicine. ABAI serves as the core intellectual home for behavior analysts. ABAI sponsors three conferences/year—one multi-track in the U.S., one specific to autism and one international.
The Association for Behavioral and Cognitive Therapies (ABCT) also has an interest group in behavior analysis, which focuses on clinical behavior analysis. ACT work is commonly presented at ABCT and other mainstream CBT organizations.
The British Association for Behavioural and Cognitive Psychotherapies (BABCP) has a large special interest group in ACT, with over 1,200 members.
Doctoral-level behavior analysts who are psychologists belong to the American Psychological Association's (APA) Division 25—Behavior analysis. ACT has been called a "commonly used treatment with empirical support" within the APA-recognized specialty of behavioral and cognitive psychology.
== Similarities ==
ACT, dialectical behavior therapy (DBT), functional analytic psychotherapy (FAP), mindfulness-based cognitive therapy (MBCT) and other acceptance- and mindfulness-based approaches have been grouped by Steven Hayes under the name "the third wave of cognitive behavior therapy". However, this classification has been criticized and not everyone agrees with it. For example, David Dozois and Aaron T. Beck argued that there is no "new wave" and that there are a variety of extensions of cognitive therapy; for example, Jeffrey Young's schema therapy came after Beck's cognitive therapy but Young did not name his innovations "the third wave" or "the third generation" of cognitive behavior therapy.
According to Hayes' classification, the first wave, behaviour therapy, commenced in the 1920s based on Pavlov's classical (respondent) conditioning and operant conditioning that was correlated to reinforcing consequences. The second wave emerged in the 1970s and included cognition in the form of irrational beliefs, dysfunctional attitudes or depressogenic attributions. In the late 1980s empirical limitations and philosophical misgivings of the second wave gave rise to Steven Hayes' ACT theory which modified the focus of abnormal behaviour away from the content or form towards the context in which it occurs. People's rigid ideas about themselves, their lack of focus on what is important in their life, and their struggle to change sensations, feelings or thoughts that are troublesome only serve to create greater distress.
Steven C. Hayes described the third wave in his ABCT presidential address as follows:
Grounded in an empirical, principle-focused approach, the third wave of behavioral and cognitive therapy is particularly sensitive to the context and functions of psychological phenomena, not just their form, and thus tends to emphasize contextual and experiential change strategies in addition to more direct and didactic ones. These treatments tend to seek the construction of broad, flexible and effective repertoires over an eliminative approach to narrowly defined problems, and to emphasize the relevance of the issues they examine for clinicians as well as clients. The third wave reformulates and synthesizes previous generations of behavioral and cognitive therapy and carries them forward into questions, issues, and domains previously addressed primarily by other traditions, in hopes of improving both understanding and outcomes.
ACT has also been adapted to create a non-therapy version of the same processes called acceptance and commitment training. This training process, oriented towards the development of mindfulness, acceptance, and valued skills in non-clinical settings such as businesses or schools, has also been investigated in a handful of research studies with good preliminary results.
The emphasis of ACT on ongoing present moment awareness, valued directions and committed action is similar to other psychotherapeutic approaches that, unlike ACT, are not as focused on outcome research or consciously linked to a basic behavioral science program, including approaches such as Gestalt therapy, Morita therapy, and others. Hayes and colleagues themselves stated in their book that introduced ACT that "many or even most of the techniques in ACT have been borrowed from elsewhere—from the human potential movement, Eastern traditions, behavior therapy, mystical traditions, and the like".
Wilson, Hayes & Byrd explored at length the compatibilities between ACT and the 12-step treatment of addictions and argued that, unlike most other psychotherapies, both approaches can be implicitly or explicitly integrated due to their broad commonalities. Both approaches endorse acceptance as an alternative to unproductive control. ACT emphasizes the hopelessness of relying on ineffectual strategies to control private experience; similarly, the 12-step approach emphasizes the acceptance of powerlessness over addiction. Both approaches encourage a broad life-reorientation, rather than a narrow focus on the elimination of substance use, and both place great value on the long-term project of building a meaningful life aligned with the clients' values. ACT and the 12-step approach both point to the pragmatic utility of cultivating a transcendent sense of self (higher power) within an unconventional, individualized spirituality. Finally, both openly accept the paradox that acceptance is a necessary condition for change, and both encourage a playful awareness of the limitations of human thinking.
== Criticism ==
The textbook Systems of Psychotherapy: A Transtheoretical Analysis includes various criticisms of third-wave behaviour therapy, including ACT, from the perspectives of other systems of psychotherapy, including the complaint that third-wave therapies "display an annoying tendency to gather effective methods from other traditions and label them as their own".
=== Evidence-based practice ===
In a 2012 blog post, psychologist James C. Coyne criticized the process and studies initially used by the APA to favorably evaluate ACT for the treatment of psychosis in its labeling system for evidence-based medicine. In particular, it relied on only one full randomized trial, supplemented by a pilot study and a feasibility study, despite the criteria for "strong evidence" requiring a treatment to be supported by many such trials. The main study used (Bach and Hayes, 2002) was alleged not to have clearly specified its hypothesis—that ACT reduces rehospitalization—in advance of the conducted analysis: a practice in which researchers retrospectively cherry-pick the metric showing the largest claim-supporting change post-treatment. In 2016, this and other critiques were cited by William O'Donohue and co-authors in a paper on "weak and pseudo-tests" of ACT, adding that while "no doubt there are studies of ACT that are quite good", they had examined three trials of ACT that were "weakened and thus made easier to pass", and they listed over 30 ways in which such trials were "weak or pseudo-tests". Drawing on concepts from Karl Popper's philosophy of science and Popper's critique of psychoanalysis as impossible to falsify, O'Donohue and colleagues advocated Popperian severe testing instead.
=== Excessive promotion over other therapies ===
In 2013, psychologist Jonathan W. Kanter said that Hayes and colleagues "argue that empirical clinical psychology is hampered in its efforts to alleviate human suffering and present contextual behavioral science (CBS) to address the basic philosophical, theoretical and methodological shortcomings of the field. CBS represents a host of good ideas but at times the promise of CBS is obscured by excessive promotion of Acceptance and Commitment Therapy (ACT) and Relational Frame Theory (RFT) and demotion of earlier cognitive and behavior change techniques in the absence of clear logic and empirical support." Nevertheless, Kanter concluded that "the ideas of CBS, RFT, and ACT deserve serious consideration by the mainstream community and have great potential to shape a truly progressive clinical science to guide clinical practice".
Authors of a 2013 paper comparing ACT to cognitive therapy (CT) concluded that "although preliminary research on ACT is promising, we suggest that its proponents need to be appropriately humble in their claims. In particular, like CT, ACT cannot yet make strong claims that its unique and theory-driven intervention components are active ingredients in its effects." The authors of the paper suggested that many of the assumptions of ACT and CT "are pre-analytical, and cannot be directly pitted against one another in experimental tests."
In 2012, ACT appeared to be about as effective as standard CBT, with some meta-analyses showing small differences in favor of ACT and others not. For example, a meta-analysis published by Francisco Ruiz in 2012 looked at 16 studies comparing ACT to standard CBT. ACT failed to separate from CBT on effect sizes for anxiety; however, modest benefits were found with ACT compared to CBT for depression and quality of life. The author found a separation between ACT and CBT on the "primary outcome"—a heterogeneous class of 14 separate outcome measures aggregated into the effect size analysis. However, the study is limited by the highly heterogeneous nature of the outcome variables used in the analysis, which tends to increase the number needed to treat (NNT) required to replicate the effect size reported. More limited measures, such as depression, anxiety, and quality of life, decrease the NNT, making the analysis more clinically relevant, and on these measures, ACT did not outperform CBT. A 2012 clinical trial by Forman et al. found that Beckian CBT obtained better results than ACT.
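To illustrate the relationship between effect size and NNT that the argument above relies on, a commonly used approximation (assuming roughly normally distributed outcomes with equal variance in both groups) converts a standardized mean difference, Cohen's d, into an NNT; under that assumption, larger pooled effect sizes correspond to smaller NNTs:

\[
\mathrm{NNT} \;\approx\; \frac{1}{2\,\Phi\!\left(d/\sqrt{2}\right) - 1},
\]

where \(\Phi\) is the standard normal cumulative distribution function. This yields an NNT of roughly 9 for a small effect (d = 0.2), roughly 4 for a medium effect (d = 0.5), and roughly 2 for a large effect (d = 0.8). These figures are illustrative of the general conversion only, not values reported in the meta-analyses discussed above.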
Several theoretical and empirical concerns have arisen in response to the ascendancy of ACT. One theoretical concern was that ACT's primary authors promoted the approach, together with its underlying theories of human behavior, relational frame theory (RFT) and functional contextualism (FC), as the proverbial holy grail of psychological therapies. In 2012, in the preface to the second edition of Acceptance and Commitment Therapy, the primary authors of ACT clarified that "ACT has not been created to undercut the traditions from which it came, nor does it claim to be a panacea.": x
== See also ==
Behavioral psychotherapy
Contextualism
Decisional balance sheet § Four square tool
Defence mechanism
Humanistic psychology
Positive psychology
Solution-focused brief therapy
== References ==
== External links ==
Contextualscience.org – Home for the Association for Contextual Behavioral Science, a professional organization dedicated to ACT, RFT, and functional contextualism. Also helpful for training opportunities for professionals interested in ACT and RFT. Most ACT workshops worldwide are listed here.
Gestalt therapy is a form of psychotherapy that emphasizes personal responsibility and focuses on the individual's experience in the present moment, the therapist–client relationship, the environmental and social contexts of a person's life, and the self-regulating adjustments people make as a result of their overall situation. It was developed by Fritz Perls, Laura Perls and Paul Goodman in the 1940s and 1950s, and was first described in the 1951 book Gestalt Therapy.
== Overview ==
Edwin Nevis, co-founder of the Gestalt Institute of Cleveland, founder of the Gestalt International Study Center, and faculty member at the MIT Sloan School of Management, described Gestalt therapy as "a conceptual and methodological base from which helping professionals can craft their practice". In the same volume, Joel Latner stated that Gestalt therapy is built upon two central ideas:
that the most helpful focus of psychotherapy is the experiential present moment, and
that everyone is caught in webs of relationships; thus, it is only possible to know ourselves against the background of our relationships to others.
The historical development of Gestalt therapy (described below) discloses the influences that generated these two ideas. Expanded, they support the four chief theoretical constructs (explained in the theory and practice section) that comprise Gestalt theory, and that guide the practice and application of Gestalt therapy.
Gestalt therapy was forged from various influences upon the lives of its founders during the times in which they lived, including the new physics, Eastern religion, existential phenomenology, Gestalt psychology, psychoanalysis, experimental theatre, systems theory, and field theory. Gestalt therapy rose from its beginnings in the middle of the 20th century to rapid and widespread popularity during the decade of the 1960s and early 1970s. During the 1970s and 80s Gestalt therapy training centers spread globally; but they were, for the most part, not aligned with formal academic settings. As the cognitive revolution eclipsed Gestalt theory in psychology, many came to believe Gestalt was an anachronism. Because Gestalt therapists disdained the positivism underlying what they perceived to be the concern of research, they largely ignored the need to use research to further develop Gestalt theory and Gestalt therapy practice (with a few exceptions like Les Greenberg; see the interview "Validating Gestalt"). However, the new century has seen a sea change in attitudes toward research and Gestalt practice. In March 2020, Vikram Kolmannskog became the world's first Professor of Gestalt Therapy at the Norwegian Gestalt Institute, where he has been teaching and researching since 2015.
Gestalt therapy is not identical to Gestalt psychology, but Gestalt psychology influenced the development of Gestalt therapy to a large extent.
Gestalt therapy focuses on process (what is actually happening) over content (what is being talked about). The emphasis is on what is being done, thought, and felt at the present moment (the phenomenality of both client and therapist), rather than on what was, might be, could be, or should have been. Gestalt therapy is a method of awareness practice (also called "mindfulness" in other clinical domains), by which perceiving, feeling, and acting are understood to be conducive to interpreting, explaining, and conceptualizing (the hermeneutics of experience). This distinction between direct experience versus indirect or secondary interpretation is developed in the process of therapy. The client learns to become aware of what they are doing and that triggers the ability to risk a shift or change.
The objective of Gestalt therapy is to enable the client to become more fully and creatively alive and to become free from the blocks and unfinished business that may diminish satisfaction, fulfillment, and growth, and to experiment with new ways of being. For this reason Gestalt therapy falls within the category of humanistic psychotherapies. As Gestalt therapy includes perception and the meaning-making processes by which experience forms, it can also be considered a cognitive approach. Also, because Gestalt therapy relies on the contact between therapist and client, and because a relationship can be considered to be contact over time, Gestalt therapy can be considered a relational or interpersonal approach. As it appreciates the larger picture, that is, the complex situation involving multiple influences, it can also be considered a multi-systemic approach. In addition, because the processes of Gestalt therapy are experimental, involving action, Gestalt therapy can be considered both a paradoxical and an experiential/experimental approach.
When Gestalt therapy is compared to other clinical domains, a person can find many matches, or points of similarity. "Probably the clearest case of consilience is between gestalt therapy's field perspective and the various organismic and field theories that proliferated in neuroscience, medicine, and physics in the early and mid-20th century. Within social science there is a consilience between gestalt field theory and systems or ecological psychotherapy; between the concept of dialogical relationship and object relations, attachment theory, client-centered therapy and the transference-oriented approaches; between the existential, phenomenological, and hermeneutical aspects of gestalt therapy and the constructivist aspects of cognitive therapy; and between gestalt therapy's commitment to awareness and the natural processes of healing and mindfulness, acceptance and Buddhist techniques adopted by cognitive behavioral therapy.": 174
== Contemporary theory and practice ==
The theoretical foundations of Gestalt therapy essentially rest atop four "load-bearing walls": phenomenological method, dialogical relationship, field-theoretical strategies, and experimental freedom. Although all these tenets were present in the early formulation and practice of Gestalt therapy, as described in Ego, Hunger and Aggression (Perls, 1947) and in Gestalt Therapy, Excitement and Growth in the Human Personality (Perls, Hefferline, & Goodman, 1951), the early development of Gestalt therapy theory emphasized personal experience and the experiential episodes understood as "safe emergencies" or experiments. Indeed, half of the Perls, Hefferline, and Goodman book consists of such experiments. Later, through the influence of such people as Erving and Miriam Polster, a second theoretical emphasis emerged: namely, contact between self and other, and ultimately the dialogical relationship between therapist and client. Later still, field theory emerged as an emphasis. At various times over the decades, since Gestalt therapy first emerged, one or more of these tenets and the associated constructs that go with them have captured the imagination of those who have continued developing the contemporary theory of Gestalt therapy. Since 1990 the literature focused upon Gestalt therapy has flourished, including the development of several professional Gestalt journals. Along the way, Gestalt therapy theory has also been applied in Organizational Development and coaching work. And, more recently, Gestalt methods have been combined with meditation practices into a unified program of human development called Gestalt Practice, which is used by some practitioners.
Richard G. Erskine, the originator of Integrative Psychotherapy (Developmentally Based, Relationally Focused), has written about the treatment of shame and self-righteousness in "A Gestalt therapy approach to shame and self-righteousness: theory and methods" from his book Relational Patterns, Therapeutic Presence: Concepts and Practice of Integrative Psychotherapy (2015).
=== Phenomenological method ===
The goal of a phenomenological exploration is awareness. This exploration works systematically to reduce the effects of bias through repeated observations and inquiry.
The phenomenological method comprises three steps:
Applying the rule of epoché - one sets aside one's initial biases and prejudices in order to suspend expectations and assumptions.
Applying the rule of description - one occupies oneself with describing instead of explaining.
Applying the rule of horizontalization - one treats each item of description as having equal value or significance.
The rule of epoché sets aside any initial theories with regard to what is presented in the meeting between therapist and client. The rule of description implies immediate and specific observations, abstaining from interpretations or explanations, especially those formed from the application of a clinical theory superimposed over the circumstances of experience. The rule of horizontalization avoids any hierarchical assignment of importance such that the data of experience become prioritized and categorized as they are received. A Gestalt therapist using the phenomenological method might say something like, "I notice a slight tension at the corners of your mouth when I say that, and I see you shifting on the couch and folding your arms across your chest ... and now I see you rolling your eyes back". Of course, the therapist may make a clinically relevant evaluation, but when applying the phenomenological method, temporarily suspends the need to express it.
=== Dialogical relationship ===
To create the conditions under which a dialogic moment might occur, the therapist attends to their own presence, creates the space for the client to enter in and become present as well (called inclusion), and commits themself to the dialogic process, surrendering to what takes place, as opposed to attempting to control it. With presence, the therapist judiciously "shows up" as a whole and authentic person, instead of assuming a role, false self or persona. To be judicious, the therapist takes into account the specific strengths, weaknesses and values of the client. The only good client is a live client, so driving a client away by injudicious exposure of intolerable [to this client] experience of the therapist is obviously counter-productive. For example, for an atheistic therapist to tell a devout client that religion is myth would not be useful, especially in the early stages of the relationship. To practice inclusion is to accept however the client chooses to be present, whether in a defensive and obnoxious stance or a superficially cooperative one. To practice inclusion is to support the presence of the client, including their resistance, not as a gimmick but in full realization that this is how the client is actually present and is the best this client can do at this time. Finally, the Gestalt therapist is committed to the process, trusts in that process, and does not attempt to save themself from it.
=== Field-theoretical strategies ===
Field theory is a concept borrowed from physics in which people and events are no longer considered discrete units but as parts of something larger, which are influenced by everything including the past, and observation itself. "The field" can be considered in two ways. There are ontological dimensions and there are phenomenological dimensions to one's field. The ontological dimensions are all those physical and environmental contexts in which we live and move. They might be the office in which one works, the house in which one lives, the city and country of which one is a citizen, and so forth. The ontological field is the objective reality that supports our physical existence. The phenomenological dimensions are all mental and physical dynamics that contribute to a person's sense of self, one's subjective experience—not merely elements of the environmental context. These might be the memory of an uncle's inappropriate affection, one's color blindness, one's sense of the social matrix in operation at the office in which one works, and so forth. The way that Gestalt therapists choose to work with field dynamics makes what they do strategic. Gestalt therapy focuses upon character structure; according to Gestalt theory, the character structure is dynamic rather than fixed in nature. To become aware of one's character structure, the focus is upon the phenomenological dimensions in the context of the ontological dimensions.
=== Experimental freedom ===
Gestalt therapy is distinct because it moves toward action, away from mere talk therapy, and for this reason is considered an experiential approach. Through experiments, the therapist supports the client's direct experience of something new, instead of merely talking about the possibility of something new. Indeed, the entire therapeutic relationship may be considered experimental, because at one level it is a corrective, relational experience for many clients, and it is a "safe emergency" that is free to turn out however it will. An experiment can also be conceived as a teaching method that creates an experience in which a client might learn something as part of their growth.
Examples might include:
Rather than talking about the client's critical parent, a Gestalt therapist might ask the client to imagine the parent is present, or that the therapist is the parent, and talk to that parent directly
If a client is struggling with how to be assertive, a Gestalt therapist could either:
have the client say some assertive things to the therapist or members of a therapy group
give a talk about how one should never be assertive
A Gestalt therapist might notice something about the non-verbal behavior or tone of voice of the client; then the therapist might have the client exaggerate the non-verbal behavior and pay attention to that experience
A Gestalt therapist might work with the breathing or posture of the client, and direct awareness to changes that might happen when the client talks about different content.
With all these experiments the Gestalt therapist is working with process rather than content, the how rather than the what.
== Noteworthy issues ==
=== Self ===
In field theory, self is a phenomenological concept, existing in comparison with other. Without the other there is no self, and how one experiences the other is inseparable from how one experiences oneself. The continuity of selfhood (functioning personality) is something that is achieved in relationship, rather than something inherently "inside" the person. This can have its advantages and disadvantages. At one end of the spectrum, someone may not have enough self-continuity to be able to make meaningful relationships, or to have a workable sense of who they are. In the middle, their personality is a loose set of ways of being that work for them, including commitments to relationships, work, culture and outlook, always open to change where they need to adapt to new circumstances or just want to try something new. At the other end, their personality is a rigid defensive denial of the new and spontaneous. They act in stereotyped ways, and either induce other people to act in particular and fixed ways towards them, or they redefine their actions to fit with fixed stereotypes.
In Gestalt therapy, the process is not about the self of the client being helped or healed by the fixed self of the therapist; rather it is an exploration of the co-creation of self and other in the here-and-now of the therapy. There is no assumption that the client will act in all other circumstances as they do in the therapy situation. However, the areas that cause problems will be either the lack of self-definition leading to chaotic or psychotic behaviour, or the rigid self-definition in some area of functioning that denies spontaneity and makes dealing with particular situations impossible. Both of these conditions show up very clearly in the therapy, and can be worked with in the relationship with the therapist.
The experience of the therapist is also very much part of the therapy. Since we co-create our self-other experiences, the way a therapist experiences being with a client is significant information about how the client experiences themselves. The proviso here is that a therapist is not operating from their own fixed responses. This is why Gestalt therapists are required to undertake significant therapy of their own during training.
From the perspective of this theory of self, neurosis can be seen as fixed predictability—a fixed Gestalt—and the process of therapy can be seen as facilitating the client to become unpredictable: more responsive to what is in the client's present environment, rather than responding in a stuck way to past introjects or other learning. If the therapist has expectations of how the client should end up, this defeats the aim of therapy.
=== Change ===
In what has now become a classic of Gestalt therapy literature, Arnold R. Beisser described Gestalt's paradoxical theory of change. The paradox is that the more one attempts to be who one is not, the more one remains the same. Conversely, when people identify with their current experience, the conditions of wholeness and growth support change. Put another way, change comes about as a result of "full acceptance of what is, rather than a striving to be different."
=== Empty chair technique ===
Empty chair technique or chairwork is typically used in Gestalt therapy when a patient might have deep-rooted emotional problems from someone or something in their life, such as relationships with themselves, with aspects of their personality, their concepts, ideas, feelings, etc., or other people in their lives. The purpose of this technique is to get the patient to think about their emotions and attitudes. Common things the patient addresses in the empty chair are another person, aspects of their own personality, a certain feeling, etc., as if that thing were in that chair. They may also move between chairs and act out two or more sides of a discussion, typically involving the patient and persons significant to them. It uses a passive approach to opening up the patient's emotions and pent-up feelings so they can let go of what they have been holding back. A form of role-playing, the technique focuses on exploration of self and is used by therapists to help patients self-adjust. Gestalt techniques were originally a form of psychotherapy, but are now often used in counseling, for instance, by encouraging clients to act out their feelings, helping them prepare for a new job. The purpose of the technique is for the patient to become more in touch with their feelings and have an emotional conversation that clears up any long-held feelings or reactions to the person or object in the chair.
== Historical development ==
Fritz Perls was a German-Jewish psychoanalyst who fled Europe with his wife Laura Perls to South Africa in order to escape Nazi oppression in 1933. After World War II, the couple emigrated to New York City, which had become a center of intellectual, artistic and political experimentation by the late 1940s and early 1950s.
=== Early influences ===
Perls grew up on the bohemian scene in Berlin, participated in Expressionism and Dadaism, and experienced the turning of the artistic avant-garde toward the revolutionary left. Deployment to the front line, the trauma of war, anti-Semitism, intimidation, escape, and the Holocaust are further key sources of biographical influence.
Perls served in the German Army during World War I, and was wounded in the conflict. After the war he was educated as a medical doctor. He became an assistant to Kurt Goldstein, who worked with brain-injured soldiers. Perls went through a psychoanalysis with Wilhelm Reich and became a psychiatrist. Perls assisted Goldstein at Frankfurt University where he met his wife Lore (Laura) Posner, who had earned a doctorate in Gestalt psychology. They fled Nazi Germany in 1933 and settled in South Africa. Perls established a psychoanalytic training institute and joined the South African armed forces, serving as a military psychiatrist. During these years in South Africa, Perls was influenced by Jan Smuts and his ideas about "holism".
In 1936 Fritz Perls attended a psychoanalysts' conference in Marienbad, Czechoslovakia, where he presented a paper on oral resistances, mainly based on Laura Perls's notes on breastfeeding their children. Although the paper had been turned down, Perls did present it in 1936; according to him, it met with "deep disapproval." Perls wrote his first book, Ego, Hunger and Aggression (1942, 1947), in South Africa, based in part on the rejected paper. It was later re-published in the United States. Laura Perls wrote two chapters of this book, but she was not given adequate recognition for her work.
=== Seminal book ===
Perls's seminal work was Gestalt Therapy: Excitement and Growth in the Human Personality, published in 1951, co-authored by Fritz Perls, Paul Goodman, and Ralph Hefferline (a university psychology professor and sometimes-patient of Fritz Perls). Most of Part II of the book was written by Paul Goodman from Perls's notes, and it contains the core of Gestalt theory. This part was supposed to appear first, but the publishers decided that Part I, written by Hefferline, fit into the nascent self-help ethos of the day, and they made it an introduction to the theory. Isadore From, a leading early theorist of Gestalt therapy, taught Goodman's Part II for an entire year to his students, going through it phrase by phrase.
=== First instances of Gestalt therapy ===
Fritz and Laura founded the first Gestalt Institute in 1952, running it out of their Manhattan apartment. Isadore From became a patient, first of Fritz, and then of Laura. Fritz soon made From a trainer, and also gave him some patients. From lived in New York until his death, at age seventy-five, in 1993. He was known worldwide for his philosophical and intellectually rigorous take on Gestalt therapy. Acknowledged as a supremely gifted clinician, he was indisposed to writing, so what remains of his work is merely transcripts of interviews.
Of great importance to understanding the development of Gestalt therapy is the early training which took place in experiential groups in the Perls's apartment, led by both Fritz and Laura before Fritz left for the West Coast, and after by Laura alone. These "trainings" were unstructured, with little didactic input from the leaders, although many of the principles were discussed in the monthly meetings of the institute, as well as at local bars after the sessions. Many notable Gestalt therapists emerged from these crucibles in addition to Isadore From, e.g., Richard Kitzler, Dan Bloom, Bud Feder, Carl Hodges, and Ruth Ronall. In these sessions, both Fritz and Laura used some variation of the "hot seat" method, in which the leader essentially works with one individual in front of an audience with little or no attention to group dynamics. In reaction to this omission emerged a more interactive approach in which Gestalt-therapy principles were blended with group dynamics; in 1980, the book Beyond the Hot Seat, edited by Feder and Ronall, was published, with contributions from members of both the New York and Cleveland Institutes, as well as others.
Fritz left Laura and New York in 1960, briefly lived in Miami, and ended up in California. Jim Simkin was a psychotherapist who became a client of Perls in New York and then a co-therapist with Perls in Los Angeles. Simkin was responsible for Perls's going to California, where Perls began a psychotherapy practice. Ultimately, the life of a peripatetic trainer and workshop leader was better suited to Fritz's personality—starting in 1963, Simkin and Perls co-led some of the early Gestalt workshops and training groups at Esalen Institute in Big Sur, California, where Perls eventually settled and built a home. Jim Simkin then purchased property next to Esalen and started his own training center, which he ran until his death in 1984. Simkin refined his precise version of Gestalt therapy, training psychologists, psychiatrists, counselors and social workers within a very rigorous, residential training model.
In 1997 Chuck Kanner founded Kanner Academy and Community Schools in Sarasota, Florida. These residential schools for struggling children and families were the first example of a consensus-based therapeutic community grounded in Gestalt principles.
=== Schism ===
In the 1960s, Perls became infamous among the professional elite for his public workshops at Esalen Institute. Isadore From referred to some of Fritz's brief workshops as "hit-and-run" therapy, because of Perls's alleged emphasis on showmanship with little or no follow-through—but Perls never considered these workshops to be complete therapy; rather, he felt he was giving demonstrations of key points for a largely professional audience. Unfortunately, some films and tapes of his work were all that most graduate students were exposed to, along with the misperception that these represented the entirety of Perls's work.
When Fritz Perls left New York for California, there began to be a split with those who saw Gestalt therapy as a therapeutic approach similar to psychoanalysis. This view was represented by Isadore From, who practiced and taught mainly in New York, as well as by the members of the Cleveland Institute, which was co-founded by From. An entirely different approach was taken, primarily in California, by those who saw Gestalt therapy not just as a therapeutic modality, but as a way of life. The East Coast, New York–Cleveland axis was often appalled by the notion of Gestalt therapy leaving the consulting room and becoming a way of life on the West Coast in the 1960s (see the "Gestalt prayer").
An alternative view of this split saw Perls in his last years continuing to develop his atheoretical and phenomenological methodology, while others, inspired by From, were inclined toward a theoretical rigor which verged on replacing experience with ideas.
The split continues between what has been called "East Coast Gestalt" and "West Coast Gestalt," at least from an Amerocentric point of view. While the communitarian form of Gestalt continues to flourish, Gestalt therapy was largely replaced in the United States by cognitive behavioral therapy, and many Gestalt therapists in the U.S. drifted toward organizational management and coaching. At the same time, contemporary Gestalt Practice (to a large extent based upon Gestalt therapy theory and practice) was developed by Dick Price, the co-founder of Esalen Institute. Price was one of Perls's students at Esalen.
=== Post-Perls ===
In 1969, Fritz Perls left the United States to start a Gestalt community at Lake Cowichan on Vancouver Island, Canada. He died almost one year later, on 14 March 1970, in Chicago. One member of the Gestalt community was Barry Stevens. Her book about that phase of her life, Don't Push the River, became very popular. She developed her own form of Gestalt therapy body work, which is essentially a concentration on the awareness of body processes.
==== Polsters ====
Erving and Miriam Polster started a training center in La Jolla, California, and published a book, Gestalt Therapy Integrated, in the 1970s.
They were influential in advancing the idea of contact boundary phenomena, which is a key part of Gestalt theory. The standard contact boundary resistances were confluence, introjection, projection, and retroflection, but the Polsters added "deflection" as a way of avoiding contact. Boundary phenomena can have good or bad effects, depending on the situation. For example, it is normal for a baby and mother to merge (confluence), but not for a therapist and client. If the therapist and client become too merged, there can be no progress because there is no boundary for them to connect with; the client will not be able to learn anything new because the therapist simply becomes a part of them.
== Influences upon Gestalt therapy ==
=== Some examples ===
There were a variety of psychological and philosophical influences upon the development of Gestalt therapy, not the least of which were the social forces at the time and place of its inception. Gestalt therapy is an approach that is holistic (including mind, body, and culture). It is present-centered and related to existential therapy in its emphasis on personal responsibility for action, and on the value of "I–thou" relationship in therapy. In fact, Perls considered calling Gestalt therapy existential-phenomenological therapy. "The I and thou in the Here and Now" was a semi-humorous shorthand mantra for Gestalt therapy, referring to the substantial influence of the work of Martin Buber—in particular his notion of the I–Thou relationship—on Perls and Gestalt. Buber's work emphasized immediacy, and required that any method or theory answer to the therapeutic situation, seen as a meeting between two people. Any process or method that turns the patient into an object (the I–It) must be strictly secondary to the intimate, and spontaneous, I–Thou relation. This concept became important in much of Gestalt theory and practice.
Both Fritz and Laura Perls were students and admirers of the neuropsychiatrist Kurt Goldstein. Gestalt therapy was based in part on Goldstein's concept called Organismic theory. Goldstein viewed a person in terms of a holistic and unified experience; he encouraged a "big picture" perspective, taking into account the whole context of a person's experience. The word Gestalt means whole, or configuration. Laura Perls, in an interview, denotes the Organismic theory as the base of Gestalt therapy.
There were additional influences on Gestalt therapy from existentialism, particularly the emphasis upon personal choice and responsibility.
The late 1950s–1960s movement toward personal growth and the human potential movement in California fed into, and was itself influenced by, Gestalt therapy. In this process Gestalt therapy somehow became a coherent Gestalt, which is the Gestalt psychology term for a perceptual unit that holds together and forms a unified whole.
=== Psychoanalysis ===
Fritz Perls trained as a neurologist at major medical institutions and as a Freudian psychoanalyst in Berlin and Vienna, the most important international centers of the discipline in his day. He worked as a training analyst for several years with the official recognition of the International Psychoanalytic Association (IPA), and must be considered an experienced clinician.
Gestalt therapy was influenced by psychoanalysis: it was part of a continuum moving from the early work of Freud, to the later Freudian ego analysis, to Wilhelm Reich and his character analysis and notion of character armor, with attention to nonverbal behavior; this was consonant with Laura Perls's background in dance and movement therapy. To this was added the insights of academic Gestalt psychology, including perception, Gestalt formation, and the tendency of organisms to complete an incomplete Gestalt and to form "wholes" in experience.
Central to Fritz and Laura Perls's modifications of psychoanalysis was the concept of dental or oral aggression. In Ego, Hunger and Aggression (1947), Fritz Perls's first book, to which Laura Perls contributed (ultimately without recognition), Perls suggested that when the infant develops teeth, he or she has the capacity to chew, to break food apart, and, by analogy, to experience, taste, accept, reject, or assimilate. This was opposed to Freud's notion that only introjection takes place in early experience. Thus Perls made assimilation, as opposed to introjection, a focal theme in his work, and the prime means by which growth occurs in therapy.
In contrast to the psychoanalytic stance, in which the "patient" introjects the (presumably more healthy) interpretations of the analyst, in Gestalt therapy the client must "taste" his or her own experience and either accept or reject it—but not introject or "swallow whole." Hence, the emphasis is on avoiding interpretation, and instead encouraging discovery. This is the key point in the divergence of Gestalt therapy from traditional psychoanalysis: growth occurs through gradual assimilation of experience in a natural way, rather than by accepting the interpretations of the analyst; thus, the therapist should not interpret, but lead the client to discover for him- or herself.
The Gestalt therapist contrives experiments that lead the client to greater awareness and fuller experience of his or her possibilities. Experiments can be focused on undoing projections or retroflections. The therapist can work to help the client with closure of unfinished Gestalts ("unfinished business" such as unexpressed emotions towards somebody in the client's life). There are many kinds of experiments that might be therapeutic, but the essence of the work is that it is experiential rather than interpretive, and in this way, Gestalt therapy distinguishes itself from psychoanalysis.
=== Principal influences: a summary list ===
Otto Rank's invention of "here-and-now" therapy and Rank's post-Freudian book Art and Artist (1932), both of which strongly influenced Paul Goodman
Wilhelm Reich's psychoanalytic developments, especially his early character analysis, and the later concept of character armor and its focus on the body
Jacob Moreno's psychodrama, principally the development of enactment techniques for the resolution of psychological conflicts
Kurt Goldstein's holistic theory of the organism, based on Gestalt theory
Martin Buber's philosophy of dialogue and relationship ("I–Thou")
Kurt Lewin's field theory as applied to the social sciences and group dynamics
European phenomenology of Franz Brentano, Edmund Husserl, Martin Heidegger, and Maurice Merleau-Ponty
The existentialism of Kierkegaard over that of Sartre, rejecting nihilism
The Jungian psychology of Carl Jung, particularly the polarities concept
Some elements from Zen Buddhism
Differentiation between thing and concept from Zen and the works of Alfred Korzybski
The American pragmatism of William James, George Herbert Mead, and John Dewey
== Therapies influenced by Gestalt therapy ==
Psychotherapies influenced by Gestalt therapy include:
Acceptance and commitment therapy
Emotion-focused therapy
== Current status ==
Gestalt therapy reached a zenith in the United States in the late 1970s and early 1980s. Since then, it has influenced other fields like organizational development, coaching, and teaching. Many of its contributions have become assimilated into other schools of therapy. In recent years, it has seen a resurgence in popularity as an active, psychodynamic form of therapy which has also incorporated some elements of recent developments in attachment theory. There are, for example, four Gestalt training institutes in the New York City metropolitan area alone, in addition to dozens of others worldwide.
Gestalt therapy continues to thrive as a widespread form of psychotherapy, especially throughout Europe, where there are many practitioners and training institutions. Dan Rosenblatt led Gestalt therapy training groups and public workshops at the Tokyo Psychotherapy Academy for seven years. Stewart Kiritz continued in this role from 1997 to 2006.
=== Training of Gestalt therapists ===
==== Pedagogical approach ====
Many Gestalt therapy training organizations exist worldwide. Ansel Woldt asserted that Gestalt teaching and training are built upon the belief that people are, by nature, health-seeking. Thus, such commitments as authenticity, optimism, holism, health, and trust become important principles to consider when engaged in the activity of teaching and learning—especially Gestalt therapy theory and practice.
=== Associations ===
The Association for the Advancement of Gestalt Therapy holds a biennial international conference in various locations; the first was in New Orleans, in 1995. Subsequent conferences have been held in San Francisco, Cleveland, New York, Dallas, St. Pete Beach, Vancouver (British Columbia), Manchester (England), and Philadelphia. In addition, it holds regional conferences, and its regional network has spawned regional conferences in Amsterdam, the Southwest and the Southeast of the United States, England, and Australia. Its Research Task Force generates and nurtures active research projects and an international conference on research.
The European Association for Gestalt Therapy was founded in 1985 to gather individual European Gestalt therapists, training institutes, and national associations from more than twenty European nations.
Gestalt Australia and New Zealand was formally established at the first Gestalt Therapy Conference held in Perth in September 1998.
== See also ==
Role reversal
Topdog vs. underdog
Violet Oaklander
== References ==
== Further reading ==
Perls, F. (1969) Ego, Hunger, and Aggression: The Beginning of Gestalt Therapy. New York, NY: Random House. (originally published in 1942, and re-published in 1947)
Perls, F. (1969) Gestalt Therapy Verbatim. Moab, UT: Real People Press.
Perls, F., Hefferline, R., & Goodman, P. (1951) Gestalt Therapy: Excitement and growth in the human personality. New York, NY: Julian.
Perls, F. (1973) The Gestalt Approach & Eye Witness to Therapy. New York, NY: Bantam Books.
Brownell, P. (2012) Gestalt Therapy for Addictive and Self-Medicating Behaviors. New York, NY: Springer Publishing.
Levine, T.B-Y. (2011) Gestalt Therapy: Advances in Theory and Practice. New York, NY: Routledge.
Bloom, D. & Brownell, P. (eds)(2011) Continuity and Change: Gestalt Therapy Now. Newcastle, UK: Cambridge Scholars Publishing
Mann, D. (2010) Gestalt Therapy: 100 Key Points & Techniques. London & New York: Routledge.
Truscott, Derek (2010). "Gestalt therapy". Becoming an effective psychotherapist: adopting a theory of psychotherapy that's right for you and your client. Washington, D.C.: American Psychological Association. pp. 83–96. ISBN 978-1-4338-0473-1. OCLC 612728376.
Staemmler, F-M. (2009) Aggression, Time, and Understanding: Contributions to the Evolution of Gestalt Therapy. New York, NY, US: Routledge/Taylor & Francis Group; GestaltPress Book
Woldt, A. & Toman, S. (2005) Gestalt Therapy: History, Theory and Practice. Thousand Oaks, CA: Sage Publications.
Bretz, HJ; Heekerens, HP; Schmitz, B (1994). "Eine Metaanalyse der Wirksamkeit von Gestalttherapie" [A meta-analysis of the effectiveness of gestalt therapy]. Zeitschrift für klinische Psychologie, Psychopathologie und Psychotherapie (in German). 42 (3): 241–60. ISSN 0723-6557. PMID 7941644.
Toman, Sarah; Woldt, Ansel, eds. (2005). Gestalt Therapy History, Theory, and Practice (pbk. ed.). Gestalt Press. ISBN 0761927913.
== External links ==
Existential therapy is a form of psychotherapy based on the model of human nature and experience developed by the existential tradition of European philosophy. It focuses on the psychological experience revolving around universal human truths of existence such as death, freedom, isolation and the search for the meaning of life. Existential therapists largely reject the medical model of mental illness that views mental health symptoms as the result of biological causes. Rather, symptoms such as anxiety, alienation and depression arise because of attempts to deny or avoid the givens of existence, often resulting in an existential crisis. For example, existential therapists highlight the fact that since we have the freedom to choose, there will always be uncertainty - and therefore, there will always be a level of existential anxiety present in our lives.
Existential therapists also draw heavily from the methods of phenomenology, a philosophical approach developed by Edmund Husserl and later expanded on by Martin Heidegger that concentrates on the study of consciousness and the objects of direct experience. When working with clients, existential therapists focus on the client's lived experience of their subjective reality. While other types of therapies like Freudian psychoanalysis are aimed at analyzing and interpreting the client's experience, existential therapists are encouraged to "bracket", or set aside, their preconceived notions and biases in order to identify the core aspects of the client's experience. In existential therapy, clients gain self-awareness into their own existence, confront existential concerns, and are encouraged to use their freedom to choose a path towards a more authentic and meaningful life.
== Background ==
The philosophers who are especially pertinent to the development of existential psychotherapy are those whose works were directly aimed at making sense of human existence. For example, the fields of phenomenology and existential philosophy are especially and directly responsible for the generation of existential therapy.
The starting point of existential philosophy (see Warnock 1970; Macquarrie 1972; Mace 1999; van Deurzen and Kenward 2005) can be traced back to the nineteenth century and the works of Søren Kierkegaard and Friedrich Nietzsche. Their works conflicted with the predominant ideologies of their time and committed to the exploration of reality as it can be experienced in a passionate and personal manner.
=== Søren Kierkegaard (1813–1855) ===
Søren Kierkegaard (1813–1855) protested vehemently against popular misunderstanding and abuse of Christian dogma and the so-called 'objectivity' of science (Kierkegaard, 1841, 1844). He thought that both were ways of avoiding the anxiety inherent in human existence. He had great contempt for the way life was lived by those around him and believed truth could only be discovered subjectively by the individual in action. He felt people lacked the courage to take a leap of faith and live with passion and commitment from the inward depth of existence. This involved a constant struggle between the finite and infinite aspects of our nature as part of the difficult task of creating a self and finding meaning. As Kierkegaard lived by his word, he was lonely and much ridiculed during his lifetime.
=== Friedrich Nietzsche (1844–1900) ===
Friedrich Nietzsche (1844–1900) took this philosophy of life a step further. His starting point was the notion that God is dead, that is, the idea of God was outmoded and limiting (Nietzsche, 1861, 1874, 1886). Furthermore, the Enlightenment—with the newfound faith in reason and rationality—had killed or replaced God with a new Truth that was perhaps more pernicious than the one it replaced. Science and rationality were the new "God," but instead took the form of a deity that was colder and less comforting than before. Nietzsche exerted a significant impact upon the development of psychology in general, but he specifically influenced an approach which emphasized an understanding of life from a personal perspective. In exploring the various needs of the individual in relation to the ontological conditions of being, Nietzsche asserted that all things are in a state of "ontological privation," in which they long to become more than they are. This state of deprivation has major implications for the physiological and psychological needs of the individual.
=== Edmund Husserl (1859–1938) ===
While Kierkegaard and Nietzsche drew attention to the human issues that needed to be addressed, Edmund Husserl's phenomenology (Husserl, 1960, 1962; Moran, 2000) provided the method to address them rigorously. He contended that natural sciences assume the separateness of subject and object and that this kind of dualism can only lead to error. He proposed a whole new mode of investigation and understanding of the world and our experience of it. He said that prejudice has to be put aside or 'bracketed,' for us to meet the world afresh and discover what is absolutely fundamental, and only directly available to us through intuition. If people want to grasp the essence of things, instead of explaining and analyzing them, they have to learn to describe and understand them.
=== Max Scheler (1874–1928) ===
Max Scheler (1874–1928) developed philosophical anthropology from a material ethic of values ("Materielle Wertethik") that opposed Immanuel Kant's ethics of duty ("Pflichtethik"). He described a hierarchical system of values that further developed phenomenological philosophy. Scheler described the human psyche as being composed of four layers analogous to the layers of organic nature. However, in his description, the human psyche is opposed by the principle of the human spirit. Scheler's philosophy forms the basis of Viktor Frankl's logotherapy and existential analysis.
=== Martin Heidegger (1889–1976) ===
Martin Heidegger (1889–1976) applied the phenomenological method to understanding the meaning of being (Heidegger, 1962, 1968). He argued that poetry and deep philosophical thinking could bring greater insight into what it means to be in the world than what can be achieved through scientific knowledge. He explored human beings in the world in a manner that revolutionized classical ideas about the self and psychology. He recognized the importance of time, space, death, and human relatedness. He also favored hermeneutics, an old philosophical method of investigation, which is the art of interpretation.
Unlike interpretation as practiced in psychoanalysis (which consists of referring a person's experience to a pre-established theoretical framework), this kind of interpretation seeks to understand how the person himself/herself subjectively experiences something.
=== Jean-Paul Sartre (1905–1980) ===
Jean-Paul Sartre (1905–1980) contributed many other strands of existential exploration, particularly regarding emotions, imagination, and the person's insertion into a social and political world.
The philosophy of existence, on the contrary, is carried by a wide-ranging literature, which includes many authors, such as Karl Jaspers (1951, 1963), Paul Tillich, Martin Buber, and Hans-Georg Gadamer within the Germanic tradition and Albert Camus, Gabriel Marcel, Paul Ricoeur, Maurice Merleau-Ponty, Simone de Beauvoir and Emmanuel Lévinas within the French tradition (see for instance Spiegelberg, 1972, Kearney, 1986 or van Deurzen-Smith, 1997).
== Existentialism and therapy ==
Throughout the 20th century, psychotherapists began incorporating both the themes of existentialism and the phenomenological methods of describing experience into their therapeutic practice:
Otto Rank (1884–1939) was an Austrian psychoanalyst who broke with Freud in the mid-1920s. He did not consider himself an existential therapist, but his ideas on "will" as a factor in human motivation, as well as on the fear of death and the fear of living authentically, would lay the foundation for later writers.
Throughout the 1930s and 1940s, the Swiss psychiatrists Ludwig Binswanger and Medard Boss each developed a form of psychotherapy known as Daseinsanalysis. Daseinsanalysis merges Freudian psychoanalysis with the existential phenomenology of Martin Heidegger, particularly his concept of Dasein ("being"). It focuses on understanding the client's experience of Being-in-the-world, rather than diagnosing symptoms. Much of Binswanger's work was translated into English during the 1940s and 1950s and, together with the immigration to the USA of Paul Tillich (1886–1965) (Tillich, 1952) and others, this had a considerable effect on the popularization of existential ideas as a basis for therapy (Valle and King, 1978; Cooper, 2003).
Rollo May (1909–1994) played an important role in this, and is considered by many to be the "father" of existential therapy. His writings in the 1950s and 1960s (1969, 1983; May et al., 1958) became the foundation of the existential-humanistic therapy that would flourish in America (Bugental, 1981; May and Yalom, 1985; Yalom, 1980). May also worked closely with Carl Rogers and Abraham Maslow, founders of the humanistic psychology movement. As such, existential therapy in America became closely associated with humanistic psychology and the principles of Rogers' person-centered therapy, particularly regarding how the therapist and client should interact.
Viktor Frankl (1905–1997) was possibly the individual most responsible for spreading existential psychology throughout the world. His 1959 book Man's Search for Meaning created a unique branch of existential therapy known as Logotherapy. Logotherapy is premised on the idea that the primary motivation of individuals is to find meaning in life. He was invited to speak by over 200 universities worldwide and made more than 80 trips to North America alone, the first at the invitation of Gordon Allport at Harvard University.
In 1980, Irvin D. Yalom published 'Existential Psychotherapy', the first book to provide a comprehensive overview of existential therapy. In it, Yalom identifies four existential concerns, or "givens", of life that underlie human experience: death, freedom, isolation, and meaninglessness. Yalom argues that the role of the therapist in existential therapy is not to provide solutions or answers, but to guide the client in exploring and confronting these challenges. Unlike other forms of therapy, Yalom does not prescribe specific techniques; rather, he suggests that existential therapy should be a personalized collaboration between therapist and client, tailored to each client's unique existential concerns.
== Development ==
=== Development in Europe ===
The European School of existential analysis is dominated by two forms of therapy: Logotherapy, and Daseinsanalysis. Logotherapy was developed by psychiatrist Viktor E. Frankl. Frankl was heavily influenced by existential philosophy, as well as his own experience in the Nazi concentration camps of World War II. The three main components to Logotherapy are Freedom of Will, which is the ability to change one's life to the degree that such change is possible, Will to Meaning, which places meaning at the center of well-being, and Meaning in Life, which asserts the objectivity of meaning. The primary techniques of Logotherapy involve helping the clients to identify and remove any barriers to the pursuit of meaning in their own lives, to determine what is personally meaningful, and to then help patients effectively pursue related goals.
Daseinsanalysis is a psychotherapeutic system developed upon the ideas of Martin Heidegger, as well as the psychoanalytic theories of Sigmund Freud, that seeks to help the individual find autonomy and meaning in their "being in the world" (a rough translation of "Dasein").
=== Development in Britain ===
Britain became a fertile ground for further development of the existential approach when R. D. Laing and David Cooper, often associated with the anti-psychiatry movement, took Sartre's existential ideas as the basis for their work (Laing, 1960, 1961; Cooper, 1967; Laing and Cooper, 1964). Without developing a concrete method of therapy, they critically reconsidered the notion of mental illness and its treatment. In the late 1960s, they established an experimental therapeutic community at Kingsley Hall in the East End of London, where people could come to live through their 'madness' without the usual medical treatment. They also founded the Philadelphia Association, an organization providing alternative living arrangements, therapy, and therapeutic training from this perspective. The Philadelphia Association is still in existence today and is now committed to the exploration of the works of philosophers such as Ludwig Wittgenstein, Jacques Derrida, Levinas, and Michel Foucault as well as the work of the French psychoanalyst Jacques Lacan. It also runs some small therapeutic households along these lines. The Arbours Association is another group that grew out of the Kingsley Hall experiment. Founded by Joseph Berke and Morton Schatzman in the 1970s, it now runs a training program in psychotherapy, a crisis center, and several therapeutic communities. The existential input in the Arbours has gradually been replaced with a more neo-Kleinian emphasis.
The impetus for further development of the existential approach in Britain has primarily come from the development of some existentially based courses in academic institutions. This started with the programs created by Emmy van Deurzen, initially at Antioch University in London and subsequently at Regent's College, London and since then at the New School of Psychotherapy and Counseling, also located in London. The latter is a purely existentially based training institute, which offers postgraduate degrees validated by the University of Sheffield and Middlesex University. In the past few decades, the existential approach has spread rapidly and has become a welcome alternative to established methods. There are now many other, mostly academic, centers in Britain that provide training in existential counseling and psychotherapy and a rapidly growing interest in the approach in the voluntary sector and the National Health Service.
British publications dealing with existential therapy include contributions by these authors: Jenner (de Koning and Jenner, 1982), Heaton (1988, 1994), Cohn (1994, 1997), Spinelli (1997), Cooper (1989, 2002), Eleftheriadou (1994), Lemma-Wright (1994), Du Plock (1997), Strasser and Strasser (1997), van Deurzen (1997, 1998, 2002), van Deurzen and Arnold-Baker (2005), and van Deurzen and Kenward (2005). Other writers such as Lomas (1981) and Smail (1978, 1987, 1993) have published work relevant to the approach, although not explicitly 'existential' in orientation. The journal of the British Society for Phenomenology regularly publishes work on existential and phenomenological psychotherapy. The Society for Existential Analysis was founded in 1988, initiated by van Deurzen. This society brings together psychotherapists, psychologists, psychiatrists, counselors, and philosophers working from an existential perspective. It offers regular fora for discussion and debate as well as significant annual conferences. It publishes the Journal of the Society for Existential Analysis twice a year. It is also a member of the International Federation of Daseinsanalysis, which stimulates international exchange between representatives of the approach from around the world. An International Society for Existential Therapists also exists. It was founded in 2006 by Emmy van Deurzen and Digby Tantam and is called the International Community of Existential Counsellors and Therapists (ICECAP).
=== Development in Canada ===
New developments in existential therapy in the last 20 years include existential positive psychology and meaning therapy. Different from the traditional approach to existential therapy, these new developments incorporate research findings from contemporary positive psychology.
Existential positive psychology can reframe the traditional issues of existential concerns into positive psychology questions that can be subjected to empirical research. It also focuses on personal growth and transformation as much as on existential anxiety. Later, existential positive psychology was incorporated into the second wave of positive psychology.
Meaning therapy (MT) is an extension of Frankl's logotherapy and America's humanistic-existential tradition; it is also pluralistic because it incorporates elements of cognitive-behavioral therapy, narrative therapy, and positive psychotherapy, with meaning as its central organizing construct. MT not only appeals to people's natural desires for happiness and significance but also makes skillful use of their innate capacity for meaning-seeking and meaning-making. MT strikes a balance between a person-centered approach and a psycho-educational approach. At the outset of therapy, clients are informed of the use of meaning-centered interventions appropriate for their predicaments because of the empirical evidence for the vital role of meaning in healing and thriving. MT is a comprehensive and pluralistic way to address all aspects of clients' existential concerns. Clients can benefit from MT in two ways: (1) a custom-tailored treatment to solve their presenting problems, and (2) a collaborative journey to create a preferred better future.
== Themes ==
Existential themes are issues that are considered central to human existence or the human experience. As such, they are broad and are often difficult to describe. The most common framework is given by Irvin Yalom, who divides existential issues into four core themes:
Death. The fact that humans are aware of their own inevitable death - and uncertain about what happens after - can cause considerable distress. Some people go to great lengths to deny or avoid thinking about death, while others may succumb to despair or hopelessness. Existential therapy aims to help clients embrace life more fully by recognizing the finite nature of their existence.
Freedom and responsibility. Alongside humanistic psychology, existential therapy places a great deal of importance on the concept of free will. Humans have the freedom to make choices and create their own meaning in life. To be free means to be responsible for one’s life and to be the author of one’s own destiny. However, this freedom carries with it a tremendous amount of responsibility. Experiencing existential guilt over the choices we have made and the possibilities of what could have been is a natural part of life. If overwhelmed by the responsibility to choose, people may defer to external authorities and societal norms to make decisions for them. Existential therapy encourages clients to recognize their freedom and live more authentic lives by taking responsibility for their actions, choices, and direction in life.
Isolation. Yalom describes existential isolation as "An unbridgeable gap between oneself and any other being. It refers, too, to an isolation even more fundamental—a separation between the individual and the world." This represents the tension between the fact that humans are inherently social creatures who long to be connected to others, and the fact that no one can fully share or take on another's subjective experience, pain, or death. People may attempt to avoid these feelings of isolation by becoming dependent on others or conforming to social norms. Existential therapy helps clients confront their aloneness by building authentic relationships, where connections with others are based on mutual respect rather than an avoidance of isolation.
Meaning and meaninglessness. Humans seek out meaning - the question of "What is the meaning of life?" is one of the most central questions to philosophy. Existential philosophers argue that there is no "inherent," fundamental meaning in the world. Rather, the task falls to the individual to discover or create their own meaning in life. This realization can be incredibly distressing to people. In existential therapy, clients learn to create personal meaning through their actions, values, and relationships rather than relying on external, fixed sources of meaning. Viktor Frankl's logotherapy specifically focuses on how a lack of meaning in life can lead to severe mental distress.
In addition to these four themes, a central concept that underlies existential therapy is existential anxiety, more colloquially known as an existential crisis. When people become aware of these existential themes, they feel anxiety, guilt, or other forms of distress. Unlike Freudian or medical models of mental illness that strictly view anxiety as a "symptom" that must be "cured", existential therapy stresses that existential anxiety is an inevitable part of life. Under this framework, anxiety is not seen as something to be eliminated but as a natural and even necessary part of life, and that working through these anxieties can provide a powerful source of personal growth and transformation.
To this end, one of the central goals of existential therapy is to help clients live with authenticity. To live with authenticity means to fully acknowledge life's existential givens without avoiding them. It also means to live a life that is in line with one's personal values. Many people live inauthentic lives by succumbing to peer pressure or conformity. Existential therapists guide clients to reflect on their values, choices, and patterns of behavior to identify areas where they may be living inauthentically. For example, a client may be stuck in an unfulfilling job that does not align with their values or stuck in an unsatisfying relationship due to their fear of loneliness. By helping clients confront existential anxiety, clarify their values, and find areas where they can exercise their freedom, therapists support them in creating a life that feels true, meaningful, and fully their own.
== Psychological dysfunction ==
Because there is no single existential view, opinions about psychological dysfunction vary.
For theorists aligned with Yalom, psychological dysfunction results from the individual's refusal or inability to deal with the normal existential anxiety that comes from confronting life's "givens": death, freedom, isolation, and meaninglessness.
For other theorists, there is no such thing as psychological dysfunction or mental illness. Every way of being is merely an expression of how one chooses to live one's life. However, one may feel unable to come to terms with the anxiety of being alone in the world. If so, an existential psychotherapist can assist one in accepting these feelings rather than trying to change them as if there is something wrong. Everyone has the freedom to choose how they are going to exist in life; however, this freedom may go unpracticed. It may appear easier and safer not to make decisions that one will be responsible for. Many people will remain unaware of alternative choices in life for various societal reasons.
== Personal element ==
Existential counsellors stress the importance of the examined life, and of preparatory work on oneself, in paving the way for effective counselling. Thus in counselling adolescents the counsellor can optimally model an autonomous life based on the making of realistic decisions, but one which also acknowledges the role of failure as well as success in everyday life, and the ongoing and inescapable presence of anxiety.
The strictly Sartrean perspective of existential psychotherapy is generally unconcerned with the client's past, but instead, the emphasis is on the choices to be made in the present and future. The counselor and the client may reflect upon how the client has answered life's questions in the past, but attention ultimately shifts to searching for a new and increased awareness in the present and enabling a new freedom and responsibility to act. The patient can then accept that they are not special and that their existence is simply coincidental, or without destiny or fate. By accepting this, they can overcome their anxieties and instead view life as moments in which they are fundamentally free.
== Four worlds ==
Existential thinkers seek to avoid restrictive models that categorize or label people. Instead, they look for the universals that can be observed cross-culturally. There is no existential personality theory which divides humanity into types or reduces people to part components. Instead, there is a description of the different levels of experience and existence with which people are inevitably confronted. The way in which a person is in the world at a particular stage can be charted on this general map of human existence (Binswanger, 1963; Yalom, 1980; van Deurzen, 1984).
In line with the view taken by van Deurzen, one can distinguish four basic dimensions of human existence: the physical, the social, the psychological, and the spiritual; some authors recognize only the first three.
On each of these dimensions, people encounter the world and shape their attitude out of their particular take on their experience. Their orientation towards the world defines their reality. The four dimensions are interwoven and provide a complex four-dimensional force field for their existence. Individuals are stretched between a positive pole of what they aspire to on each dimension and a negative pole of what they fear. Binswanger proposed the first three of these dimensions from Heidegger's description of Umwelt and Mitwelt and his further notion of Eigenwelt. The fourth dimension was added by van Deurzen from Heidegger's description of a spiritual world (Überwelt) in Heidegger's later work.
=== Physical dimension ===
On the physical dimension (Umwelt), individuals relate to their environment and the givens of the natural world around them. This includes their attitude to the body they have, to the concrete surroundings they find themselves in, to the climate and the weather, to objects and material possessions, to the bodies of other people, their own bodily needs, to health and illness and their mortality. The struggle on this dimension is, in general terms, between the search for domination over the elements and natural law (as in technology, or in sports) and the need to accept the limitations of natural boundaries (as in ecology or old age). While people generally aim for security on this dimension (through health and wealth), much of life brings a gradual disillusionment and realization that such security can only be temporary. Recognizing limitations can deliver a significant release of tension.
=== Social dimension ===
On the social dimension (Mitwelt), individuals relate to others as they interact with the public world around them. This dimension includes their response to the culture they live in, as well as to the class and race they belong to (and also those they do not belong to). Attitudes here range from love to hate and from cooperation to competition. The dynamic contradictions can be understood in terms of acceptance versus rejection or belonging versus isolation. Some people prefer to withdraw from the world of others as much as possible. Others blindly chase public acceptance by going along with the rules and fashions of the moment. Still others try to rise above these by becoming trendsetters themselves. By acquiring fame or other forms of power, individuals can attain dominance over others temporarily. Sooner or later, however, everyone is confronted with both failure and aloneness.
=== Psychological dimension ===
On the psychological dimension (Eigenwelt), individuals relate to themselves and in this way create a personal world. This dimension includes views about their character, their past experience and their future possibilities. Contradictions here are often experienced regarding personal strengths and weaknesses. People search for a sense of identity, a feeling of being substantial and having a self.
But inevitably many events will confront them with evidence to the contrary and plunge them into a state of confusion or disintegration. Activity and passivity are an important polarity here. Self-affirmation and resolution go with the former and surrender and yielding with the latter. Facing the final dissolution of self that comes with personal loss and the facing of death might bring anxiety and confusion to many who have not yet given up their sense of self-importance.
=== Spiritual dimension ===
On the spiritual dimension (Überwelt) (van Deurzen, 1984), individuals relate to the unknown and thus create a sense of an ideal world, an ideology, and a philosophical outlook. It is there that they find meaning by putting all the pieces of the puzzle together for themselves. For some people, this is done by adhering to a religion or other prescriptive worldview; for others, it is about discovering or attributing meaning in a more secular or personal way. The contradictions that must be faced on this dimension are often related to the tension between purpose and absurdity, hope and despair. People create their values in search of something that matters enough to live or die for, something that may even have ultimate and universal validity. Usually, the aim is the conquest of a soul or something that will substantially surpass mortality (as in having contributed something valuable to humankind). Facing the void and the possibility of nothingness are the indispensable counterparts of this quest for the eternal.
== Research support ==
Relatively little research has been conducted on existential therapy. Much of the existing research focuses on people receiving therapy who also have medical concerns such as cancer. Despite this, some studies have indicated positive efficacy for existential therapies with certain populations. Overall, however, more research is needed before definitive scientific claims can be made.
In the debate on evidence-based research in counselling, existential counsellors tend to stress the dangers of over-simplification, and the importance of qualitative as well as quantitative measurements of outcome. While not necessarily expecting an easy resolution of the specific/non-specific factors in therapy debate, an existential counsellor will nonetheless favor evidence-based practice.
== See also ==
== References ==
== Further reading ==
== External links ==
Existential positive psychology
Searching for meaning
Couples therapy (also known as couples' counseling, marriage counseling, or marriage therapy) is a form of psychotherapy that seeks to improve intimate relationships, resolve interpersonal conflicts and repair broken bonds of love.
== History ==
Marriage counseling began in Germany in the 1920s as part of the eugenics movement. The first institutes for marriage counselling in the United States started in the 1930s, partly in response to Germany's medically directed, racial purification marriage counselling centers. It was promoted by prominent American eugenicists such as Paul Popenoe, who directed the American Institute of Family Relations until 1976, Robert Latou Dickinson, and by birth control advocates such as Abraham and Hannah Stone who wrote A Marriage Manual in 1935 and were involved with Planned Parenthood, as well as Lena Levine and Margaret Sanger.
It wasn't until the 1950s that therapists began treating psychological problems within the context of the family. Relationship counseling as a distinct, professional service is thus a recent phenomenon. Until the late 20th century, relationship counseling was informally provided by close friends, family members, or local religious leaders. Psychiatrists, psychologists, counselors and social workers historically dealt primarily with individual psychological problems within a medical and psychoanalytic framework. In many cultures, the institution of the family or group elders fulfill the role of relationship counseling; marriage mentoring mirrors these cultures.
With increasing modernization or westernization and the continuous shift towards isolated nuclear families, the trend is towards trained and accredited relationship counselors or couple therapists. Sometimes volunteers are trained by either the government or social service institutions to help those who need family or marital counseling. Many communities and government departments have their own teams of trained voluntary and professional relationship counselors. Similar services are operated by many universities and colleges, sometimes staffed by volunteers from among the student peer group. Some large companies maintain full-time professional counseling staff to facilitate smoother interactions between corporate employees and to minimize the negative effects that personal difficulties might have on work performance.
There is an increasing trend toward professional certification and government registration of these services, in part due to duty of care issues and the consequences of the counsellor or therapist's services being provided in a fiduciary relationship.
== Basic principles ==
It is estimated that nearly 50% of all married couples divorce, and about one in five marriages experience distress at some time. These numbers vary between countries and over time; for example, in Germany only 35.74% of marriages ended in divorce, half of those involving children under 18. Challenges with affection, communication, disagreements, and fears of divorce are some of the most common reasons couples seek help. Couples who are dissatisfied with their relationship may seek help from a variety of sources including online courses, self-help books, retreats, workshops, and couples' counseling.
Before a relationship between individuals can be understood, it is important to recognize and acknowledge that each person, including the counselor, has a unique personality, perception, opinions, set of values, and history. Individuals in the relationship may adhere to different and unexamined value systems. Institutional and societal variables (like social or religious groups, and other collective factors) which shape a person's nature and behavior, are considered in counseling and therapy. A tenet of relationship counseling is that it is intrinsically beneficial for all the participants to interact with each other, and with society at large with optimal amounts of conflict. A couple's conflict resolution skills seem to predict divorce rates.
Most relationships will experience strain at some point, resulting in a failure to function optimally and causing self-reinforcing, maladaptive patterns to form, sometimes called "negative interaction cycles." There are many possible reasons for this, including insecure attachment, ego, arrogance, jealousy, anger, greed, poor communication/understanding or problem-solving, ill health, and third parties.
Changes in circumstances, like financials, physical health, and the influence of other family members can significantly influence the conduct, responses, and actions of the individuals in a relationship.
Often, it is an interaction between two or more factors, and frequently, it is not just one of the people involved who exhibits such traits. Relationship influences are reciprocal: each person involved contributes to causing and managing problems.
A viable solution to the problem, and setting these relationships back on track, may be to reorient the individuals' perceptions and emotions - how one views or responds to situations, and how one feels about them. Perceptions of, and emotional responses to, a relationship are contained within an often unexamined mental map of the relationship, also called a 'love map' by John Gottman. These can be explored collaboratively and discussed openly. The core values they comprise can then be understood and respected, or changed when no longer appropriate. This implies that each person takes equal responsibility for awareness of the problem as it arises, awareness of their own contribution to the problem, and making fundamental changes in thought and feeling.
The next step is to consciously implement structural changes to the interpersonal relationship and to evaluate the effectiveness of those changes over time.
Indeed, "typically for those close personal relations, there is a certain degree in 'interdependence' - which means that the partners are alternately mutually dependent on each other. As a special aspect of such relations, something contradictory is put outside: the need for intimacy and for autonomy."
"The common counterbalancing satisfaction these both needs, intimacy, and autonomy, leads to alternate satisfaction in the relationship and stability. But it depends on the specific developing duties of each partner in every life phase and maturity".
== Basic practices ==
Two methods of couples therapy focus primarily on the process of communicating. The most commonly used method is active listening, used by the late Carl Rogers and Virginia Satir. More recently, a method called "Cinematic Immersion" has been developed by Warren Farrell. Each helps couples learn a method of communicating designed to create a safe environment for each partner to express and hear feelings.
When the Munich Marital Study found that active listening was not used in the long run, Warren Farrell observed that active listening did a better job of creating a safe environment for the criticizer to criticize than for the listener to hear the criticism. The listener, often feeling overwhelmed by the criticism, tended to avoid future encounters. He hypothesized that people are biologically programmed to respond defensively to criticism, and that the listener therefore needs in-depth training, with mental exercises and methods, to interpret as love what might otherwise feel abusive. His method is Cinematic Immersion.
After 30 years of research into marriage, John Gottman found that healthy couples almost never listen and echo each other's feelings naturally. Whether miserable or radiantly happy, couples said what they thought about an issue, and "they got angry or sad, but their partner's response was never anything like what we were training people to do in the listener/speaker exercise, not even close."
Such exchanges occurred in less than 5 percent of marital interactions and they predicted nothing about whether the marriage would do well or badly. What's more, Gottman noted, data from a 1984 Munich study demonstrated that the (reflective listening) exercise itself didn't help couples to improve their marriages. To teach such interactions, whether as a daily tool for couples or as a therapeutic exercise in empathy, was a clinical dead end.
Emotionally focused therapy for couples (EFT-C) is based on attachment theory and uses emotion as the target and agent of change. Emotions bring the past alive in rigid interaction patterns, which create and reflect absorbing emotional states. As one of its founders, Sue Johnson says,
Forget about learning how to argue better, analyzing your early childhood, making grand romantic gestures, or experimenting with new sexual positions. Instead, recognize and admit that you are emotionally attached to and dependent on your partner in much the same way that a child is on a parent for nurturing, soothing, and protection. (From Hold Me Tight by Sue Johnson, p. 6.)
== Research on therapy ==
The most researched approach to couples therapy is behavioral couples therapy, a well-established treatment for marital discord. This form of therapy has evolved into integrative behavioral couples therapy, which appears to be effective for 69% of couples in treatment, compared with 50–60% for the traditional model. At a five-year follow-up of the 134 couples who had participated in either integrative or traditional behavioral couples therapy, 14% of relationships remained unchanged, 38% had deteriorated, and 48% had improved or recovered completely.
A 2018 Cochrane review states that the available evidence does not suggest that couples therapy is more or less effective than individual therapy for treating depression.
A meta-analysis published in 2023, covering 48 studies of non-randomized couples therapies, identified factors influencing effectiveness, including the age of the partners, the length of the relationship, and the type of institution that provided the therapy.
Many research studies on couples therapy can be found in Family Process and the Journal of Marital and Family Therapy, both published by Wiley. Further studies can be found on ResearchGate, which shows considerable comparative research activity in Iran in 2024. A biannual newsletter summarizes recent publications, focusing on practical implications for couples therapy.
== Relationship counselor or couple's therapist ==
Licensed couples therapists may include psychiatrists, clinical social workers, counseling psychologists, clinical psychologists, pastoral counselors, marriage and family therapists, and psychiatric nurses. The role of a relationship counselor or couples therapist is to listen, respect, understand, and facilitate better functioning between those involved.
The basic principles for a counselor include:
Providing a confidential dialogue, which normalizes and validates feelings
Enabling each person to be heard and to hear themselves
Providing a mirror with expertise to reflect the relationship's difficulties and the potential and direction for change
Empowering the relationship to take control of its own destiny and make vital decisions
Delivering relevant and appropriate information
Changing the view of the relationship
Improving communication
Setting clear goals and objectives
As well as the above, the basic principles for a couples therapist also include:
Identifying the repetitive, negative interaction cycle as a pattern.
Understanding the source of reactive emotions that drive the pattern.
Expanding and re-organizing key emotional responses in the relationship.
Facilitating a shift in partners' interaction to new patterns of interaction.
Creating new and positively bonding emotional events in the relationship
Fostering a secure attachment between partners.
Helping maintain a sense of intimacy.
Common core principles of relationship counseling and couples therapy are:
Respect
Empathy
Tact
Consent
Confidentiality
Accountability
Expertise
Evidence-based practice
Certification and ongoing training
In both methods, the practitioner evaluates the couple's personal and relationship story as it is narrated, interrupts wisely, and facilitates both the de-escalation of unhelpful conflict and the development of realistic, practical solutions. The practitioner may meet each person individually at first, but only if this is beneficial to both, is consensual, and is unlikely to cause harm; individualistic approaches to couple problems can cause harm. The counselor or therapist encourages the participants to give their best efforts to reorienting their relationship with each other. One of the challenges here is for each person to change their own responses to their partner's behavior. Other challenges to the process are disclosing controversial or shameful events and revealing closely guarded secrets. Not all couples put all of their cards on the table at first. This can take time, and requires patience and commitment to repairing the relationship.
== Novel practices ==
A novel development involves introducing insights gained from affective neuroscience and psychopharmacology into clinical practice.
=== Oxytocin ===
There has been interest in using oxytocin during therapy sessions, although this is still largely experimental and somewhat controversial. Some researchers have argued oxytocin has a general enhancing effect on all social emotions, since intranasal administration of oxytocin also increases envy and Schadenfreude. Also, oxytocin has the potential for abuse in confidence tricks.
== Popularized methodologies ==
Although results are almost certainly better when professional guidance is used, numerous attempts have been made to make these methodologies generally available through self-help books and other media (see especially family therapy). In recent years, self-help books have become increasingly popular, published as e-books available on the web or through content articles on blogs and websites. The challenges for individuals using these methods are most commonly associated with those of other self-help therapies or self-diagnosis.
Using modern technologies such as Skype or other video-conferencing tools to interact with practitioners is also becoming increasingly popular because of the added accessibility and the removal of geographic barriers. However, concerns remain about the reliability and privacy of these technologies, despite their convenience, especially compared with in-person meetings.
== With gay and bisexual clients ==
Differing psychological theories play an important role in determining how effective relationship counseling is, especially when it concerns gay and bisexual clients. Some experts tout cognitive behavioral therapy as the tool of choice for intervention, while many rely on acceptance and commitment therapy or cognitive analytic therapy. One major advance in this area is that "marital therapy" is now referred to as "couples therapy" in order to include individuals who are not married or those who are engaged in same-sex relationships. Most relationship issues are shared equally among couples regardless of sexual orientation, but LGBT clients additionally have to deal with heteronormativity, homophobia, biphobia, and both socio-cultural and legal discrimination. Individuals may experience relational ambiguity from being in different stages of the coming out process or having an HIV serodiscordant relationship. Often, same-sex couples do not have as many role models of successful relationships as opposite-sex couples. In many jurisdictions, committed LGBT couples desiring a family are denied access to assisted reproduction, adoption and fostering, leaving them childless and feeling excluded, othered, and bereaved. There may be issues with gender role socialization that do not affect opposite-sex couples.
A significant number of men and women experience conflict surrounding homosexual expression within a mixed-orientation marriage. Couple therapy may include helping the clients feel more comfortable and accepting of same-sex feelings, and to explore ways of incorporating same-sex and opposite-sex feelings into life patterns. Although a strong gay identity was associated with difficulties in marital satisfaction, viewing same-sex activities as compulsive facilitated commitment to the marriage and to monogamy.
== See also ==
Counseling
Counseling psychology
Family therapy
Interpersonal psychotherapy
List of basic relationship topics
Relational disorder
Relationship education
Sex therapy
Social work
List of counseling topics
== References ==
In behavioral psychology, stimulus control is a phenomenon in operant conditioning that occurs when an organism behaves in one way in the presence of a given stimulus and another way in its absence. A stimulus that modifies behavior in this manner is either a discriminative stimulus or stimulus delta. For example, the presence of a stop sign at a traffic intersection alerts the driver to stop driving and increases the probability that braking behavior occurs. Stimulus control does not force behavior to occur, as it is a direct result of historical reinforcement contingencies, as opposed to reflexive behavior elicited through classical conditioning.
Some theorists believe that all behavior is under some form of stimulus control. For example, in the analysis of B. F. Skinner, verbal behavior is a complicated assortment of behaviors with a variety of controlling stimuli.
== Characteristics ==
The controlling effects of stimuli are seen in quite diverse situations and in many aspects of behavior. For example, a stimulus presented at one time may control responses emitted immediately or at a later time; two stimuli may control the same behavior; a single stimulus may trigger behavior A at one time and behavior B at another; a stimulus may control behavior only in the presence of another stimulus, and so on. These sorts of control are brought about by a variety of methods and they can explain many aspects of behavioral processes.
In simple, practical situations, for example if one were training a dog using operant conditioning, optimal stimulus control might be described as follows:
The behavior occurs immediately when the discriminative stimulus is given.
The behavior never occurs in the absence of the stimulus.
The behavior never occurs in response to some other stimulus.
No other behavior occurs in response to this stimulus.
== Establishing stimulus control through operant conditioning ==
=== Discrimination training ===
Operant stimulus control is typically established by discrimination training. For example, to make a light control a pigeon's pecks on a button, reinforcement only occurs following pecks made while the light is on. Over a series of trials the pecking response becomes more probable in the presence of the light and less probable in its absence, and the light is said to become a discriminative stimulus or SD. Virtually any stimulus that the animal can perceive may become a discriminative stimulus, and many different schedules of reinforcement may be used to establish stimulus control. For example, a green light might be associated with a VR 10 schedule and a red light with an FI 20-sec schedule, in which case the green light will control a higher rate of response than the red light.
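To make the contingency concrete, the following sketch (not from the source) simulates discrimination training in a toy model: pecks emitted while the light is on are reinforced, pecks in its absence are not, and the response tendency in each condition is updated accordingly. The update rule and learning rate are illustrative assumptions, not an established quantitative model of operant learning.

```python
import random

# Toy model of discrimination training (illustrative assumptions only). The
# tendency to peck is tracked separately for "light on" (the would-be SD) and
# "light off" conditions, and is moved toward 1 after a reinforced peck and
# toward 0 after an unreinforced peck (extinction).

LEARNING_RATE = 0.1                                  # assumed step size
peck_prob = {"light_on": 0.5, "light_off": 0.5}      # start indifferent

def trial(stimulus):
    """Run one trial; pecks are reinforced only when the light is on."""
    pecked = random.random() < peck_prob[stimulus]
    if pecked:
        reinforced = stimulus == "light_on"
        target = 1.0 if reinforced else 0.0
        peck_prob[stimulus] += LEARNING_RATE * (target - peck_prob[stimulus])

random.seed(0)
for _ in range(500):
    trial(random.choice(["light_on", "light_off"]))

# After training, pecking is likely with the light on and rare without it.
print(peck_prob)
```

In this sketch the light comes to control responding only because of the differential reinforcement history, which mirrors the definition of a discriminative stimulus given above.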
=== Generalization ===
After a discriminative stimulus is established, similar stimuli are found to evoke the controlled response. This is called stimulus generalization. As the stimulus becomes less and less similar to the original discriminative stimulus, response strength declines; measurements of the response thus describe a generalization gradient.
An experiment by Hanson (1959) provides an early, influential example of the many experiments that have explored the generalization phenomenon. First a group of pigeons was reinforced for pecking a disc illuminated by a light of 550 nm wavelength, and never reinforced otherwise. Reinforcement was then stopped, and a series of different wavelength lights was presented one at a time. The results showed a generalization gradient: the more the wavelength differed from the trained stimulus, the fewer responses were produced.
Many factors modulate the generalization process. One is illustrated by the remainder of Hanson's study, which examined the effects of discrimination training on the shape of the generalization gradient. Birds were reinforced for pecking at a 550 nm light, which looks yellowish-green to human observers. The birds were not reinforced when they saw a wavelength more toward the red end of the spectrum. Each of four groups saw a single unreinforced wavelength, either 555, 560, 570, or 590 nm, in addition to the reinforced 550 nm wavelength. The birds were then tested as before, with a range of unreinforced wavelengths. This procedure yielded sharper generalization gradients than did the simple generalization procedure used in the first experiment. In addition, however, Hanson's experiment showed a new phenomenon, called the "peak shift". That is, the peak of the test gradients shifted away from the SD, such that the birds responded more often to a wavelength they had never seen before than to the reinforced SD. An earlier theory involving inhibitory and excitatory gradients partially explained the results; a more detailed quantitative model of the effect was proposed by Blough (1975). Other theories have been proposed, including the idea that the peak shift is an example of relational control; that is, the discrimination was perceived as a choice between the "greener" of two stimuli, and when a still greener stimulus was offered the pigeons responded even more rapidly to that than to the originally reinforced stimulus.
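As a rough numerical illustration of the excitatory/inhibitory-gradient account of peak shift mentioned above, the sketch below subtracts an inhibitory gradient centered on an unreinforced wavelength (570 nm) from an excitatory gradient centered on the reinforced one (550 nm). The Gaussian shapes, heights, and widths are assumptions chosen only for illustration; they are not fitted to Hanson's data.

```python
import math

# Illustrative Gaussian gradients (assumed shapes, heights, and widths) for the
# excitatory/inhibitory account of peak shift; none of these numbers are data.
S_D, S_DELTA = 550, 570          # reinforced and unreinforced wavelengths (nm)

def gaussian(x, center, height, width):
    return height * math.exp(-((x - center) ** 2) / (2 * width ** 2))

def net_response(wavelength):
    excitation = gaussian(wavelength, S_D, height=1.0, width=15)
    inhibition = gaussian(wavelength, S_DELTA, height=0.6, width=15)
    return excitation - inhibition

wavelengths = range(500, 601)
peak = max(wavelengths, key=net_response)
# The net gradient peaks below 550 nm, i.e. shifted away from the S-delta.
print(peak)
```

The point of the toy model is simply that subtracting the two gradients moves the peak of responding away from the unreinforced stimulus, which is the qualitative pattern the earlier theory used to explain peak shift.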
== Matching to sample ==
In a typical matching-to-sample task, a stimulus is presented in one location (the "sample"), and the subject chooses a stimulus in another location that matches the sample in some way (e.g., shape or color). In the related "oddity" matching procedure, the subject responds to a comparison stimulus that does not match the sample. These are called "conditional" discrimination tasks because which stimulus is responded to depends or is "conditional" on the sample stimulus.
The matching-to-sample procedure has been used to study a very wide range of problems. Of particular note is the "delayed matching to sample" variation, which has often been used to study short-term memory in animals. In this variation, the subject is exposed to the sample stimulus, and then the sample is removed and a time interval, the "delay", elapses before the choice stimuli appear. To make a correct choice the subject has to retain information about the sample across the delay. The length of the delay, the nature of the stimuli, events during the delay, and many other factors have been found to influence performance on this task.
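A minimal sketch of a delayed matching-to-sample trial might look like the following. The exponential forgetting assumption and the decay rate are purely illustrative, chosen only to show how accuracy falls toward chance as the delay grows; they are not an empirical model of animal short-term memory.

```python
import math
import random

# Toy delayed matching-to-sample (DMTS) trial. The probability of retaining the
# sample is assumed to decay exponentially with the delay; the decay rate of
# 0.15 per second is an illustrative assumption, not an empirical estimate.

COLORS = ["red", "green"]

def dmts_trial(delay_s, decay_rate=0.15):
    sample = random.choice(COLORS)
    remembered = random.random() < math.exp(-decay_rate * delay_s)
    # If the sample is forgotten, the subject guesses between the comparisons.
    choice = sample if remembered else random.choice(COLORS)
    return choice == sample

random.seed(1)
for delay in (0, 2, 4, 8):
    accuracy = sum(dmts_trial(delay) for _ in range(10_000)) / 10_000
    print(f"delay {delay}s: accuracy {accuracy:.2f}")   # falls toward 0.5 (chance)
```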
== Cannabinoids ==
Psychoactive cannabinoids produce discriminative stimulus effects by stimulation of CB1 receptors in the brain.
== See also ==
Behavior therapy
Behaviorism
Motivating operation
Quantitative analysis of behavior
Signal detection
Self-control
== References ==
== Further reading ==
James E. Mazur (10 November 2016). Learning & Behavior: Eighth Edition. Taylor & Francis. ISBN 978-1-315-45026-1.
Nevin, J. A. (1965). "Decision theory in studies of discrimination in animals". Science. 150 (3699): 1057. Bibcode:1965Sci...150.1057N. doi:10.1126/science.150.3699.1057. PMID 5843623.
Nevin, J. A. (1969). "Signal detection theory and operant behavior". Journal of the Experimental Analysis of Behavior. 12 (3): 475–480. doi:10.1901/jeab.1969.12-475. PMC 1338610.
Staddon, J. E. R. (2001). Adaptive dynamics – The theoretical analysis of behavior. The MIT Press. London, England.
J. E. R. Staddon (7 March 2016). Adaptive Behavior and Learning. Cambridge University Press. ISBN 978-1-316-46776-3.
Alzheimer's disease (AD) is a neurodegenerative disease and the cause of 60–70% of cases of dementia. The most common early symptom is difficulty in remembering recent events. As the disease advances, symptoms can include problems with language, disorientation (including easily getting lost), mood swings, loss of motivation, self-neglect, and behavioral issues. As a person's condition declines, they often withdraw from family and society. Gradually, bodily functions are lost, ultimately leading to death. Although the speed of progression can vary, the average life expectancy following diagnosis is three to twelve years.
The causes of Alzheimer's disease remain poorly understood. There are many environmental and genetic risk factors associated with its development. The strongest genetic risk factor is from an allele of apolipoprotein E. Other risk factors include a history of head injury, clinical depression, and high blood pressure. The progression of the disease is largely characterised by the accumulation of malformed protein deposits in the cerebral cortex, called amyloid plaques and neurofibrillary tangles. These misfolded protein aggregates interfere with normal cell function, and over time lead to irreversible degeneration of neurons and loss of synaptic connections in the brain. A probable diagnosis is based on the history of the illness and cognitive testing, with medical imaging and blood tests to rule out other possible causes. Initial symptoms are often mistaken for normal brain aging. Examination of brain tissue is needed for a definite diagnosis, but this can only take place after death.
No treatments can stop or reverse its progression, though some may temporarily improve symptoms. A healthy diet, physical activity, and social engagement are generally beneficial in aging, and may help in reducing the risk of cognitive decline and Alzheimer's. Affected people become increasingly reliant on others for assistance, often placing a burden on caregivers. The pressures can include social, psychological, physical, and economic elements. Exercise programs may be beneficial with respect to activities of daily living and can potentially improve outcomes. Behavioral problems or psychosis due to dementia are sometimes treated with antipsychotics, but this has an increased risk of early death.
As of 2020, there were approximately 50 million people worldwide with Alzheimer's disease. It most often begins in people over 65 years of age, although up to 10% of cases are early-onset, affecting those in their 30s to mid-60s. It affects about 6% of people 65 years and older, and women more often than men. The disease is named after German psychiatrist and pathologist Alois Alzheimer, who first described it in 1906. Alzheimer's financial burden on society is large, with an estimated global annual cost of US$1 trillion. It is ranked as the seventh leading cause of death worldwide.
Given the widespread impacts of Alzheimer's disease, both basic-science and health funders in many countries support Alzheimer's research at large scales. For example, the US National Institutes of Health program for Alzheimer's research, the National Plan to Address Alzheimer's Disease, has a budget of US$3.98 billion for fiscal year 2026. In the European Union, the 2020 Horizon Europe research programme awarded over €570 million for dementia-related projects.
== Signs and symptoms ==
The course of Alzheimer's is generally described in three stages, with a progressive pattern of cognitive and functional impairment. The three stages are described as early or mild, middle or moderate, and late or severe. The disease is known to target the hippocampus, which is associated with memory, and this is responsible for the first symptoms of memory impairment. As the disease progresses, so does the degree of memory impairment.
=== First symptoms ===
The first symptoms are often mistakenly attributed to aging or stress. Detailed neuropsychological testing can reveal mild cognitive difficulties up to eight years before a person fulfills the clinical criteria for diagnosis of Alzheimer's disease. These early symptoms can affect the most complex activities of daily living. The most noticeable deficit is short term memory loss, which shows up as difficulty in remembering recently learned facts and inability to acquire new information.
Subtle problems with the executive functions of attentiveness, planning, flexibility, and abstract thinking, or impairments in semantic memory (memory of meanings, and concept relationships) can also be symptomatic of the early stages of Alzheimer's disease. Apathy and depression can be seen at this stage, with apathy remaining as the most persistent symptom throughout the course of the disease. People with objective signs of cognitive impairment, but not more severe symptoms, may be diagnosed with mild cognitive impairment (MCI). If memory loss is the predominant symptom of MCI, it is termed amnestic MCI and is frequently seen as a prodromal or early stage of Alzheimer's disease. Amnestic MCI has a greater than 90% likelihood of being associated with Alzheimer's.
=== Early stage ===
In people with Alzheimer's disease, the increasing impairment of learning and memory eventually leads to a definitive diagnosis. In a small percentage, difficulties with language, executive functions, perception (agnosia), or execution of movements (apraxia) are more prominent than memory problems. Alzheimer's disease does not affect all memory capacities equally. Older memories of the person's life (episodic memory), facts learned (semantic memory), and implicit memory (the memory of the body on how to do things, such as using a fork to eat or how to drink from a glass) are affected to a lesser degree than new facts or memories.
Language problems are mainly characterised by a shrinking vocabulary and decreased word fluency, leading to a general impoverishment of oral and written language. In this stage, the person with Alzheimer's is usually capable of communicating basic ideas adequately. While performing fine motor tasks such as writing, drawing, or dressing, certain movement coordination and planning difficulties (apraxia) may be present; however, they are commonly unnoticed. As the disease progresses, people with Alzheimer's disease can often continue to perform many tasks independently; however, they may need assistance or supervision with the most cognitively demanding activities.
=== Middle stage ===
Progressive deterioration eventually hinders independence, with subjects being unable to perform most common activities of daily living. Speech difficulties become evident due to an inability to recall vocabulary, which leads to frequent incorrect word substitutions (paraphasias). Reading and writing skills are also progressively lost. Complex motor sequences become less coordinated as time passes and Alzheimer's disease progresses, so the risk of falling increases. During this phase, memory problems worsen, and the person may fail to recognise close relatives. Long-term memory, which was previously intact, becomes impaired.
Behavioral and neuropsychiatric changes become more prevalent. Common manifestations are wandering, irritability and emotional lability, leading to crying, outbursts of unpremeditated aggression, or resistance to caregiving. Sundowning can also appear. Approximately 30% of people with Alzheimer's disease develop illusory misidentifications and other delusional symptoms. Subjects also lose insight into their disease process and limitations (anosognosia). Urinary incontinence can develop. These symptoms create stress for relatives and caregivers, which can be reduced by moving the person from home care to other long-term care facilities.
=== Late stage ===
During the final stage, known as the late-stage or severe stage, there is complete dependence on caregivers. Language is reduced to simple phrases or even single words, eventually leading to complete loss of speech. Despite the loss of verbal language abilities, people can often understand and return emotional signals. Although aggressiveness can still be present, extreme apathy and exhaustion are much more common symptoms. People with Alzheimer's disease will ultimately not be able to perform even the simplest tasks independently; muscle mass and mobility deteriorate to the point where they are bedridden and unable to feed themselves. The cause of death is usually an external factor, such as infection of pressure ulcers or pneumonia, not the disease itself. In some cases, there is a paradoxical lucidity immediately before death, where there is an unexpected recovery of mental clarity.
== Causes ==
Alzheimer's disease is believed to occur when abnormal amounts of amyloid beta (Aβ), accumulating extracellularly as amyloid plaques, and tau proteins, accumulating intracellularly as neurofibrillary tangles, form in the brain, affecting neuronal functioning and connectivity and resulting in a progressive loss of brain function. This altered protein clearance ability is age-related, regulated by brain cholesterol, and associated with other neurodegenerative diseases.
The cause for most Alzheimer's cases is still mostly unknown, except for 1–2% of cases where deterministic genetic differences have been identified. Several competing hypotheses attempt to explain the underlying cause; the most predominant hypothesis is the amyloid beta (Aβ) hypothesis.
In the 1970s, the cholinergic hypothesis proposed that Alzheimer's disease is caused by reduced synthesis of the neurotransmitter acetylcholine. The loss of cholinergic neurons, noted in the limbic system and cerebral cortex, is a key feature in the progression of Alzheimer's. The 1991 amyloid hypothesis postulated that extracellular amyloid beta (Aβ) deposits are the fundamental cause of the disease. Support for this postulate comes from the location of the gene for the amyloid precursor protein (APP) on chromosome 21, together with the fact that people with trisomy 21 (Down syndrome), who have an extra gene copy, almost universally exhibit at least the earliest symptoms of Alzheimer's disease by 40 years of age. A specific isoform of apolipoprotein, APOE4, is a major genetic risk factor for Alzheimer's disease. While apolipoproteins enhance the breakdown of amyloid beta, some isoforms are not very effective at this task (such as APOE4), leading to excess amyloid buildup in the brain.
=== Genetic ===
==== Late onset ====
Late-onset Alzheimer's is about 70% heritable. Most cases of Alzheimer's are not familial, and so they are termed sporadic Alzheimer's disease. Of the cases of sporadic Alzheimer's disease, most are classified as late onset where they are developed after the age of 65 years.
The strongest genetic risk factor for sporadic Alzheimer's disease is APOEε4. APOEε4 is one of four alleles of apolipoprotein E (APOE). APOE is a major lipid-binding protein in lipoprotein particles, and the ε4 allele disrupts this function. Between 40% and 80% of people with Alzheimer's disease possess at least one APOEε4 allele. The APOEε4 allele increases the risk of the disease by three times in heterozygotes and by 15 times in homozygotes. As with many human diseases, environmental effects and genetic modifiers result in incomplete penetrance. For example, Nigerian Yoruba people do not show the relationship between dose of APOEε4 and incidence or age-of-onset for Alzheimer's disease seen in other human populations.
==== Early onset ====
Only 1–2% of Alzheimer's cases are inherited due to autosomal dominant effects, as Alzheimer's is highly polygenic. When the disease is caused by autosomal dominant variants, it is known as early onset familial Alzheimer's disease, which is rarer and has a faster rate of progression. Less than 5% of sporadic Alzheimer's disease cases have an earlier onset, and early-onset Alzheimer's is about 90% heritable. Familial Alzheimer's disease usually implies two or more persons affected in one or more generations.
Early onset familial Alzheimer's disease can be attributed to mutations in one of three genes: those encoding amyloid-beta precursor protein (APP) and presenilins PSEN1 and PSEN2. Most mutations in the APP and presenilin genes increase the production of a small protein called amyloid beta (Aβ)42, which is the main component of amyloid plaques. Some of the mutations merely alter the ratio between Aβ42 and the other major forms—particularly Aβ40—without increasing Aβ42 levels in the brain. Two other genes associated with autosomal dominant Alzheimer's disease are ABCA7 and SORL1.
Alleles in the TREM2 gene have been associated with a three to five times higher risk of developing Alzheimer's disease.
A Japanese pedigree of familial Alzheimer's disease was found to be associated with a deletion mutation of codon 693 of APP. This mutation and its association with Alzheimer's disease was first reported in 2008, and is known as the Osaka mutation. Only homozygotes with this mutation have an increased risk of developing Alzheimer's disease. This mutation accelerates Aβ oligomerization but the proteins do not form the amyloid fibrils that aggregate into amyloid plaques, suggesting that it is the Aβ oligomerization rather than the fibrils that may be the cause of this disease. Mice expressing this mutation have all the usual pathologies of Alzheimer's disease.
=== Hypotheses ===
==== Amyloid beta and tau protein ====
The tau hypothesis proposes that tau protein abnormalities initiate the disease cascade. In this model, hyperphosphorylated tau begins to pair with other threads of tau as paired helical filaments. Eventually, they form neurofibrillary tangles inside neurons. When this occurs, the microtubules disintegrate, destroying the structure of the cell's cytoskeleton which collapses the neuron's transport system.
A number of studies link the misfolded amyloid beta and tau proteins associated with the pathology of Alzheimer's disease to oxidative stress that leads to neuroinflammation. This chronic inflammation is also a feature of other neurodegenerative diseases, including Parkinson's disease and ALS. Spirochete infections have also been linked to dementia. DNA damage accumulates in Alzheimer's diseased brains; reactive oxygen species may be the major source of this DNA damage.
==== Sleep ====
Sleep disturbances are seen as a possible risk factor for inflammation in Alzheimer's disease. Sleep disruption was previously only seen as a consequence of Alzheimer's disease, but as of 2020, accumulating evidence suggests that this relationship may be bidirectional.
==== Metal toxicity, smoking, neuroinflammation and air pollution ====
The cellular homeostasis of biometals such as ionic copper, iron, and zinc is disrupted in Alzheimer's disease, though it remains unclear whether this is produced by or causes the changes in proteins. Smoking is a significant Alzheimer's disease risk factor. Systemic markers of the innate immune system are risk factors for late-onset Alzheimer's disease. Exposure to air pollution may be a contributing factor to the development of Alzheimer's disease.
==== Age-related myelin decline ====
Retrogenesis is a medical hypothesis that just as the fetus goes through a process of neurodevelopment beginning with neurulation and ending with myelination, the brains of people with Alzheimer's disease go through a reverse neurodegeneration process starting with demyelination and death of axons (white matter) and ending with the death of grey matter. Likewise, the hypothesis holds that as infants go through stages of cognitive development, people with Alzheimer's disease go through the reverse process of progressive cognitive impairment.
According to one theory, dysfunction of oligodendrocytes and their associated myelin during aging contributes to axon damage, which in turn leads to amyloid production and tau hyperphosphorylation. Comorbidity between the demyelinating disease multiple sclerosis and Alzheimer's disease has been reported.
==== Other hypotheses ====
The association with celiac disease is unclear, with a 2019 study finding no increase in dementia overall in those with celiac disease while a 2018 review found an association with several types of dementia including Alzheimer's disease.
Studies have shown a potential link between infection with certain viruses and developing Alzheimer's disease later in life. Notably, a large-scale study conducted on 6,245,282 patients has shown an increased risk of developing Alzheimer's disease following COVID-19 infection in cognitively normal individuals over 65.
Some evidence suggests that some viral infections such as Herpes simplex virus 1 (HSV-1) may be associated with dementia, but there are conflicting results and the association with Alzheimer's is unclear as of 2024.
Some researchers have proposed that Alzheimer's disease is Type 3 diabetes because of a number of correspondences with both Type 1 and Type 2 diabetes.
== Pathophysiology ==
=== Neuropathology ===
Alzheimer's disease is characterised by loss of neurons and synapses in the cerebral cortex and certain subcortical regions. This loss results in gross atrophy of the affected regions, including degeneration in the temporal lobe and parietal lobe, and parts of the frontal cortex and cingulate gyrus. Degeneration is also present in brainstem nuclei, particularly the locus coeruleus in the pons. Studies using MRI and PET have documented reductions in the size of specific brain regions in people with Alzheimer's disease as they progressed from mild cognitive impairment to Alzheimer's disease, and in comparison with similar images from healthy older adults.
Both Aβ plaques and neurofibrillary tangles are clearly visible by microscopy in the brains of those with Alzheimer's disease, especially in the hippocampus. However, Alzheimer's disease may occur without neurofibrillary tangles in the neocortex. Plaques are dense, mostly insoluble deposits of amyloid beta peptide and cellular material outside and around neurons. Neurofibrillary tangles are aggregates of the microtubule-associated protein tau which has become hyperphosphorylated and accumulates inside the cells themselves. Although many older individuals develop some plaques and tangles as a consequence of aging, the brains of people with Alzheimer's disease have a greater number of them in specific brain regions such as the temporal lobe. Lewy bodies are not rare in the brains of people with Alzheimer's disease.
=== Biochemistry ===
==== Amyloid beta ====
Alzheimer's disease has been identified as a protein misfolding disease, a proteopathy, caused by the accumulation of abnormally folded amyloid beta protein into amyloid plaques, and tau protein into neurofibrillary tangles in the brain. Plaques are made up of small peptides, 39–43 amino acids in length, called amyloid beta. Amyloid beta is a fragment from the larger amyloid-beta precursor protein (APP), a transmembrane protein that penetrates the cell's membrane. APP is critical to neuron growth, survival, and post-injury repair. In Alzheimer's disease, gamma secretase and beta secretase act together in a proteolytic process which causes APP to be divided into smaller fragments. Although commonly researched as neuronal proteins, APP and its processing enzymes are abundantly expressed by other brain cells. One of these fragments gives rise to fibrils of amyloid beta, which then form clumps that deposit outside neurons in dense formations known as amyloid plaques. Excitatory neurons are known to be the major producers of amyloid beta that contribute to major extracellular plaque deposition.
==== Phosphorylated tau ====
Alzheimer's disease is also considered a tauopathy due to abnormal aggregation of the tau protein. Every neuron has a cytoskeleton, an internal support structure partly made up of structures called microtubules. These microtubules act like tracks, guiding nutrients and molecules from the body of the cell to the ends of the axon and back. A protein called tau stabilises the microtubules when phosphorylated, and is therefore called a microtubule-associated protein. In Alzheimer's disease, tau undergoes chemical changes, becoming hyperphosphorylated; it then begins to pair with other threads, creating neurofibrillary tangles and disintegrating the neuron's transport system. Pathogenic tau can also cause neuronal death through transposable element dysregulation. Necroptosis has also been reported as a mechanism of cell death in brain cells affected with tau tangles.
=== Disease mechanism ===
Exactly how disturbances of production and aggregation of the beta-amyloid peptide give rise to the pathology of Alzheimer's disease is not known. The amyloid hypothesis traditionally points to the accumulation of beta-amyloid peptides as the central event triggering neuron degeneration. Accumulation of aggregated amyloid fibrils, which are believed to be the toxic form of the protein responsible for disrupting the cell's calcium ion homeostasis, induces programmed cell death (apoptosis). It is also known that Aβ selectively builds up in the mitochondria in the cells of Alzheimer's-affected brains, and it also inhibits certain enzyme functions and the utilisation of glucose by neurons.
Evidence supports Aβ as playing a central role in the pathogenesis of AD, but it does not completely explain the condition, as individuals may have normal cognition and very high Aβ burden in their brains at an advanced age, and the beneficial effect of therapeutics (such as monoclonal antibodies) promoting Aβ clearance has ranged from nonexistent to modest.
Iron dyshomeostasis is linked to disease progression; an iron-dependent form of regulated cell death called ferroptosis could be involved. Products of lipid peroxidation are also elevated in AD brains compared with controls.
Various inflammatory processes and cytokines may also have a role in the pathology of Alzheimer's disease. Inflammation is a general marker of tissue damage in any disease, and may be either secondary to tissue damage in Alzheimer's disease or a marker of an immunological response. There is increasing evidence of a strong interaction between the neurons and the immunological mechanisms in the brain. Obesity and systemic inflammation may interfere with immunological processes which promote disease progression.
Alterations in the distribution of different neurotrophic factors and in the expression of their receptors such as the brain-derived neurotrophic factor (BDNF) have been described in Alzheimer's disease.
Evidence has accrued for microglia as central actors in the mechanism of AD. Microglia are topographically associated with pTau and Aβ within the brain, even when each pathologic component occurs in distinct brain regions, and microglial activation has been documented in those with mild cognitive impairment, despite a lack of tracer uptake, suggesting that microglial dysfunction may precede plaque deposition as an inciting event in AD. Microglia are the principal immunological cells of the central nervous system, serving as the tissue-resident macrophages of the brain; they are capable of recognizing and taking up Aβ through multiple pattern recognition receptors, making them central to amyloid clearance within the brain. However, microglia can also be a major source of pro-inflammatory mediators which can be deleterious to neurological function.
== Diagnosis ==
Alzheimer's disease (AD) can only be definitively diagnosed with autopsy findings; in the absence of autopsy, clinical diagnoses of AD are "possible" or "probable", based on other findings. Up to 23% of those clinically diagnosed with AD may be misdiagnosed and may have pathology suggestive of another condition with symptoms that mimic those of AD.
AD is usually clinically diagnosed based on a person's medical history, observations from friends or relatives, and behavioral changes. The presence of characteristic neuropsychological changes with impairments in at least two cognitive domains that are severe enough to affect a person's functional abilities are required for the diagnosis. Domains that may be impaired include memory (most commonly impaired), language, executive function, visuospatial functioning, or other areas of cognition. The neurocognitive changes must be a decline from a prior level of function and the diagnosis requires ruling out other common causes of neurocognitive decline. Advanced medical imaging with computed tomography (CT) or magnetic resonance imaging (MRI), and with single-photon emission computed tomography (SPECT) or positron emission tomography (PET), can be used to help exclude other cerebral pathology or subtypes of dementia. On MRI or CT, Alzheimer's disease usually shows a generalised or focal cortical atrophy, which may be asymmetric. Atrophy of the hippocampus is also commonly seen. Brain imaging commonly also shows cerebrovascular disease, most commonly previous strokes (small or large territory strokes), and this is thought to be a contributing cause of many cases of dementia (up to 46% cases of dementia also have cerebrovascular disease on imaging). FDG-PET scan is not required for the diagnosis but it is sometimes used when standard testing is unclear. FDG-PET shows a bilateral, asymmetric, temporal and parietal reduced activity. Advanced imaging may predict conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease. FDA-approved radiopharmaceutical diagnostic agents used in PET for Alzheimer's disease are florbetapir (2012), flutemetamol (2013), florbetaben (2014), and flortaucipir (2020). Because many insurance companies in the United States do not cover this procedure, its use in clinical practice is largely limited to clinical trials as of 2018.
Assessment of intellectual functioning including memory testing can further characterise the state of the disease. Medical organizations have created diagnostic criteria to ease and standardise the diagnostic process for practising physicians. Definitive diagnosis can only be confirmed with post-mortem evaluations when brain material is available and can be examined histologically for senile plaques and neurofibrillary tangles.
=== Criteria ===
There are three sets of criteria for the clinical diagnoses of the spectrum of Alzheimer's disease: the 2013 fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5); the National Institute on Aging-Alzheimer's Association (NIA-AA) definition as revised in 2011; and the International Working Group criteria as revised in 2010.
Eight intellectual domains are most commonly impaired in AD—memory, language, perceptual skills, attention, motor skills, orientation, problem solving and executive functional abilities, as listed in the fourth text revision of the DSM (DSM-IV-TR).
The DSM-5 defines criteria for probable or possible AD for both major and mild neurocognitive disorder. Major or mild neurocognitive disorder must be present along with at least one cognitive deficit for a diagnosis of either probable or possible AD. For major neurocognitive disorder due to AD, probable Alzheimer's disease can be diagnosed if the individual has genetic evidence of AD or if two or more acquired cognitive deficits, and a functional disability that is not from another disorder, are present. Otherwise, possible AD can be diagnosed as the diagnosis follows an atypical route. For mild neurocognitive disorder due to AD, probable Alzheimer's disease can be diagnosed if there is genetic evidence, whereas possible AD can be met if all of the following are present: no genetic evidence, decline in both learning and memory, two or more cognitive deficits, and a functional disability not from another disorder.
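The branching in this DSM-5 summary is easier to follow when written out schematically. The sketch below is a minimal, purely illustrative Python rendering of the prose above, not a clinical instrument; the function name and parameters are invented simplifications rather than DSM-5 terminology.

```python
def dsm5_ad_category(severity: str,
                     genetic_evidence: bool,
                     cognitive_deficits: int,
                     decline_in_learning_and_memory: bool,
                     functional_disability_not_otherwise_explained: bool):
    """Schematic paraphrase of the summary above; not a diagnostic tool."""
    # At least one cognitive deficit is required for either probable or possible AD.
    if cognitive_deficits < 1:
        return None
    if severity == "major":
        if genetic_evidence or (cognitive_deficits >= 2
                                and functional_disability_not_otherwise_explained):
            return "probable AD"
        return "possible AD"  # the diagnosis follows an atypical route
    if severity == "mild":
        if genetic_evidence:
            return "probable AD"
        if (cognitive_deficits >= 2 and decline_in_learning_and_memory
                and functional_disability_not_otherwise_explained):
            return "possible AD"
    return None
```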
The NIA-AA criteria are used mainly in research rather than in clinical assessments. They define AD through three major stages: preclinical, mild cognitive impairment (MCI), and Alzheimer's dementia. Diagnosis in the preclinical stage is complex and focuses on asymptomatic individuals; the latter two stages describe individuals experiencing symptoms, along with biomarkers, predominantly those for neuronal injury (mainly tau-related) and amyloid beta deposition. The core clinical criteria rest on the presence of cognitive impairment without the presence of comorbidities. The third stage is divided into probable and possible AD dementia. In probable AD dementia there is steady impairment of cognition over time and a memory-related or non-memory-related cognitive dysfunction. In possible AD dementia, another causal disease such as cerebrovascular disease is present.
=== Techniques ===
Neuropsychological tests including cognitive tests such as the mini–mental state examination (MMSE), the Montreal Cognitive Assessment (MoCA) and the Mini-Cog are widely used to aid in diagnosis of the cognitive impairments in AD. These tests may not always be accurate, as they lack sensitivity to mild cognitive impairment, and can be biased by language or attention problems; more comprehensive test arrays are necessary for high reliability of results, particularly in the earliest stages of the disease.
Further neurological examinations are crucial in the differential diagnosis of Alzheimer's disease and other diseases. Interviews with family members are used in assessment; caregivers can supply important information on daily living abilities and on the decrease in the person's mental function. A caregiver's viewpoint is particularly important, since a person with Alzheimer's disease is commonly unaware of their deficits. Many times, families have difficulties in the detection of initial dementia symptoms and may not communicate accurate information to a physician.
Supplemental testing can rule out other potentially treatable diagnoses and help avoid misdiagnoses. Common supplemental tests include blood tests, thyroid function tests, as well as tests to assess vitamin B12 levels, rule out neurosyphilis and rule out metabolic problems (including tests for kidney function, electrolyte levels and for diabetes). MRI or CT scans might also be used to rule out other potential causes of the symptoms – including tumors or strokes. Delirium and depression can be common among individuals and are important to rule out.
Psychological tests for depression are used, since depression can either be concurrent with AD (see Depression of Alzheimer disease), an early sign of cognitive impairment, or even the cause.
Due to low accuracy, the C-PIB-PET scan is not recommended as an early diagnostic tool or for predicting the development of AD when people show signs of mild cognitive impairment (MCI). The use of 18F-FDG PET scans, as a single test, to identify people who may develop Alzheimer's disease is not supported by evidence.
In May 2025, the US FDA approved Fujirebio Diagnostics' Lumipulse G pTau217/β-Amyloid 1-42 Plasma Ratio, a blood-based diagnostic device for the early detection of amyloid plaques associated with AD in adults aged 55 years and older who are exhibiting signs and symptoms of the disease.
== Prevention ==
There are no disease-modifying treatments available to cure Alzheimer's disease and because of this, AD research has focused on interventions to prevent the onset and progression. There is no evidence that supports any particular measure in preventing AD, and studies of measures to prevent the onset or progression have produced inconsistent results. Epidemiological studies have proposed relationships between an individual's likelihood of developing AD and modifiable factors, such as medications, lifestyle, and diet. There are some challenges in determining whether interventions for AD act as a primary prevention method, preventing the disease itself, or a secondary prevention method, identifying the early stages of the disease. These challenges include duration of intervention, different stages of disease at which intervention begins, and lack of standardization of inclusion criteria regarding biomarkers specific for AD. Further research is needed to determine factors that can help prevent AD.
=== Medication ===
Cardiovascular risk factors, such as hypercholesterolaemia, hypertension, diabetes, and smoking, are associated with a higher risk of onset and worsened course of AD. The use of statins to lower cholesterol may be of benefit in AD. Antihypertensive and antidiabetic medications in individuals without overt cognitive impairment may decrease the risk of dementia by influencing cerebrovascular pathology. More research is needed to examine the relationship with AD specifically; clarification of the direct role medications play versus other concurrent lifestyle changes (diet, exercise, smoking) is needed.
Depression is associated with an increased risk for AD; management with antidepressant medications may provide a preventative measure.
Historically, long-term usage of non-steroidal anti-inflammatory drugs (NSAIDs) was thought to be associated with a reduced likelihood of developing AD because it reduces inflammation, but NSAIDs do not appear to be useful as a treatment. Additionally, because women have a higher incidence of AD than men, it was once thought that estrogen deficiency during menopause was a risk factor, but there is a lack of evidence to show that hormone replacement therapy (HRT) in menopause decreases risk of cognitive decline.
=== Lifestyle ===
Certain lifestyle factors, such as physical and cognitive exercises, higher education and occupational attainment, cigarette smoking, stress, sleep, and the management of other comorbidities, including diabetes and hypertension, may affect the risk of developing AD.
Physical exercise is associated with a decreased rate of dementia, and is effective in reducing symptom severity in those with AD. Memory and cognitive functions can be improved with aerobic exercise, including brisk walking three times weekly for forty minutes. It may also induce neuroplasticity of the brain. Participating in mental exercises, such as reading, crossword puzzles, and chess, has shown potential to be preventive. Meeting the WHO recommendations for physical activity is associated with a lower risk of AD.
Higher education and occupational attainment, and participation in leisure activities, contribute to a reduced risk of developing AD, or of delaying the onset of symptoms. This is compatible with the cognitive reserve theory, which states that some life experiences result in more efficient neural functioning providing the individual a cognitive reserve that delays the onset of dementia manifestations. Education delays the onset of Alzheimer's disease syndrome without changing the duration of the disease.
Smoking cessation may reduce the risk of developing AD, specifically in those who carry the APOE ɛ4 allele. The increased oxidative stress caused by smoking results in downstream inflammatory or neurodegenerative processes that may increase the risk of developing AD. Avoidance of smoking, counseling and pharmacotherapies to quit smoking are used, and avoidance of environmental tobacco smoke is recommended.
Alzheimer's disease is associated with sleep disorders, but the precise relationship is unclear. It was once thought that as people get older, the risk of developing sleep disorders and AD independently increases, but research suggests sleep disorders may be a risk factor for AD. One theory is that the mechanisms to increase clearance of toxic substances, including Aβ, are active during sleep. With decreased sleep, Aβ production increases and Aβ clearance decreases, resulting in Aβ accumulation. Receiving adequate sleep (approximately 7–8 hours) every night has become a potential lifestyle intervention to prevent the development of AD.
Stress is a risk factor for the development of AD. The mechanism by which stress predisposes someone to the development of AD is unclear, but it is suggested that lifetime stressors may affect a person's epigenome, leading to an overexpression or underexpression of specific genes. Although the relationship between stress and AD is unclear, strategies to reduce stress and relax the mind may be helpful in preventing the progression of Alzheimer's disease. Meditation, for instance, is a helpful lifestyle change to support cognition and well-being, though further research is needed to assess long-term effects.
== Management ==
There is no cure for AD; available treatments offer relatively small symptomatic benefits but remain palliative in nature. Treatments can be divided into pharmaceutical, psychosocial, and caregiving.
=== Pharmaceutical ===
Medications used to treat the cognitive symptoms of AD rather than the underlying cause include: four acetylcholinesterase inhibitors (tacrine, rivastigmine, galantamine, and donepezil) and memantine, an NMDA receptor antagonist. The acetylcholinesterase inhibitors are intended for those with mild to severe AD, whereas memantine is intended for those with moderate or severe Alzheimer's disease. The benefit from their use is small.
Reduction in the activity of the cholinergic neurons is a well-known feature of AD. Acetylcholinesterase inhibitors are employed to reduce the rate at which acetylcholine (ACh) is broken down, thereby increasing the concentration of ACh in the brain and combating the loss of ACh caused by the death of cholinergic neurons. There is evidence for the efficacy of these medications in mild to moderate AD, and some evidence for their use in the advanced stage. The use of these drugs in mild cognitive impairment has not shown any effect in a delay of the onset of Alzheimer's disease. The most common side effects are nausea and vomiting, both of which are linked to cholinergic excess. These side effects arise in approximately 10–20% of users, are mild to moderate in severity, and can be managed by slowly adjusting medication doses. Less common secondary effects include muscle cramps, decreased heart rate (bradycardia), decreased appetite and weight, and increased gastric acid production.
Glutamate is an excitatory neurotransmitter of the nervous system, although excessive amounts in the brain can lead to cell death through a process called excitotoxicity which consists of the overstimulation of glutamate receptors. Excitotoxicity occurs not only in AD, but also in other neurological diseases such as Parkinson's disease and multiple sclerosis. Memantine is a noncompetitive NMDA receptor antagonist first used as an anti-influenza agent. It acts on the glutamatergic system by blocking NMDA receptors and inhibiting their overstimulation by glutamate. Memantine has been shown to have a small benefit in the treatment of moderate to severe AD. Reported adverse events with memantine are infrequent and mild, including hallucinations, confusion, dizziness, headache and fatigue. The combination of memantine and donepezil has been shown to be "of statistically significant but clinically marginal effectiveness".
An extract of Ginkgo biloba known as EGb 761 has been used for treating AD and other neuropsychiatric disorders. Its use is approved throughout Europe. The World Federation of Biological Psychiatry guidelines list EGb 761 with the same weight of evidence (level B) given to acetylcholinesterase inhibitors and memantine. Of these agents, EGb 761 is the only one that has shown improvement of symptoms in both AD and vascular dementia. EGb 761 may have a role either on its own or as an add-on if other therapies prove ineffective. A 2016 review concluded that the quality of evidence from clinical trials on Ginkgo biloba has been insufficient to warrant its use for treating AD.
Atypical antipsychotics are modestly useful in reducing aggression and psychosis in people with AD, but their advantages are offset by serious adverse effects, such as stroke, movement difficulties or cognitive decline. When used in the long term, they have been shown to be associated with increased mortality. They are recommended in dementia only after first-line therapies such as behavior modification have failed, and due to the risk of adverse effects, they should be used for the shortest amount of time possible. Stopping antipsychotic use in this group of people appears to be safe.
=== Psychosocial ===
Psychosocial interventions are used as an adjunct to pharmaceutical treatment and can be classified within behavior-, emotion-, cognition- or stimulation-oriented approaches.
Behavioral interventions attempt to identify and reduce the antecedents and consequences of problem behaviors. This approach has not shown success in improving overall functioning, but can help to reduce some specific problem behaviors, such as incontinence. There is a lack of high quality data on the effectiveness of these techniques in other behavior problems such as wandering. Music therapy is effective in reducing behavioral and psychological symptoms.
Emotion-oriented interventions include reminiscence therapy, validation therapy, supportive psychotherapy, sensory integration, also called snoezelen, and simulated presence therapy. A Cochrane review has found no evidence that this is effective. Reminiscence therapy (RT) involves the discussion of past experiences individually or in group, many times with the aid of photographs, household items, music and sound recordings, or other familiar items from the past. A 2018 review of the effectiveness of RT found that effects were inconsistent, small in size and of doubtful clinical significance, and varied by setting. Simulated presence therapy (SPT) is based on attachment theories and involves playing a recording with voices of the closest relatives of the person with AD. There is partial evidence indicating that SPT may reduce challenging behaviors.
The aim of cognition-oriented treatments, which include reality orientation and cognitive retraining, is the reduction of cognitive deficits. Reality orientation consists of the presentation of information about time, place, or person to ease the person's understanding of their surroundings and their place in them. Cognitive retraining, on the other hand, tries to improve impaired capacities by exercising mental abilities. Both have shown some efficacy in improving cognitive capacities.
Stimulation-oriented treatments include art, music and pet therapies, exercise, and any other kind of recreational activities. Stimulation has modest support for improving behavior, mood, and, to a lesser extent, function. Nevertheless, as important as these effects are, the main support for the use of stimulation therapies is the change in the person's routine.
=== Caregiving ===
Since AD has no cure and it gradually renders people incapable of tending to their own needs, caregiving is essentially the treatment and must be carefully managed over the course of the disease.
During the early and moderate stages, modifications to the living environment and lifestyle can increase safety and reduce caretaker burden. Examples of such modifications are the adherence to simplified routines, the placing of safety locks, the labeling of household items to cue the person with the disease or the use of modified daily life objects. If eating becomes problematic, food will need to be prepared in smaller pieces or even puréed. When swallowing difficulties arise, the use of feeding tubes may be required. In such cases, the medical efficacy and ethics of continuing feeding is an important consideration of the caregivers and family members. The use of physical restraints is rarely indicated in any stage of the disease, although there are situations when they are necessary to prevent harm to the person with Alzheimer's disease or their caregivers.
During the final stages of the disease, treatment is centred on relieving discomfort until death, often with the help of hospice.
=== Diet ===
Diet may be a modifiable risk factor for the development of Alzheimer's disease, but more research needs to be conducted. The Mediterranean diet and the DASH diet are both associated with less cognitive decline. A different approach has been to incorporate elements of both of these diets into one known as the MIND diet.
Results from large-scale epidemiological studies and clinical trials have not demonstrated an independent role for most individual dietary components.
== Prognosis ==
The early stages of AD are difficult to diagnose. A definitive diagnosis is usually made once cognitive impairment compromises daily living activities, although the person may still be living independently. The symptoms progress from mild cognitive problems, such as memory loss, through increasing stages of cognitive and non-cognitive disturbances, eliminating any possibility of independent living, especially in the late stages of the disease.
Life expectancy of people with AD is reduced. The normal life expectancy for a 60- to 70-year-old is 23 to 15 years; for a 90-year-old it is 4.5 years. Following AD diagnosis it ranges from 7 to 10 years for those diagnosed in their 60s and early 70s (a loss of 13 to 8 years), to only about 3 years or less (a loss of 1.5 years) for those diagnosed in their 90s.
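Reading the ranges as paired by age (an assumption, since the sentence gives only the endpoints of each range), the parenthetical losses follow by simple subtraction:

```latex
23 - 10 = 13 \text{ years (diagnosed around 60)}, \qquad
15 - 7 = 8 \text{ years (diagnosed in the early 70s)}, \qquad
4.5 - 3 = 1.5 \text{ years (diagnosed in the 90s)}
```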
Fewer than 3% of people live more than fourteen years after diagnosis. Disease features significantly associated with reduced survival are an increased severity of cognitive impairment, decreased functional level, disturbances in the neurological examination, history of falls, malnutrition, dehydration and weight loss. Other coincident diseases such as heart problems, diabetes, or history of alcohol abuse are also related to shortened survival. While an earlier age at onset means more total survival years, life expectancy is particularly reduced, compared with the healthy population, among those who are younger. Men have a less favourable survival prognosis than women.
Aspiration pneumonia is the most frequent immediate cause of death brought about by AD. While the reasons behind the lower prevalence of cancer in AD patients remain unclear, some researchers hypothesize that biological mechanisms shared by both diseases might play a role. However, this requires further investigation.
== Epidemiology ==
Two main measures are used in epidemiological studies: incidence and prevalence. Incidence is the number of new cases per unit of person-time at risk (usually number of new cases per thousand person-years); while prevalence is the total number of cases of the disease in the population at any given time.
Regarding incidence, cohort longitudinal studies (studies where a disease-free population is followed over the years) provide rates between 10 and 15 per thousand person-years for all dementias and 5–8 for AD, which means that half of new dementia cases each year are Alzheimer's disease. Advancing age is a primary risk factor for the disease and incidence rates are not equal for all ages: every 5 years after the age of 65, the risk of acquiring the disease approximately doubles, increasing from 3 to as much as 69 per thousand person-years. More females than males have AD, but this difference is likely due to women's longer life spans. When adjusted for age, both sexes are affected by Alzheimer's at equal rates. In the United States, the risk of dying from AD in 2010 was 26% higher among the non-Hispanic white population than among the non-Hispanic black population, and the Hispanic population had a 30% lower risk than the non-Hispanic white population. However, much AD research remains to be done in minority groups, such as the African American, East Asian and Hispanic/Latino populations. Studies have shown that these groups are underrepresented in clinical trials and do not have the same risk of developing AD when carrying certain genetic risk factors (i.e. APOE4), compared to their Caucasian counterparts.
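As a purely illustrative sketch (toy numbers, not data from the studies cited above), the two quantities can be computed mechanically: an incidence rate is new cases divided by person-time at risk, and the rough doubling of risk every five years after 65 can be modelled as a geometric progression.

```python
# Illustrative only: toy values, not figures from the cited studies.

def incidence_per_1000_person_years(new_cases: int, person_years: float) -> float:
    """Incidence rate = new cases / person-time at risk, scaled to 1,000 person-years."""
    return 1000 * new_cases / person_years

# A hypothetical disease-free cohort of 2,000 people followed for 5 years each,
# in which 60 develop AD: 60 / 10,000 person-years = 6 per 1,000 person-years,
# within the 5-8 range quoted above.
print(incidence_per_1000_person_years(new_cases=60, person_years=2000 * 5))  # 6.0

# "Approximately doubles every 5 years after 65", modelled crudely as a geometric
# progression starting from 3 per 1,000 person-years; this gives 3, 6, 12, 24, 48, 96,
# roughly in line with the 3-to-69 range above (the real increase is only
# approximately a doubling).
rate = 3.0
for age in range(65, 95, 5):
    print(f"ages {age}-{age + 4}: ~{rate:g} per 1,000 person-years")
    rate *= 2
```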
The prevalence of AD in populations is dependent upon factors including incidence and survival. Since the incidence of AD increases with age, prevalence depends on the mean age of the population for which prevalence is given. In the United States in 2020, AD dementia prevalence was estimated to be 5.3% for those in the 60–74 age group, with the rate increasing to 13.8% in the 74–84 group and to 34.6% in those greater than 85. Prevalence rates in some less developed regions around the globe are lower. Both the prevalence and incidence rates of AD are steadily increasing, and the prevalence rate is estimated to triple by 2050 reaching 152 million, compared to the 50 million people with AD globally in 2020.
== History ==
The ancient Greek and Roman philosophers and physicians associated old age with increasing dementia. It was not until 1901 that German psychiatrist Alois Alzheimer identified the first case of what became known as Alzheimer's disease, named after him, in a fifty-year-old woman he called Auguste D. He followed her case until she died in 1906, when he first reported publicly on it. During the next five years, eleven similar cases were reported in the medical literature, some of them already using the term Alzheimer's disease. The disease was first described as a distinctive disease by Emil Kraepelin, who omitted some of the clinical features (delusions and hallucinations) and pathological features (arteriosclerotic changes) contained in the original report of Auguste D. He included Alzheimer's disease, also named presenile dementia by Kraepelin, as a subtype of senile dementia in the eighth edition of his Textbook of Psychiatry, published on 15 July 1910.
For most of the 20th century, the diagnosis of Alzheimer's disease was reserved for individuals between the ages of 45 and 65 who developed symptoms of dementia. The terminology changed after 1977 when a conference on Alzheimer's disease concluded that the clinical and pathological manifestations of presenile and senile dementia were almost identical, although the authors also added that this did not rule out the possibility that they had different causes. This eventually led to the diagnosis of Alzheimer's disease independent of age. The term senile dementia of the Alzheimer type (SDAT) was used for a time to describe the condition in those over 65, with classical Alzheimer's disease being used to describe those who were younger. Eventually, the term Alzheimer's disease was formally adopted in medical nomenclature to describe individuals of all ages with a characteristic common symptom pattern, disease course, and neuropathology.
The National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) and the Alzheimer's Disease and Related Disorders Association (ADRDA, now known as the Alzheimer's Association) established the most commonly used NINCDS-ADRDA Alzheimer's Criteria for diagnosis in 1984, extensively updated in 2007. These criteria require that the presence of cognitive impairment, and a suspected dementia syndrome, be confirmed by neuropsychological testing for a clinical diagnosis of possible or probable Alzheimer's disease. A histopathologic confirmation including a microscopic examination of brain tissue is required for a definitive diagnosis. Good statistical reliability and validity have been shown between the diagnostic criteria and definitive histopathological confirmation.
== Society and culture ==
=== Social costs ===
Dementia, and specifically Alzheimer's disease, may be among the most costly diseases for societies worldwide. As populations age, these costs will probably increase and become an important social problem and economic burden. Costs associated with AD include direct and indirect medical costs, which vary between countries depending on social care for a person with AD. Direct costs include doctor visits, hospital care, medical treatments, nursing home care, specialised equipment, and household expenses. Indirect costs include the cost of informal care and the loss in productivity of informal caregivers.
In the United States as of 2019, informal (family) care is estimated to constitute nearly three-fourths of caregiving for people with AD at a cost of US$234 billion per year and approximately 18.5 billion hours of care. The cost to society worldwide to care for individuals with AD is projected to increase nearly ten-fold, and reach about US$9.1 trillion by 2050.
Costs for those with more severe dementia or behavioral disturbances are higher and are related to the additional caregiving time to provide physical care.
=== Caregiving burden ===
Individuals with Alzheimer's will require assistance during their lifetime, and care will most likely come in the form of a full-time caregiver, a role often taken on by the spouse or a close relative. Caregiving tends to include physical and emotional burdens as well as, at times, time and financial strain on the person administering the aid. Alzheimer's disease is known for placing a great burden on caregivers, which includes social, psychological, physical, and economic aspects. Home care is usually preferred by both people with Alzheimer's disease and their families. This option also delays or eliminates the need for more professional and costly levels of care. Nevertheless, two-thirds of nursing home residents have dementias.
Dementia caregivers are subject to high rates of physical and mental disorders. Factors associated with greater psychosocial problems of the primary caregivers include having an affected person at home, the caregiver being a spouse, demanding behaviors of the cared-for person (such as depression, behavioral disturbances, hallucinations, sleep problems or walking disruptions), and social isolation. In the United States, the yearly cost of caring for a person with dementia ranges from $28,078 to $56,022 for formal medical care, $36,667 to $92,689 for informal care provided by a relative or friend (assuming market value replacement costs for the care provided by the informal caregiver), and $15,792 to $71,813 in lost wages.
Cognitive behavioral therapy and the teaching of coping strategies, either individually or in groups, have demonstrated efficacy in improving caregivers' psychological health.
=== Media ===
Alzheimer's disease has been portrayed in films such as: Iris (2001), based on John Bayley's memoir of his wife Iris Murdoch; The Notebook (2004), based on Nicholas Sparks's 1996 novel of the same name; A Moment to Remember (2004); Thanmathra (2005); Memories of Tomorrow (Ashita no Kioku) (2006), based on Hiroshi Ogiwara's novel of the same name; Away from Her (2006), based on Alice Munro's short story The Bear Came over the Mountain; Still Alice (2014), about a Columbia University professor who has early onset Alzheimer's disease, based on Lisa Genova's 2007 novel of the same name and featuring Julianne Moore in the title role. Documentaries on Alzheimer's disease include Malcolm and Barbara: A Love Story (1999) and Malcolm and Barbara: Love's Farewell (2007), both featuring Malcolm Pointon.
Alzheimer's disease has also been portrayed in music by English musician the Caretaker in releases such as Persistent Repetition of Phrases (2008), An Empty Bliss Beyond This World (2011), and Everywhere at the End of Time (2016–2019). Paintings depicting the disorder include the late works by American artist William Utermohlen, who drew self-portraits from 1995 to 2000 as an experiment of showing his disease through art.
== Research directions ==
Immunotherapy medications that target amyloid beta with antibodies, such as donanemab and lecanemab, may have the ability to alter the disease course.
Lecanemab was approved via the FDA accelerated approval process, and was converted to traditional approval in July 2023, after further testing, along with the addition of a boxed warning about amyloid-related imaging abnormalities. As of early August 2024, lecanemab was approved for sale in Japan, South Korea, China, Hong Kong and Israel although it was recommended against approval by an advisory body of the European Union on July 26, citing its side effects.
Donanemab was approved by the FDA in July 2024. Anti-amyloid drugs also cause brain shrinkage. The cholinesterase inhibitor benzgalantamine was approved by the FDA in July 2024.
Specific medications that may reduce the risk or progression of Alzheimer's disease have been studied. The medications investigated in research trials generally target Aβ plaques, inflammation, APOE, neurotransmitter receptors, neurogenesis, growth factors or hormones.
Machine learning algorithms with electronic health records are being studied as a way to predict Alzheimer's disease earlier.
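As a purely illustrative sketch of the kind of approach being studied, and not a description of any specific published model, a classifier might be trained on structured EHR-derived features; the features, data and model choice below are hypothetical.

```python
# Illustrative sketch only: synthetic data and invented features, not a published model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical EHR-derived features: age, count of memory-related complaint codes,
# count of cardiovascular diagnoses, and a depression-history flag.
X = np.column_stack([
    rng.normal(72, 8, n),
    rng.poisson(1.0, n),
    rng.poisson(0.8, n),
    rng.integers(0, 2, n),
])
# Synthetic outcome, loosely correlated with age purely for demonstration.
y = (X[:, 0] + rng.normal(0, 10, n) > 78).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```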
== References ==
== External links ==
"Alzheimer's Disease Research Timeline – Alzforum". www.alzforum.org.
"Alzheimer's Disease Brain Cell Atlas- brain-map.org". portal.brain-map.org. | Wikipedia/Alzheimer's_disease |
Co-therapy is a kind of psychotherapy conducted with more than one therapist present. It is different from conjoint therapy, which is psychotherapy conducted with more than one person as the client. For example, family therapy and couples therapy are types of conjoint therapy. A therapy can be conjoint therapy and not co-therapy, or co-therapy and not conjoint therapy, or both co-therapy and conjoint therapy. Co-therapy is especially applied during couples therapy. Carl Whitaker and Virginia Satir are credited as the founders of co-therapy. Co-therapy dates back to the early twentieth century in Vienna, where psychoanalytic practices were first taking place. It was originally named "multiple therapy" by Alfred Adler, and later introduced separately as "co-therapy" in the 1940s. Co-therapy began with two therapists of differing abilities, one essentially learning from the other and gaining the opportunity to hear feedback on their work.
== Advantages of co-therapy ==
=== An active support system ===
Co-therapy has recently been discussed more thoroughly, and its advantageous aspects have been analysed. Researchers, namely Bowers & Gauron, suggest that co-therapy provides each therapist with a "support system" in their partner. This allows for appropriate communication and the ability to lean on each other "in the face of the power of the group". Bowers & Gauron are supported by other researchers in this aspect of co-therapy. Russell and Russell also suggest that both therapists are sources of support for each other. This can be the case with clients (individuals, couples, or families) who express delusional systems or aspects of psychopathy that may be difficult to deal with alone. A co-therapeutic design is more beneficial in these situations as the therapists act objectively in each other's aid. This points to a further advantage: the emotional drain of such work is shared rather than carried by one therapist alone. The support between therapists also carries through practically: if one is absent, there will always be someone available to collect information and continue with the sessions.
=== An educational model ===
Additionally, researchers suggest that a co-therapy relationship is beneficial as an educational model. Cividini and Klain proposed three models of co-therapy education. These three designs all incorporated differing levels of skill in each therapist: situation one, having one experienced and one inexperienced therapist; situation two, including two inexperienced therapists; and situation three, involving two experienced therapists. All models are said to be advantageous, as they all provide educational benefits, such as an inexperienced therapist gaining confidence while working alongside one with more experience; in the model with two inexperienced therapists, the likelihood of one therapist overruling a session is greatly reduced. Moreover, a co-therapist relationship can "compensate for individual weaknesses", meaning that more rounded conclusions can be drawn from therapy sessions, as research has shown that co-therapeutic relationships provide greater insight into a client's analysis. Russell & Russell add to this notion by mentioning that conjoint therapeutic relationships can be valuable within the realm of education in order to "role-model didactically", suggesting that it is extremely beneficial for a more inexperienced therapist to learn in a conjoint environment.
=== A respected role model ===
Although therapists can and have been seen to role-model for each other, they are simultaneously acting as examples of good practice for the clients themselves. Researchers Peck & Schroeder suggested that co-therapists could act as alternative authority figures where necessary, for example standing in for absent parents. This would benefit clients greatly as they can relate to situations created by the therapists and discover healthy ways to react and process. Bowers & Gauron furthered this by mentioning that a healthy relationship between co-therapists can act as an effective role model for patients. This is extremely beneficial in situations such as couples therapy, for example. Therapists must also be actively aware that they are constantly being watched and act accordingly. Natalie Shainess described this situation as 'do as I tell you, but not as I do', suggesting that clients need to also be aware of the imperfect representation that could occur, signalling that they should copy what is said, rather than what they see.
== Disadvantages of co-therapy ==
Although advantages exist (as above), the disadvantages of co-therapy and the issues that may arise for both clients and therapists have also been explored. Dangers can impact clients, therapists and spouses of therapists alike. Fabrizio Napolitani argued that co-therapy not only lacks clear advantages but is also not free of hazards. The demand for therapists is ever-increasing, and some suggest that using two therapists when not strictly necessary is a waste of resources and adds to the expense of therapy provision. Therapists are less likely to be paired thoughtfully, and are usually placed together at random. This could increase the likelihood of tension during sessions, and could create unnecessary competition. Alternatively, if the therapists form an amicable relationship, there is also the risk of their attention being diverted from the client, negatively affecting the session and compromising the treatment of the patient.
=== Spouse involvement ===
A widely debated topic within co-therapy is the involvement of spouses. This could refer to either the spouse of a therapist or a co-therapy relationship between spouses themselves. Many issues can arise as a result, for example jealousy of a third-party relationship. Dickes and Dunn suggested that voyeurism was an intricate part of co-therapy, where therapists develop sexual attraction to their partner as a result of competition in diagnoses. Bowers & Gauron go into more detail on the issue, describing how a therapist and their spouse may disagree about the amount of time spent with the co-therapist, and how the spouse may become insecure because they feel they are not of primary importance. Co-therapists are required to spend a lot of time together outside of therapy sessions to discuss diagnoses and analyses of patients, which, although seen in one sense as an advantage, can cause issues in the personal relationships of the therapists themselves.
== References ==
== External links ==
"Summary of Overview of the Cotherapy Model (Overview of Conjoint Therapy with Couples, An Approach with High Conflict Couples". campus.educadium.com.
"Co-Therapy for Couples". Therapy Duo. | Wikipedia/Co-therapy |
Cognitive therapy (CT) is a psychotherapeutic approach developed by American psychiatrist Aaron T. Beck, which aims to change unhelpful or inaccurate thought patterns. CT is one therapeutic approach within the larger group of cognitive behavioral therapies (CBT) and was first expounded by Beck in the 1960s. Cognitive therapy is based on the cognitive model, which states that thoughts, feelings and behavior are all connected, and that individuals can move toward overcoming difficulties and meeting their goals by identifying and changing unhelpful or inaccurate thinking, problematic behavior, and distressing emotional responses. This involves the individual working with the therapist to develop skills for testing and changing beliefs, identifying distorted thinking, relating to others in different ways, and changing behaviors. A cognitive case conceptualization is developed by the cognitive therapist as a guide to understand the individual's internal reality, select appropriate interventions and identify areas of distress.
== History ==
Precursors of certain aspects of cognitive therapy have been identified in various ancient philosophical traditions, particularly Stoicism. For example, Beck's original treatment manual for depression states, "The philosophical origins of cognitive therapy can be traced back to the Stoic philosophers".
Albert Ellis worked on cognitive treatment methods from the 1950s (Ellis, 1956). He called his approach Rational Therapy (RT) at first, then Rational Emotive Therapy (RET) and later Rational Emotive Behavior Therapy (REBT).
Becoming disillusioned with long-term psychodynamic approaches based on gaining insight into unconscious emotions, in the late 1950s Aaron T. Beck came to the conclusion that the way in which his patients perceived and attributed meaning in their daily lives—a process known as cognition—was a key to therapy.
Beck introduced his cognitive therapy approach in Depression: Causes and Treatment (1967), later expanding its application to include anxiety disorders in Cognitive Therapy and the Emotional Disorders (1976), and eventually addressing a wider range of psychological conditions. He also introduced a focus on the underlying "schema"—the underlying ways in which people process information about the self, the world or the future.
This new cognitive approach came into conflict with the behaviorism common at the time, which claimed that talk of mental causes was not scientific or meaningful, and that assessing stimuli and behavioral responses was the best way to practice psychology. However, the 1970s saw a general "cognitive revolution" in psychology. Behavioral modification techniques and cognitive therapy techniques became joined, giving rise to a common concept of cognitive behavioral therapy. Although cognitive therapy has often included some behavioral components, advocates of Beck's particular approach sought to maintain and establish its integrity as a distinct, standardized form of cognitive behavioral therapy in which the cognitive shift is the key mechanism of change.
Aaron and his daughter Judith S. Beck founded the Beck Institute for Cognitive Therapy and Research in 1994. This was later renamed the "Beck Institute for Cognitive Behavior Therapy."
In 1995, Judith released Cognitive Therapy: Basics and Beyond, a treatment manual endorsed by her father Aaron.
As cognitive therapy continued to grow in popularity, the non-profit "Academy of Cognitive Therapy" was created in 1998 to accredit cognitive therapists, create a forum for members to share research and interventions, and to educate the public about cognitive therapy and related mental health issues. The academy later changed its name to the "Academy of Cognitive & Behavioral Therapies".
The 2011 second edition of "Basics and Beyond" (also endorsed by Aaron T. Beck) was titled Cognitive Behavioral Therapy: Basics and Beyond, Second Edition, and adopted the name "CBT" for Aaron's therapy from its beginning. This further blurred the boundaries between the concepts of "CT" and "CBT".
== Basis ==
Therapy may consist of testing the assumptions which one makes and looking for new information that could help shift the assumptions in a way that leads to different emotional or behavioral reactions. Change may begin by targeting thoughts (to change emotion and behavior), behavior (to change feelings and thoughts), or the individual's goals (by identifying thoughts, feelings or behavior that conflict with the goals). Beck initially focused on depression and developed a list of "errors" (cognitive distortion) in thinking that he proposed could maintain depression, including arbitrary inference, selective abstraction, overgeneralization, and magnification (of negatives) and minimization (of positives).
As an example of how CT might work: Having made a mistake at work, a man may believe: "I'm useless and can't do anything right at work." He may then focus on the mistake (which he takes as evidence that his belief is true), and his thoughts about being "useless" are likely to lead to negative emotion (frustration, sadness, hopelessness). Given these thoughts and feelings, he may then begin to avoid challenges at work, which is behavior that could provide even more evidence for him that his belief is true. As a result, any adaptive response and further constructive consequences become unlikely, and he may focus even more on any mistakes he may make, which serve to reinforce the original belief of being "useless." In therapy, this example could be identified as a self-fulfilling prophecy or "problem cycle," and the efforts of the therapist and patient would be directed at working together to explore and change this cycle.
People who are working with a cognitive therapist often practice more flexible ways to think and respond, learning to ask themselves whether their thoughts are completely true, and whether those thoughts are helping them to meet their goals. Thoughts that do not meet this description may then be shifted to something more accurate or helpful, leading to more positive emotion, more desirable behavior, and movement toward the person's goals. Cognitive therapy takes a skill-building approach, where the therapist helps the person to learn and practice these skills independently, eventually "becoming their own therapist."
"Consistent with the cognitive theory of psychopathology, CT is designed to be structured, directive, active, and time-limited, with the express purpose of identifying, reality-testing, and correcting distorted cognition and underlying dysfunctional beliefs".
=== Cognitive model ===
Aaron Beck developed the cognitive model to explain how negative thinking patterns contribute to depression. The model identifies three levels of belief: automatic thoughts, intermediate beliefs, and core beliefs.
Automatic thought
Intermediate belief
Core belief or basic belief
In 2014, an update of the cognitive model was proposed, called the Generic Cognitive Model (GCM). The GCM is an update of Beck's model that proposes that mental disorders can be differentiated by the nature of their dysfunctional beliefs. The GCM includes a conceptual framework and a clinical approach for understanding common cognitive processes of mental disorders while specifying the unique features of the specific disorders.
=== Cognitive restructuring (methods) ===
Cognitive restructuring consists of four main steps:
Identifying problematic thoughts, known as automatic thoughts (ATs), which often reflect negative views of the self, the world, or the future.
Identifying cognitive distortions in those thoughts.
Using the Socratic method to dispute the validity of these thoughts.
Developing rational, more balanced responses to replace these negative thoughts.
There are six types of automatic thoughts:
Self-evaluated thoughts
Thoughts about the evaluations of others
Evaluative thoughts about the other person with whom they are interacting
Thoughts about coping strategies and behavioral plans
Thoughts of avoidance
Any other thoughts that were not categorized
Other major techniques include:
Activity monitoring and activity scheduling
Behavioral experiments
Catching, checking, and changing thoughts
Collaborative empiricism: therapist and patient become investigators by examining the evidence to support or reject the patient's cognitions. Empirical evidence is used to determine whether particular cognitions serve any useful purpose.
Downward arrow technique
Exposure and response prevention
Cost benefit analysis
acting "as if"'
Guided discovery: therapist elucidates behavioral problems and faulty thinking by designing new experiences that lead to acquisition of new skills and perspectives. Through both cognitive and behavioral methods, the patient discovers more adaptive ways of thinking and coping with environmental stressors by correcting cognitive processing.
Mastery and pleasure technique
Problem solving
Socratic questioning: involves the creation of a series of questions to a) clarify and define problems, b) assist in the identification of thoughts, images and assumptions, c) examine the meanings of events for the patient, and d) assess the consequences of maintaining maladaptive thoughts and behaviors.
==== Socratic questioning ====
Socratic questions are the archetypal cognitive restructuring techniques. These kinds of questions are designed to challenge assumptions by:
Conceiving reasonable alternatives:
"What might be another explanation or viewpoint of the situation?
Why else did it happen?"
Evaluating those consequences:
"What's the effect of thinking or believing this?
What could be the effect of thinking differently and no longer holding onto this belief?"
Distancing:
"Imagine a specific friend/family member in the same situation or if they viewed the situation this way, what would I tell them?"
Examples of Socratic questions are:
"Describe the way you formed your viewpoint originally."
"What initially convinced you that your current view is the best one available?"
"Think of three pieces of evidence that contradict this view, or that support the opposite view. Think about the opposite of this viewpoint and reflect on it for a moment. What's the strongest argument in favor of this opposite view?"
"Write down any specific benefits you get from holding this belief, such as social or psychological benefits. For example, getting to be part of a community of like-minded people, feeling good about yourself or the world, feeling that your viewpoint is superior to others", etc. Are there any reasons that you might hold this view other than because it's true?"
"For instance, does holding this viewpoint provide some peace of mind that holding a different viewpoint would not?"
"In order to refine your viewpoint so that it's as accurate as possible, it's important to challenge it directly on occasion and consider whether there are reasons that it might not be true. What do you think the best or strongest argument against this perspective is?"
"What would you have to experience or find out in order for you to change your mind about this viewpoint?"
"Given your thoughts so far, do you think that there may be a truer, more accurate, or more nuanced version of your original view that you could state right now?"
==== False assumptions ====
False assumptions are based on "cognitive distortions", such as:
Always Being Right: "We are continually on trial to prove that our opinions and actions are correct. Being wrong is unthinkable and we will go to any length to demonstrate our rightness. For example, 'I don't care how badly arguing with me makes you feel, I'm going to win this argument no matter what because I'm right.' Being right often is more important than the feelings of others around a person who engages in this cognitive distortion, even loved ones."
Heaven's Reward Fallacy: "We expect our sacrifice and self-denial to pay off, as if someone is keeping score. We feel bitter when the reward doesn't come."
==== Awfulizing and Must-ing ====
Rational emotive behavior therapy (REBT) includes awfulizing, when a person causes themselves disturbance by labeling an upcoming situation as "awful", rather than envisaging how the situation may actually unfold, and Must-ing, when a person places a false demand on themselves that something "must" happen (e.g. "I must get an A in this exam.")
== Application ==
=== Depression ===
According to Beck's theory of the etiology of depression, depressed people acquire a negative schema of the world in childhood and adolescence; children and adolescents who experience depression acquire this negative schema earlier. Depressed people acquire such schemas through the loss of a parent, rejection by peers, bullying, criticism from teachers or parents, the depressive attitude of a parent or other negative events. When a person with such schemas encounters a situation that resembles the original conditions of the learned schema, the negative schemas are activated.
Beck's negative triad holds that depressed people have negative thoughts about themselves, their experiences in the world, and the future. For instance, a depressed person might think, "I didn't get the job because I'm terrible at interviews. Interviewers never like me, and no one will ever want to hire me." In the same situation, a person who is not depressed might think, "The interviewer wasn't paying much attention to me. Maybe she already had someone else in mind for the job. Next time I'll have better luck, and I'll get a job soon." Beck also identified a number of other cognitive distortions, which can contribute to depression, including the following: arbitrary inference, selective abstraction, overgeneralization, magnification and minimization.
In 2008, Beck proposed an integrative developmental model of depression that aims to incorporate research in genetics and the neuroscience of depression. This model was updated in 2016 to incorporate multiple levels of analyses, new research, and key concepts (e.g., resilience) within the framework of an evolutionary perspective.
=== Other applications ===
Cognitive therapy has been applied to a very wide range of behavioral health issues including:
Academic achievement
Addiction
Anxiety disorders
Bipolar disorder
Low self-esteem
Phobia
Schizophrenia
Substance abuse
Suicidal ideation
Weight loss
== Criticisms ==
A common criticism of cognitive therapy studies is that they often lack double-blinding, meaning that both the participants and the therapists are aware of the type of treatment being administered. Trials may be single-blinded, in that the rater may not know which treatment a patient received, but neither the patients nor the therapists are blinded to the type of therapy given (two of the three parties involved in the trial, i.e., everyone involved in the treatment, are unblinded). The patient is an active participant in correcting negative distorted thoughts and is thus quite aware of the treatment group they are in.
== See also ==
Cognitive analytic therapy
Cognitive bias mitigation
Cognitive-shifting
David D. Burns
Debiasing
History of psychotherapy
Journal of Cognitive Psychotherapy
Recognition-primed decision
Schema therapy
== References ==
== External links ==
An Introduction to Cognitive Therapy & Cognitive Behavioural Approaches
What is Cognitive Therapy
Academy of Cognitive Therapy (archived 2019-03-13 at the Wayback Machine)
International Association of Cognitive Psychotherapy | Wikipedia/Cognitive_therapy |
A residential treatment center (RTC), sometimes called a rehab, is a live-in health care facility providing therapy for substance use disorders, mental illness, or other behavioral problems. Residential treatment may be considered the "last-ditch" approach to treating abnormal psychology or psychopathology.
A residential treatment program encompasses any residential program which treats a behavioural issue, including milder psychopathology such as eating disorders (e.g. weight loss camp) or indiscipline (e.g. fitness boot camps as lifestyle interventions). Sometimes residential facilities provide enhanced access to treatment resources without those seeking treatment being considered residents of a treatment program, such as the sanatoriums of Eastern Europe. Controversial uses of residential programs for behavioural and cultural modification include conversion therapy and mandatory American and Canadian residential schools for indigenous populations. A common feature of residential programs is controlled social access to people outside the program, and limited access for outside parties to witness daily conditions within the program. Within psychiatry, it is understood that it can be almost impossible to change entrenched behaviour without impacting habitual relationships, at least in the short term, but the relatively closed nature of many residential programs also makes it possible to conceal abusive practice.
Upon discharge, the patient may be enrolled in an intensive outpatient program for follow-up outside the residential setting.
== Historical background in the United States ==
In the 1600s, Great Britain established the Poor Law, which allowed poor children to be trained in apprenticeships by removing them from their families and forcing them to live in group homes. In the 1800s, the United States copied this system, but mentally ill children were often placed in jail with adults because society did not know what to do with them; no RTCs existed to provide the 24-hour care they needed when they could not live at home. In the 1900s, Anna Freud and her peers in the Vienna Psychoanalytic Society worked on how to care for children and helped create residential treatment centers for children and adolescents with emotional and behavioral disorders.
The year 1944 marked the beginning of Bruno Bettelheim's work at the Orthogenic School in Chicago, and Fritz Redl and David Wineman's work at the Pioneer House in Detroit. Bettelheim helped increase awareness of staff attitudes on children in treatment. He reinforced the idea that a psychiatric hospital was a community, where staff and patients influenced each other and patients were shaped by each other's behaviors. Bettelheim also believed that families should not have frequent contact with their child while he or she was in treatment. This differs from community-based therapy and family therapy of recent years, in which the goal of treatment is for a child to remain in the home. Also, emphasis is placed on the family's role in improving long-term outcomes after treatment in an RTC. The Pioneer House created a special-education program to help improve impulse control and sociability in children. After WWII, Bettelheim and the joint efforts of Redl and Wineman were instrumental in establishing residential facilities as a therapeutic-treatment alternative for children and adolescents who cannot live at home.
In the 1960s, the second generation of psychoanalytical RTC was created. These programs continued the work of the Vienna Psychoanalytic Society in order to include families and communities in the child's treatment. One example of this is the Walker Home and School which was established by Dr. Albert Treischman in 1961 for adolescent boys with severe emotional or behavioral disorders. He involved families in order to help them develop relationships with their children within homes, public schools and communities. Family and community involvement made this program different from previous programs.
Beginning in the 1980s, cognitive behavioral therapy was more commonly used in child psychiatry, as a source of intervention for troubled youth, and was applied in RTCs to produce better long-term results. Attachment theory also developed in response to the rise of children admitted to RTCs who were abused or neglected. These children needed specialized care by caretakers who were knowledgeable about trauma.
In the 1990s, the number of children entering RTCs increased dramatically, leading to a policy shift from institution-based services to a family-centered community system of care. This also reflected the lack of appropriate treatment resources. However, residential treatment centers have continued to grow and today house over 50,000 children. The number of residential treatment centers treating individuals of all ages in the United States is currently estimated at 28,900 facilities.
== Children and teens ==
RTCs for adolescents, sometimes referred to as teen rehab centers if they also deal with addiction, provide treatment for issues and disorders such as oppositional defiant disorder, conduct disorder, depression, bipolar disorder, attention deficit hyperactivity disorder (ADHD), educational issues, some personality disorders, and phase-of-life issues, as well as substance use disorders. Most use a behavior modification paradigm. Others are relationally oriented. Some utilize a community or positive peer-culture model. Generalist programs are usually large (80-plus clients and as many as 250) and level-focused in their treatment approach. That is, in order to manage clients' behavior, they frequently put systems of rewards and punishments in place. Specialist programs are usually smaller (fewer than 100 clients and as few as 10 or 12). Specialist programs typically are not as focused on behavior modification as generalist programs are.
Different RTCs work with different types of problems, and the structure and methods of RTCs vary. Some RTCs are lock-down facilities; that is, the residents are locked inside the premises. In a locked residential treatment facility, clients' movements are restricted. By comparison, an unlocked residential treatment facility allows them to move about the facility with relative freedom, but they are only allowed to leave the facility under specific conditions. Residential treatment centers should not be confused with residential education programs, which offer an alternative environment for at-risk children to live and learn together outside their homes.
Residential treatment centers for children and adolescents treat multiple conditions from drug and alcohol addictions to emotional and physical disorders as well as mental illnesses. Various studies of youth in residential treatment centers have found that many have a history of family-related issues, often including physical or sexual abuse. Some facilities address specialized disorders, such as reactive attachment disorder (RAD).
Residential treatment centers generally are clinically focused and primarily provide behavior management and treatment for adolescents with serious issues. In contrast, therapeutic boarding schools provide therapy and academics in a residential boarding school setting, employing a staff of social workers, psychologists, and psychiatrists to work with the students on a daily basis. This form of treatment has a goal of academic achievement as well as physical and mental stability in children, adolescents, and young adults. Recent trends have ensured that residential treatment facilities have more input from behavioral psychologists to improve outcomes and lessen unethical practices.
== Behavioral interventions ==
Behavioral interventions have been very helpful in reducing problem behaviors in residential treatment centers. The type of clients receiving services in a facility (children with emotional or behavioral disorders versus intellectual disability versus psychiatric disorders) is a factor in the effectiveness of behavior modification. Behavioral intervention has been found to be successful even when medication interventions fail. However, there is evidence that certain populations may benefit more from interventions that fall outside of the behavior-modification paradigm; for instance, positive outcomes have been reported for neurosequential interventions targeting issues of early childhood trauma and attachment (Perry, 2006). The majority of children who receive services in RTCs present with emotional and behavioral disorders (EBDs), such as attention deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), and conduct disorder (CD), and behavior-modification techniques can be an effective way of decreasing the maladaptive behavior of these clients. Interventions such as response cost, token economies, social skills training groups, and the use of positive social reinforcement can be used to increase prosocial behavior in children (Ormrod, 2009).
Behavioral interventions are successful in treating children with behavioral disorders in part because they incorporate two principles that make up the core of how children learn: conceptual understanding and building on their pre-existing knowledge. Research by Resnick (1989) shows that even infants are able to develop basic quantitative frameworks. New information is incorporated into the framework and serves as the basis for the problem-solving skills a child develops as she or he is exposed to different types of stimuli (e.g., new situations, people, or environments). The experiences and environment that a child is exposed to can have either a positive or negative outcome, which, in turn, impacts how he or she remembers, reasons, and adapts when encountering aversive stimuli. Furthermore, when children have acquired extensive knowledge, it affects what they notice and how they organize, represent, and interpret information in their current environment (Bransford, Brown, & Cocking, 2000). Many of the children housed in RTCs have been exposed to negative environmental factors that have contributed to the behavior problems that they are exhibiting.
Many interventions build on children's prior knowledge of how reward works. Reinforcing children for pro-social behaviors (e.g., using token economies, in which children earn tokens for appropriate behaviors; response cost, in which previously earned tokens are lost following inappropriate behavior; and social-skills training groups, in which participants observe and practice modeling appropriate social behaviors) helps them develop a deeper understanding of the positive results of pro-social behavior.
Wolfe, Dattilo, & Gast (2003) found that using a token economy in concert with cooperative games increased pro-social behaviors (e.g. statements of encouragement, praise, or appreciation, shaking hands, and giving high fives) while decreasing anti-social ones (swearing, threatening peers with physical harm, name-calling, and physical aggression). The use of a response-cost system has been efficacious in reducing problem behaviors. A single-subject withdrawal design employing non-contingent reinforcement with response cost was used to reduce maladaptive verbal and physical behaviors exhibited by a post-institutional student with ADHD (Nolan & Filter, 2012). Wilhite & Bullock (2012) implemented a social-skills training group to increase the social competence of students with EBDs. Results showed significant differences between pre- and post-intervention disciplinary referrals, as well as several other elements of behavioral-ratings scales. Evidence also exists for the usefulness of social reinforcement as a part of behavioral interventions for children with ADHD. A study by Kohls, Herpertz-Dahlmann, & Kerstin (2009) found that both social and monetary rewards increased inhibition control in both the control and experimental groups. However, results showed that children with ADHD benefitted more from social reinforcement than typical children, indicating that social reinforcement can significantly improve cognitive control in ADHD children. The techniques listed are only a few of the many types of behavioral interventions that can be used to treat children with EBDs. Additional information regarding types of behavioral interventions can be found in the 2003 book Behavioral, Social, and Emotional Assessment of Children and Adolescents by Kenneth Merrell.
== Types of family therapy used in residential treatment centers ==
Narrative Therapy: Narrative therapy has grown in popularity in the field of family therapy. It developed out of the postmodern viewpoint, which is expressed in its principles: (a) there is not one universal reality, only socially constructed realities; (b) reality is created by language; (c) reality is maintained through narrative; and (d) not all narratives are equivalent (Freedman and Combs, 1996).
Narrative family therapy, building on these principles, views human problems as emerging from, and being sustained by, dominant stories that control the life of an individual. Problems arise when a person's stories do not match their lived experience. According to the narrative viewpoint, therapy offers a new and distinct perspective on a problem-saturated narrative: it is a process of rewriting personal narratives. Rewriting the client's narrative involves (a) expressing the problem(s) they are experiencing; (b) breaking down, through questioning, the narratives that trigger problems; (c) recognizing special outcomes or occasions when the person has not been constrained by their situation; (d) connecting those outcomes to the future and providing an alternative, desired narrative; (e) inviting supporters from the community to witness the new narrative; and (f) documenting the new narrative. Since postmodern viewpoints prioritize concepts rather than techniques, formal methods in narrative therapy are limited; however, some researchers have described techniques that are useful in helping an individual rewrite a specific experience, such as retelling stories and writing letters.
Children admitted to a residential treatment center often have behavior problems so severe that residential treatment is seen as their last hope. Parents may think the child is the problem that needs to be fixed, after which everything will be okay; the child, on the other hand, generally sees themselves as a victim. Narrative therapy enables these perspectives to be broken down and the child's troubling behaviors to be externalized, which can encourage both the child and the family members to reach a new perspective in which no one feels persecuted or blamed.
Multisystemic Therapy (MST):
The model has shown success in sustaining long-term improvements in children's and adolescents' antisocial behaviors. Families in MST have demonstrated improved family stability, post-treatment adaptability, growing support, and reduced conflict and hostility.
The method's ultimate objectives include a) eliminating behavior problems, b) enhancing family functioning, c) strengthening the adolescents' ability to perform better in school and other community settings, and d) decreasing out-of-home placement.
== Controversy ==
Disability rights organizations, such as the Bazelon Center for Mental Health Law, oppose placement in RTC programs, calling into question the appropriateness and efficacy of such placements, noting the failure of such programs to address problems in the child's home and community environment, and calling attention to the limited mental-health services offered and substandard educational programs. Concerns specifically related to a specific type of residential treatment center called therapeutic boarding schools include:
inappropriate discipline techniques,
medical neglect,
restricted communication such as lack of access to child protection and advocacy hotlines, and
lack of monitoring and regulation.
Bazelon promotes community-based services on the basis that they are more effective and less costly than residential placement.
A 2007 Report to Congress by the Government Accountability Office (GAO) found cases involving serious abuse and neglect at some of these programs.
From late 2007 through 2008, a broad coalition of grass-roots efforts, as well as prominent medical and psychological organizations such as the Alliance for the Safe, Therapeutic and Appropriate use of Residential Treatment (ASTART) and the Community Alliance for the Ethical Treatment of Youth (CAFETY), provided testimony and support that led to the creation of the Stop Child Abuse in Residential Programs for Teens Act of 2008 by the United States Congress Committee on Education and Labor.
Jon Martin-Crawford and Kathryn Whitehead of CAFETY testified at a hearing of the United States Congressional Committee on Education and Labor on April 24, 2008, and described abusive practices they had experienced at the Family Foundation School and Mission Mountain School, both therapeutic boarding schools. In recent years, many states have enacted regulation and oversight of most programs.
Due to the absence of regulation of these programs by the federal government and because at that time many were not subject to state licensing or monitoring, the Federal Trade Commission has issued a guide for parents considering such placement.
Residential treatment programs are often caught in the cross-fire during custody battles, as parents who are denied custody try to discredit the opposing spouse and the treatment program.
== Research on effectiveness ==
Studies of different treatment approaches have found that residential treatment is effective for individuals with a long history of addictive behavior or criminal activity. RTCs offer a variety of structured programs designed to address the specific needs of the inmates. Despite the controversy surrounding the efficacy of RTCs, recent research has revealed that community-based residential treatment programs have positive long-term effects for children and youth with behavioral problems.
Participants in a pilot program employing family-driven care and positive peer modeling displayed no incidence of elopement, self-injurious behaviors, or physical aggression, and just one case of property destruction when compared to a control group (Holstead, 2010). The success of treatment for children in RTCs depends heavily on their background, i.e., their state, situation, circumstances, and behavioral status before commencement of treatment. Children who displayed lower rates of internalizing and externalizing behavior problems at intake and had a lower level of exposure to negative environmental factors (e.g., domestic violence, parental substance use, high crime rates) showed better results than children whose symptoms were more severe (den Dunnen, 2012).
Additional research demonstrates that planned treatment, or knowing the expected duration of treatment, is strongly correlated with positive treatment outcomes. Long-term results for children using planned treatment showed that they are 21% less likely to engage in criminal behavior and 40% less likely to need hospitalization for mental-health problems (Lindqvist, 2010). Further evidence exists supporting the long-term effectiveness of RTCs for children exhibiting severe mental health issues. Preyde (2011) found that clients showed a statistically significant reduction in symptom severity 12–18 months after leaving an RTC, results which were maintained 36–40 months after their discharge from the facility.
However, although there is a great deal of research supporting the validity of RTCs as a way of treating children and youth with behavioral disorders, little is known about the outcomes-monitoring practices of such facilities. Those that track clients after they leave the RTC only do so for an average of six months. In order to continue to provide effective long-term treatment to at-risk populations, further efforts are needed to encourage the monitoring of outcomes after discharge from residential treatment (J.D. Brown, 2011).
One problem that hinders the effectiveness of RTCs is elopement or "running". A study by Kashubeck found that runaways from RTCs were "more likely to have a history of elopement, a suspected history of sexual abuse, an affective-disorder diagnosis, and parents whose rights had been terminated." By employing these characteristics of patients in the design of treatment, RTCs may be more successful in reducing elopement and otherwise improving the probability of clients' success.
== See also ==
== References ==
== Further reading ==
Kenneth R. Rosen (2021). Troubled: The Failed Promise of America's Behavioral Treatment Programs. Little A. ISBN 978-1542007887.
== External links ==
Learning materials related to Residential treatment center at Wikiversity
Residential Treatment Programs — Concerns Regarding Abuse and Death in Certain Programs for Troubled Youth - United States Government Accountability Office
Residential Facilities — State and Federal Oversight Gaps May Increase Risk to Youth Well-Being - United States Government Accountability Office
Residential Programs — Selected Cases of Death, Abuse, and Deceptive Marketing - United States Government Accountability Office | Wikipedia/Residential_treatment_center |
A Doctor of Physical Therapy or Doctor of Physiotherapy (DPT) degree is a qualifying degree in physical therapy. In the United States, it is considered a graduate-level first professional degree or doctorate degree for professional practice. In the United Kingdom, the training includes advanced professional training and doctoral-level research.
A Transitional Doctor of Physical Therapy degree is available in the US for those who already hold a professional Bachelor or Master of Physical Therapy (BPT or MPT) degree; as of 2015, all accredited and developing physical therapist programs in the US are DPT programs. Master's degrees in physical therapy are no longer offered in the US, and physical therapists beginning their education now study towards the Doctor of Physical Therapy degree.
== History ==
In 1992, the University of Southern California initiated the first post-professional "transitional" (DPT) program in the United States. This "transitional" DPT takes into account a physical therapist's current level of knowledge and skill and purports to offer programs that upgrade clinical skills to meet the needs of the current health care environment. Creighton University followed by initiating the first entry-level DPT program in 1993.
The Doctor of Physiotherapy has since been adopted in other countries such as the United Kingdom, Australia, and Taiwan. In the United Kingdom and Australia, the PhD or Professional Doctorate in Physiotherapy is offered by a number of universities. These programs are usually professional entry master's level programs, with the opportunity to undertake research to lead to a doctorate degree. Alternatively, these programs are master's pre-qualifying physiotherapy courses with an enhanced research element in the final phase of the course that leads to undertaking a doctorate. The first full pre-qualifying Doctorate in Physiotherapy program in the United Kingdom was accredited in 2017 at Glasgow Caledonian University in Glasgow.
== United States ==
The DPT degree prepares students to be eligible for the physical therapy license examination in all 50 US states. Along with the license examination, some states do require physical therapists to take a law exam and a criminal background check. As of March 2017, there are 222 accredited Doctor of Physical Therapy programs in the United States. After completing a DPT program, the doctor of physical therapy may continue training in a residency and then fellowship. As of December 2013, there are 178 credentialed physical therapy residencies and 34 fellowships in the US, and 63 additional developing residencies and fellowships. Credentialed residencies are between 9 and 36 months while credentialed fellowships are between 6 and 36 months.
In 2000, the American Physical Therapy Association (APTA) passed its Vision 2020 statement, which states (in part): "By 2020, physical therapy will be provided by physical therapists who are doctors of physical therapy, recognized by consumers and other health care professionals as the practitioners of choice to whom consumers have direct access for the diagnosis of, interventions for, and prevention of impairments, functional limitations, and disabilities related to movement, function, and health." As this statement highlights, the DPT program is an integral part of the APTA's continued advocacy for legislation granting consumers (i.e. patients and clients) direct access to physical therapists, rather than requiring physician referral. Direct access is said to decrease wait times for access to care and even help reduce both costs to consumers and overall healthcare costs. As of January 1, 2015, all 50 states and the District of Columbia allow some form of direct access to physical therapists.
=== Time frame ===
The typical time frame for completion of a Doctor of Physical Therapy is 3 to 4 years after earning a bachelor's degree. Depending on residency and fellowship training, if undertaken, the doctor may have completed several years of additional training. A DPT can also be obtained through accelerated programs offered by some universities, to which students can apply as freshmen. These programs allow students to receive a bachelor's degree and DPT in 6 to 7 years and have various admission points over the course of the curriculum. Some programs allow students to apply directly out of high school; these students automatically matriculate into the professional phase of the program after completing the required undergraduate courses.
=== Admission ===
Admission to a Doctor of Physical Therapy program in the United States is highly competitive. According to the Aggregate Program Data Report from CAPTE for 2016 to 2017, the average grade point average for enrolling students was 3.6 out of 4, with a range of 3.20–3.88 across all programs. On average, there were 1,000 applicants per program, with an average of 46 students enrolled. A bachelor's degree generally is required before beginning a Doctor of Physical Therapy program, but there is no requirement on the degree earned as long as all prerequisite course requirements are met. A DPT can also be obtained through accelerated programs at some universities, to which students can apply as freshmen; through these programs students can receive a bachelor's degree and a DPT in 6 to 7 years. During the admission process, applicants must fulfill the course prerequisites of the program. Students also must obtain physical therapy experience in clinics, with hours that may have to be verified by a physical therapist, depending on the school they are applying to. The Graduate Record Examination (GRE) is required for most programs and must be taken and submitted to the school.
=== Transition Doctor of Physical Therapy degree ===
The t-DPT degree is conferred upon completion of a structured post-professional educational experience that results in the augmentation of knowledge, skills, and behaviors to a level consistent with the current professional (entry-level) DPT standards. The t-DPT degree enables the US-licensed physical therapist to attain degree parity with therapists who hold the professional DPT by filling in any gaps between their professional baccalaureate or master's degree PT education and the current professional DPT degree education.
The post-professional DPT (Transitional) degree is designed to provide the doctoral credential to those who currently hold a master's or bachelor's degree in the field. Post-professional DPT (Transitional) degree programs are typically delivered primarily online and are often one year in length.
=== Controversies ===
The use of the title doctor by physical therapists and other non-physician health care professionals has been debated. In a letter to The New York Times, the president of the American Physical Therapy Association responded: "To provide accurate information to consumers, the American Physical Therapy Association has taken a proactive approach and provides clear guidelines for physical therapists regarding the use of the title 'Doctor.' These guidelines state that physical therapists, in all clinical settings, who hold a Doctor of Physical Therapy degree (DPT) shall indicate they are physical therapists when using the title 'Doctor' or 'Dr,' and shall use the titles in accord with jurisdictional law." In 2007, the DPT degree was described as an example of "credential creep" or degree inflation in The Chronicle of Higher Education. Citing concerns that the DPT, and similar professional doctorates in areas such as occupational therapy, do not meet the standards of traditional doctorate degrees, the journal states: "The six-and-a-half-year doctor of physical therapy, or DPT, is rapidly replacing a six-year master's degree ... The American Physical Therapy Association ... has not set separate requirements for doctoral programs. To be accredited they need only to meet the same requirements as master's programs."
Critics in the 1990s questioned whether the rigor of the physical therapy curriculum and the scope of practice warranted the conferral of a professional degree similar to that characteristic of medicine, dentistry, or nursing. Proponents countered that the existing curricula are "victims of 'curricular inflation'." As Rothstein and Moffat noted, the previous master's and even baccalaureate curricula rivaled those of most other professional doctorate programs, and these curricula often required more than the typical 72 credits mandated for a doctoral degree. The 2000 Fact Sheet from APTA reported that the mean number of credits required for the professional phase of the typical baccalaureate program was 83.0 credits and that the typical master's degree program required 95.5 credits. As of 2009 the typical number of prerequisite credits was 114.2 and the total number of professional credits was 116.5, for a total of 230.7 credit hours. Additional credit hours may also be earned in residency and fellowship. Threlkeld et al. suggested that the scope of existing physical therapy curricula already (in 1999) matched that of a professional doctorate, further submitting that students of a well-defined DPT program "will have earned the right to be recognized by the doctoral title".
=== Professional degree (entry-level) ===
The professional (entry-level) DPT degree is currently the degree conferred by all physical therapist professional programs upon successful completion of a three- to four-year post-baccalaureate degree program in the United States, preparing the graduate to enter the practice of physical therapy. Admission requirements for the program include completion of an undergraduate degree that includes specific prerequisite coursework, volunteer experience (or other exposure to the profession), and completion of a standardized graduate examination (e.g., GRE).
Typical prerequisite courses may include two semesters of anatomy and physiology with labs, two semesters of physics with labs, two semesters of chemistry with labs, two courses in psychology, statistics, and two semesters of biology, and may include other courses required by specific schools.
The physical therapist curriculum consists of foundational sciences (i.e., gross anatomy, cellular histology, embryology, neurology, neuroscience, kinesiology, physiology, exercise physiology, pathology, pharmacology, radiology/imaging, medical screening), behavioral sciences (communication, social and psychologic factors, ethics and values, law, business and management sciences, clinical reasoning and evidence-based practice), and clinical sciences (cardiovascular/pulmonary, endocrine and metabolic, gastrointestinal and genitourinary, integumentary, musculoskeletal, neuromuscular). Coursework also includes material specific to the practice of physical therapy (patient/client management model, prevention, wellness, and health promotion, practice management, management of care delivery, social responsibility, advocacy, and core values). Additionally, students have to engage in full-time clinical practice under the supervision of licensed physical therapists with an expectation of providing safe, competent, and effective physical therapy.
=== Continuing education ===
After graduation, licensed physical therapists can pursue a clinical residency or fellowship to expand their knowledge and experience. Clinical residencies are designed to further a physical therapy resident's knowledge in a specific area of clinical practice, while a clinical fellowship trains physical therapists in a more specific area of focus.
Physical therapists can also pursue specialty certifications, through which they become board-certified clinical specialists. Becoming a certified specialist allows the therapist to earn credentials that represent further dedication to patient care and provides opportunities for professional growth and for positions in leadership and service. Specialization is achieved by building a broad foundation of professional education and then building a skill set related to the particular specialty area. Certifications are offered in the following areas: cardiovascular and pulmonary, clinical electrophysiology, geriatrics, neurology, orthopedics, pediatrics, sports physical therapy, wound care, and women's health.
Physical therapists can provide various treatment modalities, including ultrasound, electrical stimulation, traction, joint mobilization, massage, heat, ice, kinesiology taping, and many more. The form of treatment depends on the therapist's preferences and the clinic's equipment availability. Depending on the patient, specific types of treatment might be better suited, and therapists may find some modalities less effective than others. Some modalities might also not be possible because of a clinic's restrictions on space and equipment.
=== Advanced clinical science degree ===
The "advanced clinical science" doctorate (e.g., DPTSc or DScPT, DHSc, ScDPT) is one of several degrees conferred by academic institutions upon successful completion of a post-professional physical therapist education program. This program is intended to provide an experienced clinician with advanced knowledge, clinical skills, and professional behavior, usually in a specific specialty practice area. These programs typically culminate work that contributes new knowledge to clinical practice in the profession. Completion of these advanced clinical science doctoral programs may include credentialed clinical residencies and lead to ABPTS clinical specialization or other advanced certifications.
== United Kingdom ==
Some universities, such as Glasgow Caledonian University and Robert Gordon University, offer a 3.5 year DPT program, including both professional training and research, which leads to qualification as a physiotherapist eligible to register with the Health and Care Professions Council. In general, the qualifying degree for physiotherapy in the UK is a Bachelor or Master of Science in Physiotherapy.
In 2013 the United Kingdom gave physiotherapists the power to prescribe medication.
== See also ==
Physical therapy education
== References ==
== External links ==
Information for Prospective Students from the American Physical Therapy Association | Wikipedia/Doctor_of_Physical_Therapy |
The Nordoff–Robbins approach to music therapy is a method developed to help children with psychological, physical, or developmental disabilities. It originated from the collaboration of Paul Nordoff and Clive Robbins, which began in 1958, with early influences from Rudolph Steiner and anthroposophical philosophy and teachings. Nordoff–Robbins music therapy asserts that music therapy can improve communication, support change, and help people live more resourcefully and creatively. Nordoff–Robbins music therapy training programs exist in various countries such as the United Kingdom, United States, Australia, Germany, New Zealand, South Africa, and Asia.
== United Kingdom ==
Nordoff and Robbins is a registered charity in the United Kingdom and Scotland. The charity runs the Nordoff Robbins Music Therapy Centre in London and music therapy outreach projects, as well as postgraduate training courses in music therapy and a research program, with public courses and conferences.
Nordoff Robbins runs the annual Silver Clef Awards that raise money for the charity. In 2024, they launched the Northern Music Awards as a further fundraising initiative, holding the inaugural ceremony in Manchester.
== United States ==
Founded by Clive Robbins and his wife Carol Robbins, the Nordoff–Robbins Center for Music Therapy at New York University, Steinhardt School of Culture, Education, and Human Development, opened in 1989. The center is affiliated with New York University's Graduate Music Therapy Program. The mission of the center has six main components:
Providing music therapy services to people with disabilities, including autism spectrum disorders, behavioral disorders, developmental delays, sensory impairments, and psychiatric disorders.
Offering advanced music therapy training.
Conducting and publishing research. The center maintains an extensive archive that includes recordings and documentation of the work of Nordoff and Robbins (1959–1976). The archive is updated by contemporary clinical work. Ongoing research in clinical practice focuses on the role of improvisational music therapy in addressing the needs of clients with different areas of disability including autism spectrum disorder, stroke, and hearing impairment.
Presenting lectures, workshops, and symposia to professional audiences.
Publishing musical and instructional materials for use in the clinical process and in improvisation.
Disseminating information and resources; serving as a resource for music therapists, students, the media, and the public. It provides consultant services, organizes seminars and workshops, and hosts over 150 visitors annually.
The Nordoff–Robbins training at Molloy College, established in 2010, is an approved Nordoff–Robbins program in the US. It is located at the Rebecca Center for Music Therapy at Molloy College, an outpatient center serving children and adults in the Long Island and metropolitan New York area.
Both training programs include assessment, archival coursework, clinical work, group music therapy, and clinical improvisation instruction.
== References ==
== External links ==
Nordoff Robbins website
United States: Nordoff-Robbins Center for Music Therapy
History of Nordoff-Robbins Music Therapy, The Steinhardt School, New York University
Osbournes win Silver Clef honour, BBC News, June 16, 2006 | Wikipedia/Nordoff-Robbins_music_therapy |
In music, a method is a kind of textbook for a specified musical instrument or a selected problem of playing a certain instrument.
A method usually contains fingering charts or tablatures, scales, and numerous exercises, sometimes also simple etudes, in different keys, arranged in ascending order of difficulty (i.e., in methodical progression) or with a focus on isolated aspects such as fluency, rhythm, dynamics, and articulation. Sometimes there are even recital pieces, also with accompaniment. Such methods differ from etude books in that they are meant as a linear course for a student to follow, with consistent guidance, whereas volumes of etudes are not as comprehensive.
As typical instrumental methods are meant to function as textbooks supporting an instrumental teacher (rather than to facilitate self-teaching), usually no basic or special playing techniques are covered in any depth. Detailed instructions in this respect are only found in special, autodidactical methods.
Some methods are especially tailored for students on certain skill levels or stages of psychosocial development. In contrast, a 'complete' method (sometimes in multiple volumes) is meant to accompany the student until he or she becomes an advanced player.
Methods of certain authors or editors have achieved the status of standard works (reflecting regional and cultural differences) and are published or reissued by different publishing companies and in diverse (new) arrangements. The Suzuki Method is probably the most well known example of this.
The following is a list of various methods of historical interest.
== Woodwinds ==
=== Flute ===
Altes, Henry. Method for the Boehm flute.
Berbiguier, Benoit Tranquille. Flute method.
De Lorenzo, Leonardo. L'Indispensabile – A complete modern school for the flute. (1912)
Dressler, Rafael. New and complete instructions for the flute, Op. 68. (1827)
Drouet, Louis. Method of flute playing. (1830)
Fürstenau, Anton Bernhard. Flöten-schule, Op. 42. (1826)
Fürstenau, Anton Bernhard. Die kunst des flötenspiels, Op.138. (1844)
Hugot and Wunderlich. Méthode de flûte. (1804)
Lindsay, Thomas. The elements of flute-playing. (1830)
Monzani, Tebaldo. Instructions for the german flute. (1801)
Peraut, Mathieu. Méthode pour la flûte. (1800)
Quantz, Johann Joachim. Versuch einer anweisung die flöte traversiere zu spielen. (1752)
Soussmann, Heinrich. Complete method for flute.
Tromlitz, Johann George. Unterricht der flöte zu spielen. (1791)
Wagner, Ernest. Foundation to flute playing.
=== Oboe ===
Andraud, Albert. Practical and progressive oboe method.
Barret, Apollon Marie-Rose. Complete method for oboe. (1850)
Langley, Robin. Tutor for oboe.
Niemann, Theodor. Method for the oboe.
=== Clarinet ===
Bärmann, Karl. Volständige clarinett-schule. (1864)
Klosé, Hyacinthe Eléonore. Conservatory Method For The Clarinet. (1879)
Lazarus, Henry. New and modern method for the Albert- and Boehm-system clarinet. (1881)
Stark, Robert. Grosse theoretische-praktische clarinett-schule, Op. 49. (1892)
Magnani, Aurelio. Methode complete de clarinette. (1900)
Gabler, Maximillian. Tutor. (1906)
Mimart, Prospere. Methode nouvelle de clarinette. (1911)
Bading, Heinrich and Lange, Hermann. Tutor. (1911)
Gräfe, Richard. Method. (1912)
Langenus, Gustav. Complete method for the Boehm clarinet. (1913)
Reinecke, Carl. Foundation to clarinet playing : an elementary method. (1919)
=== Bassoon ===
Langey, Otto. Tutor for bassoon.
Mackintosh, George. New and improved bassoon tutor. (1840)
Weissenborn, Julius. Practical method for the bassoon. (1887)
Weissenborn, Julius and Spaniol, Douglas. The New Weissenborn Method for Bassoon. (2010, revised 2013)
=== Saxophone ===
Klosé, Hyacinthe and Gay, Eugene. Methode complete pour saxophone.
Iasilli, Gerardo. Modern conservatory method for saxophone.
Mayeur, Louis-Adolphe. Method for saxophone.
Vereecken, Ben. Foundation to saxophone playing.
Vereecken, Ben. The saxophone virtuoso.
Ville, Paul de. Universal method for the saxophone.
== Brass ==
=== Trumpet/Cornet ===
Araldi, Giuseppe. Methodo per tromba a chiavi et a macchina. (1835)
Arban, Jean-Baptiste. Method for the cornet. (1864)
Arbuckle, Matthew. Complete cornet method. (1866)
Brett, Harry. The cornet. (1888)
Brulon, Adolphe. Méthode de cornet à deux et à trois pistons. (1854)
Canti, Antonio. Metodo per cornetto flugelhorn in si bemòlle e per flugel basso. (1892)
Clarke, Herbert Lincoln. Clarke's Elementary Studies for Cornet. (1909)
Clarke, Herbert Lincoln. Clarke's Technical Studies for Cornet. (1912)
Clarke, Herbert Lincoln. Clarke's Characteristic Studies for Cornet. (1915)
Clodomir, Pierre François. Méthode élémetaire de cornet à pistons. (1870)
Dauverné, François Georges Auguste. Methode de trompette à pistons. (1835)
Dauverné, François Georges Auguste. Méthode théorique et pratique du cornet à pistons ou cylindres. (1846)
Dauverné, François Georges Auguste. Méthode pour la trompette. (1857)
Foraboschi Giuseppe. A new and complete instruction book for the trumpet. (1828)
Forestier, Joseph. Method pour le cornet à pistons. (1834)
Guichard, Michel. Grande méthode. (1864)
Hoch, Theodor. Tutor for the cornet. (1880)
Hofmann, Richard. Schule für althorn oder es-cornet. (1885)
Howe, Elias. New cornet instructor. (1860)
Kastner, Jean Georges. Metodo elementare per cornetto (o flicorno) a due e tre pistoni. (1892)
Kosleck, Julius. Grosse Schule für cornet a piston.
Kresser, Joseph Gebhardt. Methode pour la trompette.
Kueffner, Joseph. Principes elementaires.
Langey, Otto. Celebrated tutors: b-flat cornet. (1899)
Mariscotti, Luigi. Nouvelle méthode complete de cornet à pistons. (1837)
Roy, Eugene. Methode de trompette sans clefs et avec clefs divisee en deux parties. (1824)
Ryan, Sidney. True cornet instructor. (1874)
Saint-Jacome, Louis. Grand method for the cornet. (1894)
Schlossberg, Max. Daily Drills and Technical Studies for Trumpet. (1937)
Sedgwick, Alfred. Complete method for the cornet. (1873)
Sinsolliez, Ainé. Méthode complète de cornet à trois pistons. (1848)
Sussmann, Heinrich. Neue theoretisch practische trompeten-schule. (1859)
Weber, Carl. The premier method for cornet. (1896)
Winner, Septimus. The ideal method for the cornet. (1882)
Wurm, Wilhelm. Method for cornet à pistons. (1893)
=== Horn ===
Dauprat, Louis-François. Methode de cor alto et cor basse. (1824)
Domnich, Heinrich. Methode de premier et de deuxieme cor. (1807)
Duvernoy, Frederic. Methode pour le cor. (1803)
Franz, Oscar. Grosse theoretisch-practische waldhorn-schule. (1880)
Frohlich, Joseph. Hornschule. (1810)
Gallay, Jacques-François. Méthode pour le cor. (1842)
Göroldt, Johann Heinrich. Hornschule. (1822)
Holyoke, Samuel. The instrumental assistant. (1807)
Kling, Henri. Hornschule. (1865)
Meifred, Joseph. Methode pour le cor chromatique ou a pistons. (1840)
Pepper, James Welsh. Self instructor for french horn. (1882)
Simpson, John. The complete tutor for the horn. (1746)
Tully, Charles. Tutor for the French horn. (1840)
=== Trombone ===
Arban, Jean-Baptiste. Grande méthode complete pour cornet a pistons et de saxhorn. (1864)
Braun, André. Gamme et méthode pour les trombonnes alto, ténor, et basse. (1795)
Brulon, Adolphe. Methode de trombone a coulisse. (1851)
Cornette, Victor. Method for the trombone. (1838)
Dieppo, Antoine. Méthode complete pour le trombone. (1836)
Fillmore, Henry. Jazz trombonist. (1919)
Fröhlich, Joseph. Vollständige theoretisch-practische musikschule. (1811)
Gebauer, Francois Rene. 50 lecons pour la trombonne basse, alto, et tenor. (1800)
Hampe, Carl. Method for the slide trombone with an appendix for the trombone with valves. (1916)
Kastner, Jean Georges. Method for the trombone. (1845)
Lafosse, André. Methode complete pour le trombone. (1921)
Langey, Otto. Tutor for the tenor slide trombone. (1885)
Müller, Robert. Schule für zugposaune. (1902)
Nemetz, Andreas. Neueste posaun-schule. (1828)
Sordillo, Fortunato. Art of jazzing for the trombone. (1920)
Ville, Paul de. Universal method for slide and valve trombone in bass and treble clef. (1900)
Vobaron, Edmond. Methode de trombone. (1853)
Wirth, Adam. Posaune schule für alto, tenor, und bass posaune. (1870)
== Voice ==
Abt, Franz. Praktische gesangschule, Op.474.
Concone, Giuseppe. The school of sight-singing.
García, Manuel. Ecole de Garcia: traité complet de l'art du chant.
Lamperti, Francesco. The art of singing. (1890)
Marchesi, Mathilde. The art of singing, Op. 21. (1890)
Marchesi, Mathilde. Vocal method, Op. 31. (1900)
Panofka, Heinrich. Abécédaire vocal.
Panseron, Auguste. Méthode complète de vocalisation. (1898)
Shakespeare, William. The art of singing. (1910)
Vaccai, Nicola. Metodo pratico de canto. (1832)
== Keyboards ==
=== Piano ===
Clementi, Muzio. Introduction to the art of playing on the pianoforte. (1801)
Cramer, Johann Baptist. Instructions for the pianoforte. (1810)
Czerny, Carl. Complete theoretical and practical pianoforte school, Op. 500. (1838)
Dussek, Jan Ladislav. Instructions on the art of playing the piano forte or harpsichord. (1796)
Herz, Henri. Méthode. (1838)
Hummel, Johann Nepomuk. Ausführliche theoretisch-practische Anweisung zum Piano-Forte-Spiel. (1828)
Philipp, Isidore. Complete school of Technic for the Piano.
Pollini, Francesco. Methodo del clavicembelo. (1811)
Rimbault, Edward Francis. A child's first instruction book for the piano forte. (1839)
Safonov, Vasily Ilyich. New Formula for the Piano Teacher and Piano Student. (1916)
Türk, Daniel Gottlob. Klavier-Schule. (1789)
=== Harpsichord/Clavichord ===
Bach, Carl Philipp Emanuel. Versuch über die wahre art das clavier zu spielen. (1753)
Baumgarten, Charles Frederick. The ladies companion, or a complete tutor for the forte, piano forte, or harpsichord. (1784)
Couperin, François. L'art de toucher le clavecin. (1716)
Gasparini, Francesco. L'Armonico Pratico al Cimbalo.
=== Organ ===
Neukomm, Sigismund. An elementary method for the organ in general. (1859)
Rinck, Johann Christian Heinrich. Praktische orgelschule, Op. 55. (1819)
== Strings ==
=== Guitar ===
Aguado, Dionisio. Nuevo metodo de guitarra, Op. 6.
Carcassi, Matteo. Methode complete pour la guitare, Op. 59.
Carulli, Ferdinando. Méthode complette. (1815)
Chabran, Carlo Francesco. Complete instructions for the Spanish guitar. (1795)
Giuliani, Mauro. Studio per la chitarra, Op. 1. (1812)
Holland, Justin. Comprehensive Method for the Guitar and Modern Method for the Guitar. (1874)
Jauralde, Nicario. A complete preceptor for the Spanish guitar. (1827)
Legnani, Luigi. Metodo par imparare e conoscere la musica e suonare la chitarra, Op. 250. (1847)
Shaeffer, Arling. Elite guitar instructor. (1895)
Sor, Fernando. Complete method for the guitar. (1851)
=== Harp ===
Barthélemon, François-Hippolyte. Tutor for the harp. (1795)
Bochsa, Nicholas Charles. The first 6 weeks, or daily precepts & examples for the harp. (1840)
Dizi, François Joseph. "École de harpe", a complete treatise on the harp. (1827)
Weippert, John Erhardt. The pedal harp rotula. (1800)
=== Mandolin or mandolin-banjo or banjolin ===
Edgar Bara. Méthode de Mandoline et Banjoline (1903). Still in print.
Bartolomeo Bortolazzi. Method in German, Anweisung die Mandoline von selbst zu erlernen nebst einigen Uebungsstucken von Bortolazzi (1805)
Giuseppe Bellenghi. Method for the mandolin in three parts (Pub in French, English, Italian, German), La ginnastica del mandolino, Ascending and descending major and minor scales in all positions for the mandolin
Giuseppe Branzoli. A Theoretical and practical method for the mandolin (1875, 2nd edition 1890)
Ferdinando de Cristofaro. Méthode de mandolin (Paris, 1884) English, French, Italian, Portuguese, and Spanish versions
Giovanni Cifolelli. Method for the mandolin (date unknown, estimated 1760s)
Carlo Curti. Complete Method for the Mandolin (1896) English.
Pietro Denis. Méthode pour apprendre à jouer de la mandoline sans Maître (1768, French)
George H. Hucke. Forty Progressive Studies for the Mandoline (London, 1893, English)
Carmine de Laurentiis. Method for the Mandolin (Milan, 1869 or 1874)
Salvador Leonardi. Méthode pour Banjoline ou Mandoline-Banjo (1921, book has sections in English, French and Spanish)
Carlo Munier. Scuola del mandolino (1895)
Jean Pietrapertosa. Méthode de mandolin (1892) In French and English sections in same book
Janvier Pietrapertosa Fils. Méthode de mandolin ou banjoline (1903)
Giuseppe Pettine. Pettine's Modern Mandolin School (c. 1900)
Silvio Ranieri. L'Art de la Mandoline (Die Kunst des Mandolinspiels), in 4 volumes, published in 5 languages (French, German, English, Italian, and Dutch)
Samuel Siegel. Special Mandolin Studies (1901).
=== Violin ===
Bang, Maia. Violin method.
Bériot, Charles de. Method for violin.
Casorti, August. The Techniques of Bowing (Technik des Bogens und des rechten Handgelenks für Violine; Bogen-Technik für Violine), Op. 50 (published 1880)
Geminiani, Francesco. The art of playing on the violin. (1751)
Hohmann, Christian Heinrich. Practical violin method.
Laoureux, Nicolaus. A practical method for violin.
Mozart, Leopold. Gründliche Violinschule. (1756)
Schradieck, Henry. The school of violin technics. (1900)
Sevcik, Otakar. Violin school for beginners, Op.6. (1903)
Sevcik, Otakar. School of violin technique, Op. 1. (1905)
Sevcik, Otakar. School of bowing technique, Op. 2. (1905)
Spohr, Louis. Violinschule. (1832)
Wohlfahrt, Franz. Easiest elementary method for beginners, Op. 38.
=== Viola ===
Cavallini, Eugenio. Viola method. (1845)
Giorgetti, Ferdinando. Viola method, Op.34. (1840)
=== Violoncello ===
Davydov, Karl. School of violoncello playing. (1888)
Dotzauer, Justus Johann Friedrich. The violoncello method for elementary teaching, Op. 126. (1836)
Dotzauer, Justus Johann Friedrich. The method of playing harmonics, Op. 147. (1837)
Dotzauer, Justus Johann Friedrich. The practical method of violoncello playing, Op. 155.
Dotzauer, Justus Johann Friedrich. The violoncello method, Op. 165. (1832)
Duport, Jean-Louis. Instruction on the fingering and bowing of the violoncello. (1806)
Gunn, John. The theory and practice of fingering the violoncello. (1789)
Kummer, Friedrich August. Cello method, Op. 60. (1839)
Lee, Sebastian. Method for the violoncello. (1845)
Popper, David. High school of cello playing, Op. 73. (1901-1905)
Romberg, Bernhard. A Complete Theoretical and Practical School for the Violoncello. (1839)
Schroeder, Alwin (Schröder, Alvin). 170 Foundation Studies for Violoncello.
Schroeder, Carl II (Schröder, Karl II). Practical Method for the Violoncello.
Sevcik, Otakar. School of bowing technique. (1905)
Werner, Josef. Practical method for cello, Op. 12. (1882)
=== Contrabass ===
Fröhlich, Friedrich Theodor. Contrabass-schule. (1830)
Simandl, Franz. New method for double bass. (1881)
== See also ==
Music education
== References ==
"International Music Score Library Project." (Website)
"Slide trombone methods." (List)
"The trill in the classical period (1750-1820)." (Article)
Anzenberger, Friedrich. Trumpet method books of the 19th century. (List at archive.org)
Ginsburg, Lev. History of the violoncello. Neptune City, New Jersey: Paganiniana Publications, 1983. (Relevant excerpt)
Hoeprich, Eric. The clarinet. New Haven: Yale University Press, 2008.
Kimball, Will. Trombone. 2008. (19th century)
Nelson, Kayla. "hornhistory.com". 2007. (Website)
Rosen, Lawrence, Ed. CD Sheet Music. Verona, NJ: Subito Music, Corp., 2000–2009.
Schwartz, Richard. The cornet compendium: The history and development of the nineteenth-century cornet. 2000. (Chapter 5: Tutors)
Spaniol, Doug. "A history of the Weissenborn Practical method for bassoon" in Ewell, Terry. Celebrating Double Reeds: A Festschrift for William Waterhouse and Philip Bate. 2009.
Westbury Park Strings. Romberg and the history of the violoncello. (Article)
Westphal, Frederick. Guide to teaching woodwinds. Sacramento, CA: McGraw-Hill, 1990. | Wikipedia/Method_(music) |
Reality therapy (RT) is an approach to psychotherapy and counseling developed by William Glasser in the 1960s. It differs from conventional psychiatry, psychoanalysis and medical model schools of psychotherapy in that it focuses on what Glasser calls "psychiatry's three Rs" – realism, responsibility, and right-and-wrong – rather than mental disorders. Reality therapy maintains that most people suffer from socially universal human conditions rather than individual mental illnesses, and that failure to attain basic needs leads to a person's behavior moving away from the norm. Since fulfilling essential needs is part of a person's present life, reality therapy does not concern itself with a person's past. Neither does this type of therapy deal with unconscious mental processes.
The reality therapy approach to counseling and problem-solving focuses on here-and-now actions and the ability to create and choose a better future. Typically, counseled people seek to discover what they really want and how they are currently choosing to behave in order to achieve these goals. According to Glasser, the social component of psychological disorders has been overlooked in the rush to label the population as sick or mentally ill. If a social problem causes distress to a person, it is not always because of a labelled sickness, it may sometimes just be the inability to satisfy one's psychological needs. Reality therapy attempts to separate the person from their behavior.
== History ==
Reality therapy was developed at the Veterans Administration hospital in Los Angeles in the early 1960s by William Glasser and his mentor and teacher, psychiatrist G. L. Harrington. In 1965, Glasser published the book Reality Therapy in the United States. The term refers to a process that is people-friendly and people-centered and has nothing to do with giving people a dose of reality (as a threat or punishment), but rather helps people to recognize how fantasy can distract them from the choices they control in life. Glasser posits that the past is not something to be dwelled upon but rather to be resolved and moved past in order to live a more fulfilling and rewarding life.
By the 1970s, the concepts were extended into what Glasser then called "control theory", a term used in the title of several of his books. By the mid-1990s, the still-evolving concepts were described as "choice theory", a term conceived and proposed by the Irish reality therapy practitioner Christine O'Brien Shanahan at the 1995 IRTI Conference in Waterford, Ireland, and subsequently adopted by Glasser. The practice of reality therapy remains a cornerstone of the larger body of his work. Choice theory asserts that each of us is a self-determining being who can choose (many of) our future behaviors and hold ourselves consciously responsible for how we are acting, thinking, and feeling, and also for our physiological states. Choice theory attempts to explain, or give an account of, how each of us attempts to control our world and those within that world.
== Approach ==
According to Glasser, human beings have four basic psychological needs after survival: the most important is the need to love and be loved by another person or group, which provides a feeling of belonging; the need for power, met through learning, achieving, feeling worthwhile, winning, and being competent; the need for freedom, including independence and autonomy exercised alongside personal responsibility; and the need for fun, which includes seeking pleasure, enjoyment, and relaxation and is also very important for good psychological health.
One of the core principles of reality therapy is that, whether people are aware of it or not, they are always trying to meet these essential human needs. These needs must all be balanced and met for a person to function most effectively. However, people don't necessarily act effectively at achieving these goals. Socializing with others is one effective way of meeting the need to belong. But how a person chooses to interact with and gain attention and love from others is most often at the root of their psychological dismay. Reality therapy stresses one major point—people are in control of what they are currently doing in their lives whether or not it is working in their favor toward meeting their basic psychological needs for power, belonging, fun and freedom. And it is through an individual's choices that he or she makes change happen for the better or worse.
For most people in the United States, the survival need is normally being met. It is then in how people meet the remaining four psychological needs that they typically run into trouble. Reality therapy holds that the key to behavior is to remain aware of what an individual presently wants and make choices that will ensure that goal. Reality therapy maintains that what really drives human beings is their need to belong and to be loved. What also drives humans is their yearnings to be free, and with that freedom comes great responsibility (one cannot exist without the other). Reality therapy is very much a therapy of decision (or choice) and change, based upon the conviction that, even though human persons often have let themselves become products of their past's powerful influences, they need not be held forever hostage by those earlier influences.
== Role played by the therapist ==
Reality therapy seeks to treat patients who face difficulty in working out a relationship with others. So, the formation of a connection of the patient with the therapist is regarded as an important milestone at the start of the therapy. According to the therapists, bonding of the patients with their therapists is the most crucial dynamic that would facilitate the healing process. As soon as this bonding is stable, it can help to form a fulfilling connection outside the therapeutic environment.
Patients receiving this kind of therapeutic treatment learn various ways to strengthen relationships in the most suitable manner possible, even outside the safety of the relationship with their therapist. Moreover, they will be able to use their newfound skills in their personal lives.
Reality therapists say that when patients are able to use the skills, behaviors, actions, and methods learned through the therapy in their personal lives, then they will be able to successfully work out external relationships as well. This will provide them with the satisfaction of leading a more fulfilling life.
== Core ideas ==
=== Action ===
Glasser believes that there are five basic needs of all human beings: survival, love and belonging, power, freedom or independence, and fun/pleasure. Reality therapy maintains that the main reason a person is in pain and acting out is because he/she lacks that one important 'other being' to connect with, or lacks another basic need for survival. Glasser believes the need for love and belonging is the most primary need because we all need other people in our lives in order to satisfy the rest of our needs. Therefore, in a cooperative therapeutic relationship, the therapist must create an environment where it is possible for the client to feel connected to another 'responsible' person (the therapist) whom they actually like and might choose as a friend in their real life.
Reality therapy maintains that the core problem of psychological distress is that one or more of the client's essential needs are not being met, thereby causing the client to act irresponsibly or make poor choices. The therapist then addresses this issue and asserts that the client assume responsibility for their behavior. Reality therapy asserts that we learn responsibility through involvement with other responsible people, and that we can learn and re-learn responsibility at any time in life. The therapist focuses on realistic, attainable goals in order to remedy the real-life issues that are causing discomfort.
William Glasser's choice theory is composed of four aspects: thinking, acting, feeling, and physiology. We can directly choose our thoughts and our actions; we have great difficulty in directly choosing our feelings and our physiology (physical effects such as sweaty palms, headaches, nervous tics, racing pulse, etc.).
A critical first step is the client learning how to use their emotions and feelings to self-evaluate. The client must realize that something must change; realize and accept that change is, in fact, possible and can lead to a plan for making better choices. The therapist helps the client create a workable plan to reach a goal; this is at the heart of successful reality therapy. It must be the client's plan, not the counselor's. The essence of a workable plan is that the client can implement it. It is based on factors under the client's control. Reality therapy strives to empower people by emphasizing the power of doing what is under their control. Doing is at the heart of reality therapy.
=== Behavior ===
Behavior is an immediate and live source of information about whether we are happy with what is going on in our lives. It is very hard to choose to change our emotions directly, thus it is much easier to change our thinking, which will lead to more positive emotions. The client must make a conscious decision to alter his/her thought process. For example, to consciously decide that we will no longer think of ourselves as victims, or to decide that in our thoughts we will concentrate on what we can do rather than what we think everybody else ought to do. Reality therapists approach changing "what we do" as a key to changing how we feel and how we will work to obtain what we want. These ideas are similar to those in other therapy movements such as Re-evaluation Counseling and person-centered psychotherapy, although the former emphasizes emotional release as a method of clearing emotional hurt.
=== Control ===
Control is a key issue in reality therapy. Human beings need control to meet their needs: one person seeks control through position and money, and another wants to control their physical space. Control gets a client into trouble in two primary ways: when he or she tries to control other people, and when he or she uses drugs and alcohol to give him or her a false sense of control. At the very heart of choice theory is the core belief that the only person the client can really control is him or herself. If the client thinks he or she can control others, then he or she is moving in the direction of frustration. If the client thinks others can control him or her and follows up by blaming them for all that goes on in his or her life, then he or she tends to do nothing and heads for frustration. There may be events that happen to the client which are out of his or her control, but ultimately, it is up to the client to choose how to respond to these events. Trying to control other people is a vain, naive hope, from the point of view of reality therapy. It is a never-ending battle which alienates the client from others and causes endless pain and frustration. This is why it is vital for the client to stick to what is in his or her own control and to respect the rights of other people to meet their needs. The client can, of course, get an instant sense of control from alcohol and some other drugs. This method of control, however, is false, and skews the true level of control the client has over him or herself. This creates an inconsistent level of control, which produces even more dissonance and frustration.
=== Focus on the present ===
While traditional psychoanalysis and counseling often focus on past events, reality therapy and choice theory locate solutions in the client's present and future. Practitioners of reality therapy may visit the past but never dwell on it. In reality therapy, the past is seen as the source of the client's wants and his or her ways of behaving. Supposedly, each person from birth has 'taken pictures', stored mental images which comprise one's 'quality world'. A client's 'quality world' is examined to determine what the person wants in life and whether it is realistic. Each person strives to attain things which have given them pleasure in the past. Everyone's 'quality world' is different, so naturally when people enter into a relationship their 'quality world' most likely will not match up with that of their new partner.
== Process ==
=== Involvement ===
Establishing a relationship with the client is believed to be the most important factor in all types of therapy. Without this relationship, the other steps will not be effective. This is also known as developing a good rapport with the client. In extreme cases, the therapist may be the only person in the client's life who is willing to put up with the client's behavior long enough to establish a relationship, which can require a great deal of patience from the therapist. In other cases, the client is a part of many relationships, but just needs a relationship with a more consistently positive emphasis. According to Glasser, the client needs to feel that the therapist is someone that he would want in his "quality world".
=== Evaluating current behavior ===
The therapist must emphasize the here and now with the client, focusing on current behaviors and attitudes. The therapist asks the client to make a value judgment about his or her current behavior (which presumably is not working well, otherwise the consequences of that behavior would not be distressing enough to motivate the client to seek therapy). In many cases the therapist must press the client to examine the effects of his or her behavior, but it is important that the judgment be made by the client and not the therapist. According to Glasser, it is important for the client to feel that he is in control of his own life.
=== Planning possible behavior ===
The next step is to plan some behavior that is likely to work better. The client is likely to need some suggestions and prompting from the therapist, but it helps if the plan itself comes from the client. It is important that the initial steps be small enough that the client is almost certain to succeed, in order to build confidence.
In many cases, the client's problem is the result of a bad relationship with someone, and since the client cannot change anyone else's behavior, the therapist will focus on things the client can do unilaterally. The client may be concerned that the other person will take advantage of this and not reciprocate, but in most cases a change in behavior will ease the tension enough that the other person also backs off. If this does not happen, the therapist will also encourage the client to build more positive relationships with other people. The relationship with the therapist sustains the client long enough for them to establish these other relationships.
=== Commitment to the plan ===
The participant must make a commitment to carry out the plan. This is important because many clients will do things for the therapist that they would not do just for themselves. In some cases it can be helpful to make the commitment in writing.
=== "No Excuses, No Punishment, Never Give Up" ===
If there is no punishment, then there is no reason to accept excuses (note that punishment can be ineffective with clients who expect to fail; see Learned helplessness). The therapist insists that the client either carries out the plan or comes up with a more feasible plan. If the therapist maintains a good relationship with the client, it can be very hard for the client to resist carrying out a plan that he or she has agreed would be feasible. If the plan is too ambitious for the client's current abilities, then the therapist and the client work out a different plan.
== Principles ==
There are several basic principles of reality therapy that must be applied to make this technique most successful.
Focus on the present and avoid discussing the past because all human problems are caused by unsatisfying present relationships.
Avoid discussing symptoms and complaints as much as possible since these are often the ineffective ways that clients choose to deal with (and hold on to) unsatisfying relationships.
Understand the concept of total behavior, which means focusing on what clients can do directly: act and think.
Spend less time on what they cannot do directly such as changing their feelings and physiology. Feelings and physiology can be changed indirectly, but only if there is a change in the acting and thinking.
Avoid criticizing, blaming and/or complaining and help clients do the same. By doing this, they learn to avoid these extremely harmful external control behaviors that destroy relationships.
Help clients change the seven deadly behaviors: criticizing, blaming, complaining, nagging, threatening, punishing, and bribing.
Remain non-judgmental and non-coercive, but encourage people to judge all they are doing by the Choice Theory axiom: Is what I am doing getting me closer to the people I need? If the choice of behaviors is not getting people closer, then the therapist works to help the client find new behaviors that lead to a better connection.
Teach clients that legitimate or not, excuses stand directly in the way of their ability to make needed connections.
Focus on specifics. Find out as soon as possible who clients are disconnected from and work to help them choose reconnecting behaviors. If they are completely disconnected, focus on helping them find a new connection.
Help them make specific, workable plans to reconnect with the people they need, and then follow through on what was planned by helping them evaluate their progress. Based on their experience, therapists may suggest plans, but should not give the message that there is only one plan. A plan is always open to revision or rejection by the client.
Be patient and supportive but keep focusing on the source of the problem: disconnectedness. Clients who have been disconnected for a long time will find it difficult to reconnect. They are often so involved in the harmful behavior that they have lost sight of the fact that they need to reconnect. Help them to understand Choice Theory and explain that whatever their complaint, reconnecting is the best possible solution to their problem.
== Applications ==
In education, reality therapy can be used as a basis for a school's classroom management plan. Reality therapy has been shown to be effective in improving underachieving junior high school students' internal perception of control, that is, whether their locus of control is internal or external. Reality therapy can also be used by school psychologists to support students with emotional and behavioral disturbances. Cynthia Palmer Mason and Jill Duba, professors at Western Kentucky University, have proposed that reality therapy techniques be applied to school counseling programs; they propose that using reality therapy methods will help school counselors develop positive therapeutic relationships and improve students' self-esteem.
Reality therapy has also been found effective in improving the self-concept of elementary school students. Many at-risk and alternative schools across the nation have implemented reality therapy techniques and methods to improve school functioning and the learning and social environment. Other areas of application include athletic coaching, childhood obesity, and post-traumatic stress disorder (PTSD). Ken Klug has looked at different coaching techniques and has found that many successful coaches use some aspects of reality therapy. According to Klug, reality therapy in coaching helps build relationships and a healthy teaching environment, and brings a definitive purpose to goal setting. Reality therapy can also be used to prevent or control childhood obesity: it is suggested that applied reality therapy methods may help children evaluate their eating behaviors, set realistic goals, and integrate effective self-evaluation. Sheryl Prenzlau, a social worker in Israel, has found empirical evidence to suggest that reality therapy can reduce somatization and rumination behaviors associated with PTSD.
== Criticisms ==
The main limitation regarding reality therapy is that it primarily and exclusively deals with the current and present problems of individuals. Not looking to unlock trauma or recurring dreams, reality therapy's only workable arena is the present and going forward in the best possible way, while remembering the importance of taking responsibility for one's own actions and realizing that the only person one can control is oneself. In that realization of personal responsibility, one is given great freedom and happiness. Some people find fault with Glasser's notion that people choose the behaviors that afflict them, such as by choosing chronic depressive thought patterns or choosing profound psychosis. Apart from specific brain pathology, Glasser argues that mental illness is a result of unsatisfying present relationships or general unhappiness.
An opposing view is that many other schools of therapy (especially cognitive approaches) also focus on the present rather than the past, and that the idea that disconnection (or a failure to perceive correctly how motive and inner need or intent are linked) lies, in some form or other, at the root of dysfunction is not unusual either, appearing in several other accepted schools of therapy, from transpersonal psychology to transactional analysis.
== Footnotes ==
== External links ==
William Glasser website | Wikipedia/Reality_therapy |
Group psychotherapy or group therapy is a form of psychotherapy in which one or more therapists treat a small group of clients together as a group. The term can legitimately refer to any form of psychotherapy when delivered in a group format, including art therapy, cognitive behavioral therapy or interpersonal therapy, but it is usually applied to psychodynamic group therapy where the group context and group process is explicitly utilized as a mechanism of change by developing, exploring and examining interpersonal relationships within the group.
The broader concept of group therapy can be taken to include any helping process that takes place in a group, including support groups, skills training groups (such as anger management, mindfulness, relaxation training or social skills training), and psychoeducation groups. The differences between psychodynamic groups, activity groups, support groups, problem-solving and psychoeducational groups have been discussed by psychiatrist Charles Montgomery. Other, more specialized forms of group therapy would include non-verbal expressive therapies such as art therapy, dance therapy, or music therapy.
== History ==
The founders of group psychotherapy in the United States were Joseph H. Pratt, Trigant Burrow and Paul Schilder. All three were active and working on the East Coast in the first half of the 20th century. In 1932 Jacob L. Moreno presented his work on group psychotherapy to the American Psychiatric Association and co-authored a monograph on the subject. After World War II, group psychotherapy was further developed by Moreno, Samuel Slavson, Hyman Spotnitz, Irvin Yalom, and Lou Ormont. Yalom's approach to group therapy has been very influential not only in the USA but across the world.
An early development in group therapy was the T-group or training group (sometimes also referred to as sensitivity-training group, human relations training group or encounter group), a form of group psychotherapy where participants (typically, between eight and 15 people) learn about themselves (and about small group processes in general) through their interaction with each other. They use feedback, problem solving, and role play to gain insights into themselves, others, and groups. It was pioneered in the mid-1940s by Kurt Lewin and Carl Rogers and his colleagues as a method of learning about human behavior in what became the National Training Laboratories (also known as the NTL Institute) that was created by the Office of Naval Research and the National Education Association in Bethel, Maine, in 1947.
Moreno developed a specific and highly structured form of group therapy known as psychodrama (although the entry on psychodrama claims it is not a form of group therapy). Another recent development in the theory and method of group psychotherapy based on an integration of systems thinking is Yvonne Agazarian's systems-centered therapy (SCT), which sees groups functioning within the principles of system dynamics. Her method of "functional subgrouping" introduces a method of organizing group communication so it is less likely to react counterproductively to differences. SCT also emphasizes the need to recognize the phases of group development and the defenses related to each phase in order to best make sense and influence group dynamics.
In the United Kingdom group psychotherapy initially developed independently, with pioneers S. H. Foulkes and Wilfred Bion using group therapy as an approach to treating combat fatigue in the Second World War. Foulkes and Bion were psychoanalysts and incorporated psychoanalysis into group therapy by recognising that transference can arise not only between group members and the therapist but also among group members. Furthermore, the psychoanalytic concept of the unconscious was extended with a recognition of a group unconscious, in which the unconscious processes of group members could be acted out in the form of irrational processes in group sessions. Foulkes developed the model known as group analysis and the Institute of Group Analysis, while Bion was influential in the development of group therapy at the Tavistock Clinic.
Bion's approach is comparable to social therapy, first developed in the United States in the late 1970s by Lois Holzman and Fred Newman, which is a group therapy in which practitioners relate to the group, not its individuals, as the fundamental unit of development. The task of the group is to "build the group" rather than focus on problem solving or "fixing" individuals.
In Argentina an independent school of group analysis stemmed from the work and teachings of Swiss-born Argentine psychoanalyst Enrique Pichon-Rivière. This thinker conceived of a group-centered approach which, although not directly influenced by Foulkes' work, was fully compatible with it.
== Therapeutic principles ==
Irvin Yalom proposed a number of therapeutic factors (originally termed curative factors, but renamed therapeutic factors in the 5th edition of The Theory and Practice of Group Psychotherapy; 1st edition 1970, 5th edition 2005).
Universality
The recognition that experiences and feelings are shared among group members, and that these may be widespread or universal human concerns, serves to remove a group member's sense of isolation, validate their experiences, and raise self-esteem.
Altruism
The group is a place where members can help each other, and the experience of being able to give something to another person can lift the member's self esteem and help develop more adaptive coping styles and interpersonal skills.
Instillation of hope
In a mixed group that has members at various stages of development or recovery, a member can be inspired and encouraged by another member who has overcome the problems with which they are still struggling.
Imparting information
While this is not strictly speaking a psychotherapeutic process, members often report that it has been very helpful to learn factual information from other members in the group. For example, about their treatment or about access to services.
Corrective recapitulation of the primary family experience
Members often unconsciously identify the group therapist and other group members with their own parents and siblings in a process that is a form of transference specific to group psychotherapy. The therapist's interpretations can help group members gain understanding of the impact of childhood experiences on their personality, and they may learn to avoid unconsciously repeating unhelpful past interactive patterns in present-day relationships.
Development of socializing techniques
The group setting provides a safe and supportive environment for members to take risks by extending their repertoire of interpersonal behaviour and improving their social skills.
Imitative behaviour
One way in which group members can develop social skills is through a modeling process, observing and imitating the therapist and other group members. For example, sharing personal feelings, showing concern, and supporting others.
Cohesiveness
It has been suggested that this is the primary therapeutic factor from which all others flow. Humans are herd animals with an instinctive need to belong to groups, and personal development can only take place in an interpersonal context. A cohesive group is one in which all members feel a sense of belonging, acceptance, and validation.
Existential factors
Learning that one has to take responsibility for one's own life and the consequences of one's decisions.
Catharsis
Catharsis is the experience of relief from emotional distress through the free and uninhibited expression of emotion. When members tell their story to a supportive audience, they can obtain relief from chronic feelings of shame and guilt.
Interpersonal learning
Group members achieve a greater level of self-awareness through the process of interacting with others in the group, who give feedback on the member's behaviour and impact on others.
Self-understanding
This factor overlaps with interpersonal learning but refers to the achievement of greater levels of insight into the genesis of one's problems and the unconscious motivations that underlie one's behaviour.
== Settings ==
Group therapy can form part of the therapeutic milieu of a psychiatric in-patient unit or ambulatory psychiatric partial hospitalization (also known as day hospital treatment).
In addition to classical "talking" therapy, group therapy in an institutional setting can also include group-based expressive therapies such as drama therapy, psychodrama, art therapy, and non-verbal types of therapy such as music therapy and dance/movement therapy.
Group psychotherapy is a key component of milieu therapy in a therapeutic community. The total environment or milieu is regarded as the medium of therapy: all interactions and activities are regarded as potentially therapeutic, are subject to exploration and interpretation, and are explored in daily or weekly community meetings. However, interactions between the culture of group psychotherapeutic settings and the more managerial norms of external authorities may create 'organizational turbulence' which can critically undermine a group's ability to maintain a safe yet challenging 'formative space'. Academics at the University of Oxford studied the inter-organizational dynamics of a national democratic therapeutic community over a period of four years; they found that external steering by authorities eroded the community's therapeutic model, produced a crisis, and led to an intractable conflict which resulted in the community's closure.
A form of group therapy has been reported to be effective in psychotic adolescents and recovering addicts. Projective psychotherapy uses an outside text such as a novel or motion picture to provide a "stable delusion" for the former cohort and a safe focus for repressed and suppressed emotions or thoughts in the latter. Patient groups read a novel or collectively view a film. They then participate collectively in the discussion of plot, character motivation and author motivation. In the case of films, sound track, cinematography and background are also discussed and processed. Under the guidance of the therapist, defense mechanisms are bypassed by the use of signifiers and semiotic processes. The focus remains on the text rather than on personal issues. It was popularized in the science fiction novel, Red Orc's Rage.
Group therapy is now often utilized in private practice settings.
Group analysis has become widespread in Europe, and especially the United Kingdom, where it has become the most common form of group psychotherapy. Interest from Australia, the former Soviet Union and the African continent is also growing.
Psychedelic-assisted group psychotherapy can be more cost-efficient in comparison to individual psychedelic-assisted psychotherapy models, because therapists can split the costs among all participants of the group.
== Research on effectiveness ==
A 2008 meta-analysis found that individual therapy may be slightly more effective than group therapy initially, but this difference seems to disappear after 6 months. There is clear evidence for the effectiveness of group psychotherapy for depression: a meta-analysis of 48 studies showed an overall effect size of 1.03, which is clinically highly significant. Similarly, a meta-analysis of five studies of group psychotherapy for adult sexual abuse survivors showed moderate to strong effect sizes, and there is also good evidence for effectiveness with chronic traumatic stress in war veterans.
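As a point of reference for the effect sizes quoted above and below, the figures are most naturally read as standardized mean differences; the short sketch that follows assumes a Cohen's d-style measure and is offered only as a general statistical illustration, since the cited meta-analyses may have used a related estimator (such as Hedges' g) or their own variant:

d = \frac{\bar{x}_{\text{treated}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^{2} + (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}}

Under the usual rule of thumb, values near 0.2 are considered small, 0.5 moderate, and 0.8 or above large, which is why an overall effect size of 1.03 is described as clinically highly significant.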
There is less robust evidence of good outcomes for patients with borderline personality disorder, with some studies showing only small to moderate effect sizes. The authors comment that these poor outcomes might reflect a need for additional support for some patients, in addition to the group therapy. This is borne out by the impressive results obtained using mentalization-based treatment, a model that combines dynamic group psychotherapy with individual psychotherapy and case management.
Most outcome research is carried out using time-limited therapy with diagnostically homogenous groups. However, long-term intensive interactional group psychotherapy assumes diverse and diagnostically heterogeneous group membership, and an open-ended time scale for therapy. Good outcomes have also been demonstrated for this form of group therapy.
== Computer-supported group therapy ==
Research on computer-supported and computer-based interventions has increased significantly since the mid-1990s. For a comprehensive overview of current practices see: Computer-supported psychotherapy.
Several feasibility studies examined the impact of computer-, app- and media-support on group interventions. Most investigated interventions implemented short rationales, which usually were based on principles of cognitive behaviour therapy (CBT). Most research focussed on:
Anxiety disorders (e.g. social phobia, generalised anxiety disorder)
Depression (e.g. mild to moderate Major Depression)
Other disorders (e.g. hoarding)
While the evidence base for group therapy is very limited, preliminary research in individual therapy suggests possible increases in treatment efficiency or effectiveness. Further, the use of app- or computer-based monitoring has been investigated several times. Reported advantages of the modern format include improved between-session transfer and patient-therapist communication, as well as increased treatment transparency and intensity. Negative effects may occur in terms of dissonance due to non-compliance with online tasks, or the constriction of in-session group interaction. Last but not least, group phenomena might influence the motivation to engage with online tasks.
== See also ==
Family therapy
Henry Ezriel
Self-help groups for mental health
Twelve-step program
Impact therapy
American Group Psychotherapy Association, founded in 1943
== Notes ==
== Further reading ==
Luchins AS (1964). Group Therapy - A Guide (3rd ed.). New York: Random House. OCLC 599119917.
Yalom ID, Leszcz M (2005). The theory and practice of group psychotherapy (5th ed.). New York: Basic Books. p. 272. ISBN 978-0-465-09284-0. | Wikipedia/Group_psychotherapy |
For patients with Alzheimer's disease, music therapy provides a beneficial interaction between a patient and an individualized musical regimen and has been shown to increase cognition and slow the deterioration of memory loss. Music therapy is a clinical and evidence-based intervention that involves music in some capacity and includes both a participant and a music therapist who have completed an accredited music therapy program.
The forms of music therapy are broad in nature, and can range from individual or group singing sessions, to active participation in music making, to listening to songs individually. Alzheimer's disease (AD) is a fatal condition that continuously deteriorates brain chemistry over time. Accounting for more than 60% of the dementia in older people, AD gradually leads to detrimental effects on cognitive function, linguistic abilities, and memory. Within populations living with Alzheimer's, music therapy is sometimes used to assist in palliating the behavioral and psychological symptoms of this disease. Music therapy is based in scientific findings and can elicit change in individuals as well as groups through music. Personalized music therapy has been shown in some cases to be able to lessen certain symptoms, including behavioral symptoms, such as physical or verbal outbursts and hallucinations, and cognitive symptoms related to dementia.
This personalized treatment approach has also been utilized in music therapy which, in comparison to pharmacological treatments, is a very low-cost way to help manage aspects of the disease throughout its progression. It is also a preferable form of additional treatment over medications for behavioral symptoms (i.e. anti-depressants), as side effects are avoided. Because of the recognized decreases in behavioral outbursts, music therapy has been recognized as a care plan that is beneficial to the patient as well as the caretaker. However, the effects of music therapy on individuals with Alzheimer's disease have proven to be short-term, lasting a maximum of three months after the discontinuation of treatment.
== Recent research ==
Currently, there is no known cure for Alzheimer's disease, though a range of medications and alternative therapy options have been shown to be effective in mitigating the progression of the disease. Medication-based treatments such as low-dose leuco-methylthioninium (LMTM) were shown to be most effective at higher doses of around 100 mg daily; unfortunately, these higher doses were shown to cause severe gastrointestinal and urinary reactions. Due to the physiologically destructive nature of Alzheimer's disease, many medications that are able to slow physical deterioration also carry many unwanted side effects because of their harsh nature. While cognitive interventions such as primarily computer-based training programs have shown high efficacy in improving delayed memory, recognition, clock-drawing, digit forward and digit backward tests, these programs can be expensive and present learning barriers to the senior populations who prefer traditional pencil-and-paper methods. Aside from medication and cognition-based treatments, music therapy offers a cheaper intervention that reduces the stress and side effects associated with other treatment options.
Music therapy has been studied in the psychological community and has been found to be effective in reducing behavioral symptoms as well as positively influencing emotional and cognitive well-being. In one study, Alzheimer's patients in 98 nursing homes were exposed to music therapy and the results were compared to 98 controls that were not exposed to music therapy. The results suggested that this program helped reduce the use of medication, in combination with the reduction of behavioral and physiological symptoms of dementia. This is the first empirical study to show that the Music and Memory program, described below, exhibits efficacy in reducing antipsychotic and anxiolytic medicine use as well as behavioral and psychological symptoms of dementia. Additionally, another study found that music therapy demonstrated sedative and relaxing effects on patients. Certain neurotransmitter levels, such as those for norepinephrine and epinephrine, significantly increased after four weeks of music therapy. Music therapy has also been found to help slow down the deterioration of linguistic ability.
Similarly, a study was conducted that had Alzheimer's patients in nursing home facilities assigned to one of three activities: play puzzles, paint, or listen to music from the patients' youth. When tested six months later, those who listened to music were more alert and in better moods and had greater recall of their own personal events when compared to the groups who painted or played with puzzles.
Research has even suggested that Alzheimer's patients may be capable of learning entirely new music. Alzheimer's patients were taught an original song by a group leader, and over the course of three sessions there were visible improvements and increased alertness among the patients. Alzheimer's patients have experienced growth in alertness, as well as the remarkable retrieval of memories that they attach to whatever song they are being exposed to. Music makes physical and emotional connections that trigger memories that would not otherwise have been retrieved were it not for the rhythm, melody, and melodic phrasing of the given musical piece; in many cases, music has been said to be one of the last things that an Alzheimer's patient forgets how to do (usually attributed to muscle memory). Additionally, another study found similar results: Alzheimer's patients, when prompted, could remember new songs and tunes that were taught to them, and with regular practice could, in fact, learn new music.
Additionally, a qualitative study summarized 21 studies conducted since 1985 on those with Alzheimer's disease and the effect of music therapy. These studies varied in nature, but the authors concluded that music therapy can be a successful intervention and can improve both cognitive and emotional behaviors, as well as decrease some of the behavioral issues associated with Alzheimer's disease. While the methods were varied in nature, the converging evidence in the various experiments lend optimism for the validity of music therapy in this subset of the population.
One dissertation published in 2021 included 43 elderly female Americans in 3 East coast rehabilitation centers for dementia who went through a 5-week individualized singing intervention. The results of the study highlight the importance of individualization when it comes to music-based interventions (MBIs) and dementia, as high participation individuals showed much greater cognitive, functional, and narrative improvements than those that participated less. The individualized selection of familiar, nostalgic, and new songs combined with a personalized tonal key and rhythm that best fit the patient's vocal and physical abilities was vital to realizing these improvements.
=== Constraints with research ===
However, some of the current research does not support the claim that all musical memory is preserved in patients with Alzheimer's disease. A paper reviewing eight case studies and three group studies found that certain kinds of musical memory, such as remembering familiar music from one's youth, might not be preserved. However, Alzheimer's patients who were musicians showed greater retention of musical memory than those without prior musical experience. This research suggests that music therapy may not be effective in the same capacity for every patient affected by Alzheimer's disease, and that differences may be highly variable in nature. While it is logical that treatment works on a case-by-case basis, it is important to remember that individuals react differently to any treatment, and results vary. The review also highlights the large methodological differences between studies, which make it difficult to synthesize across findings and study designs, and points to an important limitation: most of these studies lack a control group, so drawing causal conclusions is impossible. A more recent review of music training-induced neuroplasticity attributes the inconsistencies to varying study designs and between-subject comparisons, emphasizing the importance of longitudinal within-subject designs for future research.
=== Future research ===
Recently, new combinations of MBIs with other noninvasive methods have been proposed to bolster the efficacy of treatment. One such proposal, published in 2020, presented a new clinical framework combining MBIs with more recent gamma-frequency sensory stimulation approaches to noninvasively treat neurodegenerative disorders. The authors suggest that such a combination could enhance MBIs by targeting multiple biomarkers of dementia while activating auditory-reward networks. They suggest that such interventions would be most effective at the mild cognitive impairment (MCI) stage for slowing or reversing cognitive decline, as the resting-state connectivity of auditory and reward systems in the brain has been shown to be heightened in MCI patients compared to AD patients and even healthy controls.
== Music & Memory program ==
Music programs in general have been newly investigated as a more formal and structured way to alleviate cognitive impairments associated with Alzheimer's disease and other related dementias. Providing five sessions of music-based therapy has been found to generally improve behavioral problems, reduce anxiety, and enhance emotional well-being. In contrast, however, no clear evidence of music's effect on aggression or agitation was observed. The MUSIC & MEMORY® program has been recognized as the most widely used music treatment strategy, and its efficacy has been studied by psychologists and noted positively in several formal studies, including a 2018 study by the University of Utah Health in Salt Lake City. The MUSIC & MEMORY® Program, developed by Dan Cohen, MSW, in 2006, has helped increase awareness and efficacy of music therapy in relation to Alzheimer's and other related dementias. This specific program trains nursing home staff and other elder care professionals, as well as family caregivers, how to create and provide personalized playlists using iPods/mp3 players and related digital audio systems. The utilization of these technologies enables those struggling with Alzheimer's, dementia and other cognitive and physical challenges to reconnect with the world through music-triggered memories. By providing access and education, and by creating a network of MUSIC & MEMORY® Certified organizations, the institution aims to make this form of personalized therapeutic music a standard of care throughout the health care industry. As of 2020, Music & Memory has certified over 5,300 organizations in the United States, including state-sponsored projects in California, Texas, Ohio, and Wisconsin. The program's wide range of influence is estimated to have affected over 75,000 patients to date.
At the Columbia Health Care Center in Wyocena, Wisconsin, the efficacy of the MUSIC & MEMORY® program was tested as a means of treating dysphagia in those with advanced dementia. Patients were instructed to listen to an individualized playlist for half an hour daily before supper. The study found indications of an enhanced swallowing mechanism, less choking during supper, improved overall nutritional status and reduced weight loss, a reduced need for speech interventions, and an enhanced quality of life. One patient had to be removed from the study due to overstimulation with the iPod, and thus these results stemmed from four participants.
== Power of music ==
Music influences many regions of the brain including those associated with emotional and creative areas. Because of this, music has the power to evoke emotion and memories from deep in the past, so it is logical that Alzheimer's patients have the ability to recall musical memories from many decades prior given the richness and vividness of these memories. Music memory can be preserved for those living with Alzheimer's disease and brought forth through various techniques of music therapy. Initial research has suggested that reminiscence music can be associated with daily tasks to aid in their successful completion and reduce the burden on caregivers. Areas of the brain that are influenced by music are one of the last regions to degenerate due to the progression of Alzheimer's disease.
Music therapy has a positive effect on immediate and delayed word recall in patients with mild AD. In a clinical setting, both short and long-lasting musical stimulation were shown to have a positive effect on category fluency in verbal tasks as well as on fluency and speech content. Music therapy was also found to be effective in controlling the psychiatric and behavioral side effects of AD, leading to a decrease in caregiver distress as well as an increase in quality of life.
Alzheimer's patients can often remember songs from their youth even when far along in the progression of their disease. Dementia facilities use music as a means of entertainment, since it often brings joy and elicits memories. The documentary Alive Inside describes how music activates more parts of the brain than any other stimulus and records itself in our motions and emotions; the film describes these as the last parts of the brain touched by Alzheimer's.
Music therapists have the capability to develop relationships and bonds with their patients, especially through repeated sessions. Music can help with adjustment to unfamiliar environments as well as to settings that trigger memories from deep in the past. These sessions can often lead to uncontrollable emotions, as evidenced by the nursing home patients Dan Cohen highlights in Alive Inside. One patient documented in Alive Inside had used a walker for many years, yet while listening to a song from her youth she was able to dance without it.
== Popular media ==
Alzheimer's disease has been discussed in popular media outlets. The 2014 film Alive Inside follows patients with Alzheimer's disease and demonstrates how music therapy can be used to alleviate some suffering and pain. The film highlights the impact that music, particularly music from one's youth, can have on those who cannot communicate in traditional ways. Alive Inside was screened at the Sundance Film Festival, where it won the Audience Award for U.S. Documentaries. In response to the film, the Alive Inside Foundation, founded in 2010, rose in popularity. The foundation's motto is the "Empathy Revolution", and it aims to connect youth and older adults with Alzheimer's disease, specifically through music. The goal of the foundation is to provide music, in the form of iPods, to every nursing home across the United States.
Additionally, the Alzheimer's Association provides a list of caregiving tips for the relatives and friends of people with Alzheimer's. It states that music therapy has been found to enhance cognition and can help caregivers better care for those affected by Alzheimer's.
== References == | Wikipedia/Music_therapy_for_Alzheimer's_disease |
A therapy or medical treatment is the attempted remediation of a health problem, usually following a medical diagnosis. Both words, treatment and therapy, are often abbreviated tx or Tx.
As a rule, each therapy has indications and contraindications. There are many different types of therapy. Not all therapies are effective. Many therapies can produce unwanted adverse effects.
Treatment and therapy are often synonymous, especially in the usage of health professionals. However, in the context of mental health, the term therapy may refer specifically to psychotherapy.
== Semantic field ==
The words care, therapy, treatment, and intervention overlap in a semantic field, and thus they can be synonymous depending on context. Moving rightward through that order, the connotative level of holism decreases and the level of specificity (to concrete instances) increases. Thus, in health-care contexts (where its senses are always noncount), the word care tends to imply a broad idea of everything done to protect or improve someone's health (for example, as in the terms preventive care and primary care, which connote ongoing action), although it sometimes implies a narrower idea (for example, in the simplest cases of wound care or postanesthesia care, a few particular steps are sufficient, and the patient's interaction with the provider of such care is soon finished). In contrast, the word intervention tends to be specific and concrete, and thus the word is often countable; for example, one instance of cardiac catheterization is one intervention performed, and coronary care (noncount) can require a series of interventions (count). At the extreme, the piling on of such countable interventions amounts to interventionism, a flawed model of care lacking holistic circumspection—merely treating discrete problems (in billable increments) rather than maintaining health. Therapy and treatment, in the middle of the semantic field, can connote either the holism of care or the discreteness of intervention, with context conveying the intent in each use. Accordingly, they can be used in both noncount and count senses (for example, therapy for chronic kidney disease can involve several dialysis treatments per week).
The words aceology and iamatology are obscure and obsolete synonyms referring to the study of therapies.
The English word therapy comes via Latin therapīa from Ancient Greek: θεραπεία and literally means "curing" or "healing". The term therapeusis is a somewhat archaic doublet of the word therapy.
== Types of therapies ==
Therapy as a treatment for a physical or mental condition is based on knowledge usually drawn from one of three separate fields (or a combination of them): conventional medicine (allopathic, Western biomedicine, relying on scientific approach and evidence-based practice), traditional medicine (age-old cultural practices), and alternative medicine (healthcare procedures "not readily integrated into the dominant healthcare model").
=== By chronology, priority, or intensity ===
==== Levels of care ====
Levels of care classify health care into categories of chronology, priority, or intensity, as follows:
Urgent care handles health issues that need to be handled today but are not necessarily emergencies; the urgent care venue can send a patient to the emergency care level if it turns out to be needed.
In the United States (and possibly various other countries), urgent care centers also serve another main purpose: U.S. primary care practices have evolved in recent decades into a configuration in which urgent care centers provide the portions of primary care that cannot wait a month, because getting an appointment with a primary care practitioner is often subject to a waitlist of 2 to 8 weeks.
Emergency care handles medical emergencies and is a first point of contact or intake for less serious problems, which can be referred to other levels of care as appropriate. This therapy is often given to patients before a definitive diagnosis is made.
Intensive care, also called critical care, is care for extremely ill or injured patients. It thus requires high resource intensity, knowledge, and skill, as well as quick decision making.
Ambulatory care is care provided on an outpatient basis. Typically patients can walk into and out of the clinic under their own power (hence "ambulatory"), usually on the same day. This care type also involves surgery which, according to recent research, offers "generally superior 30-day outcomes relative to inpatient-based care".
Home care is care at home, including care from providers (such as physicians, nurses, and home health aides) making house calls, care from caregivers such as family members, and patient self-care.
Primary care is meant to be the main kind of care in general, and ideally a medical home that unifies care across referred providers. The current trend in this area is digitalization, which aims to ensure open access to information about therapies, health issues, and recent progress in biomedical research.
Secondary care is care provided by medical specialists and other health professionals who generally do not have first contact with patients, for example, cardiologists, urologists, and dermatologists. A patient reaches secondary care as a next step from primary care, typically by provider referral although sometimes by patient self-initiative. According to a systematic review, areas for developing secondary care from the patients' viewpoint may be classified into four domains that should usefully guide future improvement of this care stage: "barriers to care, communication, coordination, and relationships and personal value".
Tertiary care is specialized consultative care, usually for inpatients and on referral from a primary or secondary health professional, in a facility that has personnel and facilities for advanced medical investigation and treatment, such as a tertiary referral hospital.
Follow-up care is additional care during or after convalescence. Aftercare is generally synonymous with follow-up care. One key area of development is telehealth, including non-clinical services such as provider training, administrative meetings, and continuing medical education; it offers opportunities to improve access to care, increase provider and patient productivity through reduced travel, save on potential expenses, and expand services.
End-of-life care is care near the end of one's life. It often includes the following:
Palliative care is supportive care, most especially (but not necessarily) near the end of life.
Hospice care is palliative care very near the end of life when cure is very unlikely. Its main goal is comfort, both physical and mental. A systematic meta-review showed that the most cost-efficient approach is home-based end-of-life care, with reduced overall "resource use and improved patient and carer outcomes".
==== Lines of therapy ====
Treatment decisions often follow formal or informal algorithmic guidelines. Treatment options can often be ranked or prioritized into lines of therapy: first-line therapy, second-line therapy, third-line therapy, and so on. First-line therapy (sometimes referred to as induction therapy, primary therapy, or front-line therapy) is the first therapy that will be tried. It is usually given priority over other options either because it is (1) formally recommended on the basis of clinical trial evidence for its best-available combination of efficacy, safety, and tolerability, or because it is (2) chosen based on the clinical experience of the physician. If a first-line therapy either fails to resolve the issue or produces intolerable side effects, additional (second-line) therapies may be substituted or added to the treatment regimen, followed by third-line therapies, and so on.
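The stepwise logic of ranked lines of therapy can be pictured with a brief, purely illustrative sketch. The code below is not drawn from any clinical guideline; the list of therapy lines, the switch_line function, and the decision criteria are all assumptions made for the example.

```python
# Illustrative sketch only: stepping through ranked lines of therapy.
# The therapy names and the switch_line() helper are hypothetical.

LINES_OF_THERAPY = ["first-line therapy", "second-line therapy", "third-line therapy"]

def switch_line(current_index, resolved, intolerable_side_effects):
    """Return the index of the next line of therapy to try.

    Stay on the current line if it resolved the issue without intolerable
    side effects; otherwise advance to the next line, or return None when
    the ranked options are exhausted.
    """
    if resolved and not intolerable_side_effects:
        return current_index
    next_index = current_index + 1
    return next_index if next_index < len(LINES_OF_THERAPY) else None

# Example: first-line therapy fails, so the second-line option is selected.
index = switch_line(0, resolved=False, intolerable_side_effects=False)
print(LINES_OF_THERAPY[index] if index is not None else "ranked options exhausted")
```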
An example of a context in which the formalization of treatment algorithms and the ranking of lines of therapy is very extensive is chemotherapy regimens. Because of the great difficulty in successfully treating some forms of cancer, one line after another may be tried. In oncology the count of therapy lines may reach 10 or even 20.
Often multiple therapies may be tried simultaneously (combination therapy or polytherapy). Thus combination chemotherapy is also called polychemotherapy, whereas chemotherapy with one agent at a time is called single-agent therapy or monotherapy. Single-agent therapy is a care algorithm that focuses on one specific drug or procedure, utilizing a single therapeutic agent rather than combining multiple ones. Multiagent therapy is treatment with two or more drugs or procedures. Comprehensive therapy combines various forms of medical treatment to provide the most effective care for patients.
Adjuvant therapy is therapy given in addition to the primary, main, or initial treatment, but simultaneously (as opposed to second-line therapy). Neoadjuvant therapy is therapy that is begun before the main therapy. Thus one can consider surgical excision of a tumor as the first-line therapy for a certain type and stage of cancer even though radiotherapy is used before it; the radiotherapy is neoadjuvant (chronologically first but not primary in the sense of the main event). Premedication is conceptually not far from this, but the words are not interchangeable; cytotoxic drugs to put a tumor "on the ropes" before surgery delivers the "knockout punch" are called neoadjuvant chemotherapy, not premedication, whereas things like anesthetics or prophylactic antibiotics before dental surgery are called premedication.
Step therapy or stepladder therapy is a specific type of prioritization by lines of therapy. It is controversial in American health care because unlike conventional decision-making about what constitutes first-line, second-line, and third-line therapy, which in the U.S. reflects safety and efficacy first and cost only according to the patient's wishes, step therapy attempts to mix cost containment by someone other than the patient (third-party payers) into the algorithm.
Therapy freedom refers to the prescription of an unlicensed medicine (one without a marketing authorization issued by the country's licensing authority) and involves a negotiation between individual and group rights. Comprehensive research in Australia, the Czech Republic, India, Israel, Italy, the Netherlands, Spain, Serbia, Sweden, the UK, and the USA showed that the rate of unlicensed medicine prescription has been reported to range from 0.3% to 35% depending on the country. In many jurisdictions, therapy freedom is limited to cases in which no treatment exists that is both well-established and more efficacious.
=== By intent ===
=== By intervention ===
Invasive therapy is achieved either through surgery or through the use of drugs; medical invasive treatments can thus be divided into two main categories: pharmacotherapy and surgery.
Noninvasive therapies are medical treatments that do not involve entry into the body. They can be classified into five main categories: neurotherapy, physical therapy, occupational therapy, radiation therapy, and psychotherapy.
=== By therapy composition ===
Treatments can be classified according to the method of treatment:
==== By matter ====
by drugs: pharmacotherapy, chemotherapy (also, medical therapy often means specifically pharmacotherapy)
by medical devices: implantation
cardiac resynchronization therapy
by specific molecules: molecular therapy (although most drugs are specific molecules, molecular medicine refers in particular to medicine relying on molecular biology)
by specific biomolecular targets: targeted therapy
molecular chaperone therapy
by chelation: chelation therapy
by specific chemical elements:
by metals:
by heavy metals:
by gold: chrysotherapy (aurotherapy)
by platinum-containing drugs: platin therapy
by biometals
by lithium: lithium therapy
by potassium: potassium supplementation
by magnesium: magnesium supplementation
by chromium: chromium supplementation; phonemic neurological hypochromium therapy
by copper: copper supplementation
by nonmetals:
by diatomic oxygen: oxygen therapy, hyperbaric oxygen therapy (hyperbaric medicine)
transdermal continuous oxygen therapy
by triatomic oxygen (ozone): ozone therapy
by fluoride: fluoride therapy
by other gases: medical gas therapy
by water:
hydrotherapy
aquatic therapy
rehydration therapy
oral rehydration therapy
water cure (therapy)
by biological materials (biogenic substances, biomolecules, biotic materials, natural products), including their synthetic equivalents: biotherapy
by whole organisms
by viruses: virotherapy
by bacteriophages: phage therapy
by animal interaction: see animal interaction section
by constituents or products of organisms
by plant parts or extracts (but many drugs are derived from plants, even when the term phytotherapy is not used)
scientific type: phytotherapy
traditional (prescientific) type: herbalism
by animal parts: quackery involving shark fins, tiger parts, and so on, often driving species toward threatened or endangered status
by genes: gene therapy
gene therapy for epilepsy
gene therapy for osteoarthritis
gene therapy for color blindness
gene therapy of the human retina
gene therapy in Parkinson's disease
by epigenetics: epigenetic therapy
by proteins: protein therapy (but many drugs are proteins despite not being called protein therapy)
by enzymes: enzyme replacement therapy
by hormones: hormone therapy
hormonal therapy (oncology)
hormone replacement therapy
estrogen replacement therapy
androgen replacement therapy
hormone replacement therapy (menopause)
transgender hormone therapy
feminizing hormone therapy
masculinizing hormone therapy
antihormone therapy
androgen deprivation therapy
by whole cells: cell therapy (cytotherapy)
by stem cells: stem cell therapy
by immune cells: see immune system products below
by immune system products: immunotherapy, host modulatory therapy
by immune cells:
T-cell vaccination
cell transfer therapy
autologous immune enhancement therapy
TK cell therapy
by humoral immune factors: antibody therapy
by whole serum: serotherapy, including antiserum therapy
by immunoglobulins: immunoglobulin therapy
by monoclonal antibodies: monoclonal antibody therapy
by urine: urine therapy (some scientific forms; many prescientific or pseudoscientific forms)
by food and dietary choices:
medical nutrition therapy
grape therapy (quackery)
by salts (but many drugs are the salts of organic acids, even when drug therapy is not called by names reflecting that)
by salts in the air
by natural dry salt air: "taking the cure" in desert locales (especially common in prescientific medicine; for example, one 19th-century way to treat tuberculosis)
by artificial dry salt air:
low-humidity forms of speleotherapy
negative air ionization therapy
by moist salt air:
by natural moist salt air: seaside cure (especially common in prescientific medicine)
by artificial moist salt air: water vapor forms of speleotherapy
by salts in the water
by mineral water: spa cure ("taking the waters") (especially common in prescientific medicine)
by seawater: seaside cure (especially common in prescientific medicine)
by aroma: aromatherapy
by other materials with mechanism of action unknown
by occlusion with duct tape: duct tape occlusion therapy
==== By energy ====
by electric energy as electric current: electrotherapy, electroconvulsive therapy
Transcranial magnetic stimulation
Vagus nerve stimulation
by magnetic energy:
magnet therapy
pulsed electromagnetic field therapy
magnetic resonance therapy
by electromagnetic radiation (EMR):
by light: light therapy (phototherapy)
ultraviolet light therapy
PUVA therapy
photodynamic therapy
photothermal therapy
cytoluminescent therapy
blood irradiation therapy
by darkness: dark therapy
by lasers: laser therapy
low level laser therapy
by gamma rays: radiosurgery
Gamma Knife radiosurgery
stereotactic radiation therapy
cobalt therapy
by radiation generally: radiation therapy (radiotherapy)
intraoperative radiation therapy
by EMR particles:
particle therapy
proton therapy
electron therapy
intraoperative electron radiation therapy
Auger therapy
neutron therapy
fast neutron therapy
neutron capture therapy of cancer
by radioisotopes emitting EMR:
by nuclear medicine
by brachytherapy
quackery type: electromagnetic therapy (alternative medicine)
by mechanical energy: manual therapy such as massotherapy, and therapy by exercise as in physical therapy
inversion therapy
by sound:
by ultrasound:
ultrasonic lithotripsy
extracorporeal shockwave therapy
sonodynamic therapy
by music: music therapy
by temperature
by heat: heat therapy (thermotherapy)
by moderately elevated ambient temperatures: hyperthermia therapy
by dry warm surroundings: Waon therapy
by dry or humid warm surroundings: sauna, including infrared sauna, for sweat therapy
by cold:
by extreme cold to specific tissue volumes: cryotherapy
by ice and compression: cold compression therapy
by ambient cold:
hypothermia therapy for neonatal encephalopathy (in newborns)
targeted temperature management (therapeutic hypothermia, protective hypothermia)
by hot and cold alternation: contrast bath therapy
==== By procedure and human interaction ====
Surgery
by counseling, such as psychotherapy (see also: list of psychotherapies)
systemic therapy
by group psychotherapy
by cognitive behavioral therapy
by cognitive therapy
by behaviour therapy
by dialectical behavior therapy
by cognitive emotional behavioral therapy
by cognitive rehabilitation therapy
by family therapy
by education
by psychoeducation
by information therapy
by speech therapy, physical therapy, occupational therapy, vision therapy, massage therapy, chiropractic or acupuncture
by lifestyle modifications, such as avoiding unhealthy food or maintaining a predictable sleep schedule
by coaching
==== By animal interaction ====
by pets, assistance animals, or working animals: animal-assisted therapy
by horses: equine therapy, hippotherapy
by dogs: pet therapy with therapy dogs, including grief therapy dogs
by cats: pet therapy with therapy cats
by fish: ichthyotherapy (wading with fish), aquarium therapy (watching fish)
by maggots: maggot therapy
by worms:
by internal worms: helminthic therapy
by leeches: leech therapy
by immersion: animal bath
==== By meditation ====
by mindfulness: mindfulness-based cognitive therapy
==== By reading ====
by bibliotherapy
==== By creativity ====
by expression: expressive therapy
by writing: writing therapy
journal therapy
by play: play therapy
by art: art therapy
sensory art therapy
comic book therapy
by gardening: horticultural therapy
by dance: dance therapy
by drama: drama therapy
by recreation: recreational therapy
by music: music therapy
==== By sleeping and waking ====
by deep sleep: deep sleep therapy
by sleep deprivation: wake therapy
== See also ==
== References ==
== External links ==
The dictionary definition of therapy at Wiktionary
"Chapter Nine of the Book of Medicine Dedicated to Mansur, with the Commentary of Sillanus de Nigris" is a Latin book by Rhazes, from 1483, that is known for its ninth chapter, which is about therapeutics | Wikipedia/Treatment_modality |
Cognitive behavioral therapy (CBT) is a form of psychotherapy that aims to reduce symptoms of various mental health conditions, primarily depression, PTSD, and anxiety disorders.
Cognitive behavioral therapy focuses on challenging and changing cognitive distortions (thoughts, beliefs, and attitudes) and their associated behaviors in order to improve emotional regulation and help the individual develop coping strategies to address problems.
Though originally designed as an approach to treat depression, CBT is often prescribed for the evidence-informed treatment of many mental health and other conditions, including anxiety, substance use disorders, marital problems, ADHD, and eating disorders. CBT includes a number of cognitive or behavioral psychotherapies that treat defined psychopathologies using evidence-based techniques and strategies.
CBT is a common form of talk therapy based on the combination of the basic principles from behavioral and cognitive psychology. It is different from other approaches to psychotherapy, such as the psychoanalytic approach, where the therapist looks for the unconscious meaning behind the behaviors and then formulates a diagnosis. Instead, CBT is a "problem-focused" and "action-oriented" form of therapy, meaning it is used to treat specific problems related to a diagnosed mental disorder. The therapist's role is to assist the client in finding and practicing effective strategies to address the identified goals and to alleviate symptoms of the disorder. CBT is based on the belief that thought distortions and maladaptive behaviors play a role in the development and maintenance of many psychological disorders and that symptoms and associated distress can be reduced by teaching new information-processing skills and coping mechanisms.
When compared to psychoactive medications, review studies have found CBT alone to be as effective for treating less severe forms of depression and borderline personality disorder. Some research suggests that CBT is most effective when combined with medication for treating mental disorders, such as major depressive disorder. CBT is recommended as the first line of treatment for the majority of psychological disorders in children and adolescents, including aggression and conduct disorder. Researchers have found that other bona fide therapeutic interventions were equally effective for treating certain conditions in adults. Along with interpersonal psychotherapy (IPT), CBT is recommended in treatment guidelines as a psychosocial treatment of choice. It is recommended by the American Psychiatric Association, the American Psychological Association, and the British National Health Service.
== History ==
=== Philosophy ===
Precursors of certain fundamental aspects of CBT have been identified in various ancient philosophical traditions, particularly Stoicism. Stoic philosophers, particularly Epictetus, believed logic could be used to identify and discard false beliefs that lead to destructive emotions, which has influenced the way modern cognitive-behavioral therapists identify cognitive distortions that contribute to depression and anxiety. Aaron T. Beck's original treatment manual for depression states, "The philosophical origins of cognitive therapy can be traced back to the Stoic philosophers". Another example of Stoic influence on cognitive theorists is Epictetus's influence on Albert Ellis. A key philosophical figure who influenced the development of CBT was John Stuart Mill through his creation of Associationism, a predecessor of classical conditioning and behavioral theory.
Principles originating from Buddhism have significantly impacted the evolution of various new forms of CBT, including dialectical behavior therapy, mindfulness-based cognitive therapy, spirituality-based CBT, and compassion-focused therapy.
The modern roots of CBT can be traced to the development of behavior therapy in the early 20th century, the development of cognitive therapy in the 1960s, and the subsequent merging of the two.
=== Behavioral therapy ===
Groundbreaking work in behaviorism began with John B. Watson and Rosalie Rayner's studies of conditioning in 1920. Behaviorally-centered therapeutic approaches appeared as early as 1924 with Mary Cover Jones' work dedicated to the unlearning of fears in children. These were the antecedents of the development of Joseph Wolpe's behavioral therapy in the 1950s. It was the work of Wolpe and Watson, which was based on Ivan Pavlov's work on learning and conditioning, that influenced Hans Eysenck and Arnold Lazarus to develop new behavioral therapy techniques based on classical conditioning.
During the 1950s and 1960s, behavioral therapy became widely used by researchers in the United States, the United Kingdom, and South Africa, who drew inspiration from the behaviorist learning theory of Ivan Pavlov, John B. Watson, and Clark L. Hull.
In Britain, Joseph Wolpe, who applied the findings of animal experiments to his method of systematic desensitization, brought behavioral research to the treatment of neurotic disorders. Wolpe's therapeutic efforts were precursors to today's fear reduction techniques. British psychologist Hans Eysenck presented behavior therapy as a constructive alternative.
At the same time as Eysenck's work, B. F. Skinner and his associates were beginning to have an impact with their work on operant conditioning. Skinner's work was referred to as radical behaviorism and avoided anything related to cognition. However, Julian Rotter in 1954 and Albert Bandura in 1969 contributed to behavior therapy with their works on social learning theory by demonstrating the effects of cognition on learning and behavior modification. The work of Claire Weekes in dealing with anxiety disorders in the 1960s is also seen as a prototype of behavior therapy.
The emphasis on behavioral factors has been described as the "first wave" of CBT.
=== Cognitive therapy ===
One of the first therapists to address cognition in psychotherapy was Alfred Adler, notably with his idea of basic mistakes and how they contributed to the creation of unhealthy behavioral and life goals. Abraham Low believed that someone's thoughts were best changed by changing their actions. Adler and Low influenced the work of Albert Ellis, who developed the earliest cognitive-based psychotherapy, called rational emotive behavior therapy, or REBT. The first version of REBT was announced to the public in 1956.
In the late 1950s, Aaron T. Beck was conducting free association sessions in his psychoanalytic practice. During these sessions, Beck noticed that thoughts were not as unconscious as Freud had previously theorized, and that certain types of thinking may be the culprits of emotional distress. It was from this hypothesis that Beck developed cognitive therapy, and called these thoughts "automatic thoughts". He first published his new methodology in 1967, and his first treatment manual in 1979. Beck has been referred to as "the father of cognitive behavioral therapy".
It was these two therapies, rational emotive therapy, and cognitive therapy, that started the "second wave" of CBT, which emphasized cognitive factors.
=== Merger of behavioral and cognitive therapies ===
Although the early behavioral approaches were successful in many so-called neurotic disorders, they had little success in treating depression. Behaviorism was also losing popularity due to the cognitive revolution. The therapeutic approaches of Albert Ellis and Aaron T. Beck gained popularity among behavior therapists, despite the earlier behaviorist rejection of mentalistic concepts like thoughts and cognitions. Both of these systems included behavioral elements and interventions, with the primary focus being on problems in the present.
In initial studies, cognitive therapy was often contrasted with behavioral treatments to see which was most effective. During the 1980s and 1990s, cognitive and behavioral techniques were merged into cognitive behavioral therapy. Pivotal to this merging was the successful development of treatments for panic disorder by David M. Clark in the UK and David H. Barlow in the US.
Over time, cognitive behavior therapy came to be known not only as a therapy, but as an umbrella term for all cognitive-based psychotherapies. These therapies include, but are not limited to, REBT, cognitive therapy, acceptance and commitment therapy, dialectical behavior therapy, metacognitive therapy, metacognitive training, reality therapy/choice theory, cognitive processing therapy, EMDR, and multimodal therapy.
This blending of theoretical and technical foundations from both behavior and cognitive therapies constituted the "third wave" of CBT. The most prominent therapies of this third wave are dialectical behavior therapy and acceptance and commitment therapy. Despite the increasing popularity of third-wave treatment approaches, reviews of studies reveal there may be no difference in the effectiveness compared with non-third wave CBT for the treatment of depression.
== Medical uses ==
In adults, CBT has been shown to be an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression, eating disorders, chronic low back pain, personality disorders, psychosis, schizophrenia, substance use disorders, and bipolar disorder. It is also effective as part of treatment plans in the adjustment, depression, and anxiety associated with fibromyalgia, and as part of the treatment after spinal cord injuries.
In children or adolescents, CBT is an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression and suicidality, eating disorders and obesity, obsessive–compulsive disorder (OCD), and post-traumatic stress disorder (PTSD), tic disorders, trichotillomania, and other repetitive behavior disorders. CBT has also been used to help improve a variety of childhood disorders, including depressive disorders and various anxiety disorders. CBT has shown to be the most effective intervention for people exposed to adverse childhood experiences in the form of abuse or neglect.
Criticism of CBT sometimes focuses on implementations (such as the UK IAPT) which may result initially in low quality therapy being offered by poorly trained practitioners. However, evidence supports the effectiveness of CBT for anxiety and depression.
Evidence suggests that the addition of hypnotherapy as an adjunct to CBT improves treatment efficacy for a variety of clinical issues.
The United Kingdom's National Institute for Health and Care Excellence (NICE) recommends CBT in the treatment plans for a number of mental health difficulties, including PTSD, OCD, bulimia nervosa, and clinical depression.
=== Depression and anxiety disorders ===
Cognitive behavioral therapy has been shown as an effective treatment for clinical depression. Among psychotherapeutic approaches for major depressive disorder, cognitive behavioral therapy and interpersonal psychotherapy are recommended by clinical practice guidelines including The American Psychiatric Association Practice (APA) Guidelines (April 2000), and the APA endorsed Veteran Affairs clinical practice guideline.
CBT has been shown to be effective in the treatment of adults with anxiety disorders. There is also evidence that using CBT to treat children and adolescents with anxiety disorders was probably more effective (in the short term) than wait list or no treatment and more effective than attention control treatment approaches. Some meta-analyses find CBT more effective than psychodynamic therapy and equal to other therapies in treating anxiety and depression. A 2013 meta-analysis suggested that CBT, interpersonal therapy, and problem-solving therapy outperformed psychodynamic psychotherapy and behavioral activation in the treatment of depression. According to a 2004 review by INSERM of three methods, cognitive behavioral therapy was either proven or presumed to be an effective therapy on several mental disorders. This included depression, panic disorder, post-traumatic stress, and other anxiety disorders.
A systematic review of CBT in depression and anxiety disorders concluded that "CBT delivered in primary care, especially including computer- or Internet-based self-help programs, is potentially more effective than usual care and could be delivered effectively by primary care therapists."
A 2024 systematic review found that exposure and response prevention (ERP), a specific form of cognitive behavioral therapy, is considered a first-line treatment for pediatric obsessive–compulsive disorder (OCD). Research indicates that ERP is effective in both in-person and remote settings, providing flexibility in treatment delivery without compromising efficacy.
In CBT, patients work on reducing fear by changing how they think and act. Instead of thinking of the feared object (for example, a spider) as an imminent threat or danger, patients are taught to reevaluate it as less threatening to their safety and well-being. Instead of avoiding or running from the fear, they are encouraged to face it.
==== Theoretical approaches ====
One etiological theory of depression is Aaron T. Beck's cognitive theory of depression. His theory states that depressed people think the way they do because their thinking is biased towards negative interpretations. Beck's theory rests on the aspect of cognitive behavioral therapy known as schemata. Schemata are the mental maps used to integrate new information into memories and to organize existing information in the mind. An example of a schema would be a person hearing the word "dog" and picturing different versions of the animal that they have grouped together in their mind. According to this theory, depressed people acquire a negative schema of the world in childhood and adolescence as an effect of stressful life events, and the negative schema is activated later in life when the person encounters similar situations.
Beck also described a negative cognitive triad. The cognitive triad is made up of the depressed individual's negative evaluations of themselves, the world, and the future. Beck suggested that these negative evaluations derive from the negative schemata and cognitive biases of the person. According to this theory, depressed people have views such as "I never do a good job", "It is impossible to have a good day", and "things will never get better". A negative schema helps give rise to the cognitive bias, and the cognitive bias helps fuel the negative schema. Beck further proposed that depressed people often have the following cognitive biases: arbitrary inference, selective abstraction, overgeneralization, magnification, and minimization. These cognitive biases are quick to make negative, generalized, and personal inferences of the self, thus fueling the negative schema.
On the other hand, a positive cognitive triad relates to a person's positive evaluations of themself, the world, and the future. More specifically, a positive cognitive triad requires self-esteem when viewing oneself and hope for the future. A person with a positive cognitive triad has a positive schema used for viewing themself in addition to a positive schema for the world and for the future. Cognitive behavioral research suggests a positive cognitive triad bolsters resilience, or the ability to cope with stressful events. Increased levels of resilience are associated with greater resistance to depression.
Another major theoretical approach to cognitive behavioral therapy treatment is the concept of locus of control outlined in Julian Rotter's social learning theory. Locus of control refers to the degree to which an individual's sense of control is either internal or external. An internal locus of control exists when an individual views the outcome of a particular action as being reliant on themselves and their personal attributes, whereas an external locus of control exists when an individual views others or some outside, intangible force, such as luck or fate, as being responsible for the outcome of a particular action.
A basic concept in some CBT treatments used in anxiety disorders is in vivo exposure. CBT-exposure therapy refers to the direct confrontation of feared objects, activities, or situations by a patient. For example, a woman with PTSD who fears the location where she was assaulted may be assisted by her therapist in going to that location and directly confronting those fears. Likewise, a person with a social anxiety disorder who fears public speaking may be instructed to directly confront those fears by giving a speech. This "two-factor" model is often credited to O. Hobart Mowrer. Through exposure to the stimulus, this harmful conditioning can be "unlearned" (referred to as extinction and habituation).
CBT for children with phobias is normally delivered over multiple sessions, but one-session treatment has been shown to be equally effective and is cheaper.
==== Specialized forms of CBT ====
CBT-SP, an adaptation of CBT for suicide prevention (SP), was specifically designed for treating youths who are severely depressed and who have attempted suicide within the past 90 days, and was found to be effective, feasible, and acceptable.
Acceptance and commitment therapy (ACT) is a specialist branch of CBT (sometimes referred to as contextual CBT). ACT uses mindfulness and acceptance interventions and has been found to have a greater longevity in therapeutic outcomes. In a study with anxiety, CBT and ACT improved similarly across all outcomes from pre- to post-treatment. However, during a 12-month follow-up, ACT proved to be more effective, showing that it is a highly viable lasting treatment model for anxiety disorders.
Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating depression and anxiety disorders, including children. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in adolescent anxiety.
==== Combined with other treatments ====
Studies examining animals and humans have provided evidence that glucocorticoids may lead to more successful extinction learning during exposure therapy for anxiety disorders. For instance, glucocorticoids can prevent aversive learning episodes from being retrieved and heighten reinforcement of memory traces, creating a non-fearful reaction in feared situations. A combination of glucocorticoids and exposure therapy may therefore be an improved treatment for people with anxiety disorders.
==== Prevention ====
For anxiety disorders, use of CBT with people at risk has significantly reduced the number of episodes of generalized anxiety disorder and other anxiety symptoms, and also given significant improvements in explanatory style, hopelessness, and dysfunctional attitudes. In another study, 3% of the group receiving the CBT intervention developed generalized anxiety disorder by 12 months postintervention compared with 14% in the control group. Individuals with subthreshold levels of panic disorder significantly benefitted from use of CBT. Use of CBT was found to significantly reduce social anxiety prevalence.
For depressive disorders, a stepped-care intervention (watchful waiting, CBT, and medication if appropriate) achieved a 50% lower incidence rate in a patient group aged 75 or older. Another depression study found a neutral effect compared to personal, social, and health education, and usual school provision, and included a comment on the potential for increased depression scores among people who had received CBT, due to greater self-recognition and acknowledgement of existing symptoms of depression and negative thinking styles. A further study also saw a neutral result. A meta-study of the Coping with Depression course, a cognitive behavioral intervention delivered by a psychoeducational method, saw a 38% reduction in risk of major depression.
=== Bipolar disorder ===
Many studies show CBT, combined with pharmacotherapy, is effective in improving depressive symptoms, mania severity and psychosocial functioning with mild to moderate effects, and that it is better than medication alone.
INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bipolar disorder, as well as schizophrenia, depression, panic disorder, post-traumatic stress, anxiety disorders, bulimia, anorexia, personality disorders, and alcohol dependency.
=== Psychosis ===
In long-term psychoses, CBT is used to complement medication and is adapted to meet individual needs. Interventions particularly related to these conditions include exploring reality testing, changing delusions and hallucinations, examining factors which precipitate relapse, and managing relapses. Meta-analyses confirm the effectiveness of metacognitive training (MCT) for the improvement of positive symptoms (e.g., delusions).
For people at risk of psychosis, in 2014 the UK National Institute for Health and Care Excellence (NICE) recommended preventive CBT.
=== Schizophrenia ===
INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including schizophrenia.
A Cochrane review reported CBT had "no effect on long‐term risk of relapse" and no additional effect above standard care. A 2015 systematic review investigated the effects of CBT compared with other psychosocial therapies for people with schizophrenia and determined that there is no clear advantage over other, often less expensive, interventions but acknowledged that better quality evidence is needed before firm conclusions can be drawn.
=== Addiction and substance use disorders ===
==== Pathological and problem gambling ====
CBT is also used for pathological and problem gambling. Worldwide, an estimated 1–3% of people engage in problem gambling. Cognitive behavioral therapy develops skills for relapse prevention, and individuals can learn to control their thinking and manage high-risk situations. There is evidence of the efficacy of CBT for treating pathological and problem gambling at immediate follow-up; however, its longer-term efficacy is currently unknown.
==== Smoking cessation ====
CBT looks at the habit of smoking cigarettes as a learned behavior, which later evolves into a coping strategy to handle daily stressors. Since smoking is often easily accessible and quickly allows the user to feel good, it can take precedence over other coping strategies, and eventually work its way into everyday life during non-stressful events as well. CBT aims to target the function of the behavior, as it can vary between individuals, and works to inject other coping mechanisms in place of smoking. CBT also aims to support individuals with strong cravings, which are a major reported reason for relapse during treatment.
A 2008 controlled study out of Stanford University School of Medicine suggested CBT may be an effective tool to help maintain abstinence. The results of 304 random adult participants were tracked over the course of one year. During this program, some participants were provided medication, CBT, 24-hour phone support, or some combination of the three methods. At 20 weeks, the participants who received CBT had a 45% abstinence rate, versus non-CBT participants, who had a 29% abstinence rate. Overall, the study concluded that emphasizing cognitive and behavioral strategies to support smoking cessation can help individuals build tools for long term smoking abstinence.
Mental health history can affect the outcomes of treatment. Individuals with a history of depressive disorders had a lower rate of success when using CBT alone to combat smoking addiction.
A 2019 Cochrane review was unable to find sufficient evidence to differentiate effects between CBT and hypnosis for smoking cessation and highlighted that a review of the current research showed variable results for both modalities.
==== Substance use disorders ====
Studies have shown CBT to be an effective treatment for substance use disorders. For individuals with substance use disorders, CBT aims to reframe maladaptive thoughts, such as denial, minimizing and catastrophizing thought patterns, with healthier narratives. Specific techniques include identifying potential triggers and developing coping mechanisms to manage high-risk situations. Research has shown CBT to be particularly effective when combined with other therapy-based treatments or medication.
INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including alcohol dependency.
==== Internet addiction ====
Research has identified Internet addiction as a new clinical disorder that causes relational, occupational, and social problems. Cognitive behavioral therapy (CBT) has been suggested as the treatment of choice for Internet addiction, and addiction recovery in general has used CBT as part of treatment planning.
=== Eating disorders ===
Though many forms of treatment can support individuals with eating disorders, CBT is proven to be a more effective treatment than medications and interpersonal psychotherapy alone. CBT aims to combat major causes of distress such as negative cognitions surrounding body weight, shape, and size. CBT therapists also work with individuals to regulate strong emotions and thoughts that lead to dangerous compensatory behaviors. CBT is the first line of treatment for bulimia nervosa and non-specific eating disorders. While there is evidence to support the efficacy of CBT for bulimia nervosa and binging, the evidence is somewhat variable and limited by small study sizes. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bulimia and anorexia nervosa.
=== With autistic adults ===
Emerging evidence for cognitive behavioral interventions aimed at reducing symptoms of depression, anxiety, and obsessive-compulsive disorder in autistic adults without intellectual disability has been identified through a systematic review. While the research was focused on adults, cognitive behavioral interventions have also been beneficial to autistic children. A 2021 Cochrane review found limited evidence regarding the efficacy of CBT for obsessive-compulsive disorder in adults with Autism Spectrum Disorder stating a need for further study.
=== Dementia and mild cognitive impairment ===
A Cochrane review in 2022 found that adults with dementia and mild cognitive impairment (MCI) who experience symptoms of depression may benefit from CBT, whereas other counselling or supportive interventions might not improve symptoms significantly. Across 5 different psychometric scales, where higher scores indicate severity of depression, adults receiving CBT reported somewhat lower mood scores than those receiving usual care for dementia and MCI overall. In this review, a sub-group analysis found clinically significant benefits only among those diagnosed with dementia, rather than MCI.
The likelihood of remission from depression also appeared to be 84% higher following CBT, though the evidence for this was less certain. Anxiety, cognition and other neuropsychiatric symptoms were not significantly improved following CBT, however this review did find moderate evidence of improved quality of life and daily living activity scores in those with dementia and MCI.
=== Post-traumatic stress ===
Cognitive behavioral therapy interventions may have some benefits for people who have post-traumatic stress related to surviving rape, sexual abuse, or sexual assault. There is strong evidence that CBT-exposure therapy can reduce PTSD symptoms and lead to the loss of a PTSD diagnosis. In addition, CBT has also been shown to be effective for post-traumatic stress disorder in very young children (3 to 6 years of age). There is lower quality evidence that CBT may be more effective than other psychotherapies in reducing symptoms of posttraumatic stress disorder in children and adolescents.
=== Other uses ===
Evidence suggests a possible role for CBT in the treatment of attention deficit hyperactivity disorder (ADHD), hypochondriasis, and bipolar disorder, but more study is needed and results should be interpreted with caution. Moderate evidence from a 2024 systematic review supports the effectiveness of CBT and neurofeedback as part of psychosocial interventions for improving ADHD symptoms in children and adolescents.
CBT has been studied as an aid in the treatment of anxiety associated with stuttering. Initial studies have shown CBT to be effective in reducing social anxiety in adults who stutter, but not in reducing stuttering frequency.
There is some evidence that CBT is superior in the long-term to benzodiazepines and the nonbenzodiazepines in the treatment and management of insomnia. Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating insomnia. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in insomnia.
A Cochrane review of interventions aimed at preventing psychological stress in healthcare workers found that CBT was more effective than no intervention but no more effective than alternative stress-reduction interventions.
Cochrane Reviews have found no convincing evidence that CBT training helps foster care providers manage difficult behaviors in the youths under their care, nor was it helpful in treating people who abuse their intimate partners.
CBT has been applied in both clinical and non-clinical environments to treat disorders such as personality disorders and behavioral problems. INSERM's 2004 review found that CBT is an effective therapy for personality disorders.
CBT has also been used to minimize chronic pain and to help relieve symptoms in those suffering from irritable bowel syndrome (IBS).
==== Individuals with medical conditions ====
In the case of people with metastatic breast cancer, data is limited but CBT and other psychosocial interventions might help with psychological outcomes and pain management. There is also some evidence that CBT may help reduce insomnia in cancer patients.
There is some evidence that using CBT for symptomatic management of non-specific chest pain is probably effective in the short term. However, the findings were limited by small trials and the evidence was considered of questionable quality. Cochrane reviews have found no evidence that CBT is effective for tinnitus, although there appears to be an effect on management of associated depression and quality of life in this condition. CBT combined with hypnosis and distraction reduces self-reported pain in children.
There is limited evidence to support CBT's use in managing the impact of multiple sclerosis, sleep disturbances related to aging, and dysmenorrhea, but more study is needed and results should be interpreted with caution.
Previously, CBT was considered moderately effective for treating myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS); however, a National Institutes of Health Pathways to Prevention Workshop stated that, with respect to improving treatment options for ME/CFS, the modest benefit from cognitive behavioral therapy should be studied as an adjunct to other methods. The Centers for Disease Control and Prevention's advice on the treatment of ME/CFS makes no reference to CBT, while the National Institute for Health and Care Excellence states that cognitive behavioral therapy has sometimes been assumed to be a cure for ME/CFS but should only be offered to support people who live with ME/CFS in managing their symptoms, improving their functioning, and reducing the distress associated with having a chronic illness.
=== Age ===
CBT is used to help people of all ages, but the therapy should be adjusted based on the age of the patient with whom the therapist is dealing. Older individuals in particular have certain characteristics that need to be acknowledged, and the therapy should be altered to account for these age-related differences. Of the small number of studies examining CBT for the management of depression in older people, there is currently no strong support.
== Description ==
Mainstream cognitive behavioral therapy assumes that changing maladaptive thinking leads to change in behavior and affect, but recent variants emphasize changes in one's relationship to maladaptive thinking rather than changes in thinking itself.
=== Cognitive distortions ===
Therapists use CBT techniques to help people challenge their patterns and beliefs and replace errors in thinking, known as cognitive distortions, with "more realistic and effective thoughts, thus decreasing emotional distress and self-defeating behavior". Cognitive distortions can be either a pseudo-discrimination belief or an overgeneralization of something. CBT techniques may also be used to help individuals take a more open, mindful, and aware posture toward cognitive distortions so as to diminish their impact.
Mainstream CBT helps individuals replace "maladaptive... coping skills, cognitions, emotions and behaviors with more adaptive ones", by challenging an individual's way of thinking and the way that they react to certain habits or behaviors, but there is still controversy about the degree to which these traditional cognitive elements account for the effects seen with CBT over and above the earlier behavioral elements such as exposure and skills training.
=== Assumptions ===
Chaloult, Ngo, Cousineau and Goulet have attempted to identify the main assumptions of cognitive therapy used in CBT based on the research literature (Beck; Walen and Wessler; Beck, Emery and Greenberg, and Auger). They describe fourteen assumptions:
Human emotions are primarily caused by people's thoughts and perceptions rather than events.
Events, thoughts, emotions, behaviors, and physiological reactions influence each other.
Dysfunctional emotions are typically caused by unrealistic thoughts. Reducing dysfunctional emotions requires becoming aware of irrational thoughts and changing them.
Human beings have an innate tendency to develop irrational thoughts. This tendency is reinforced by their environment.
People are largely responsible for their own dysfunctional emotions, as they maintain and reinforce their own beliefs.
Sustained effort is necessary to modify dysfunctional thoughts, emotions, and behaviors.
Rational thinking usually causes a decrease in the frequency, intensity, and duration of dysfunctional emotions, rather than an absence of affect or feelings.
A positive therapeutic relationship is essential to successful cognitive therapy.
Cognitive therapy is based on a teacher-student relationship, where the therapist educates the client.
Cognitive therapy uses Socratic questioning to challenge cognitive distortions.
Homework is an essential aspect of cognitive therapy. It consolidates the skills learned in therapy.
The cognitive approach is active, directed, and structured.
Cognitive therapy is generally short.
Cognitive therapy is based on predictable steps.
These steps largely involve learning about the CBT model; making links between thoughts, emotions, behaviors, and physiological reactions; noticing when dysfunctional emotions occur; learning to question the thoughts associated with these emotions; replacing irrational thoughts with others more grounded in reality; modifying behaviors based on new interpretations of events; and, in some cases, learning to recognize and change the major beliefs and attitudes underlying cognitive distortions.
Chaloult, Ngo, Cousineau and Goulet have also described the assumptions of behavioral therapy as used in CBT. They refer to the work of Agras, Prochaska and Norcross, and Kirk. The assumptions are:
Behaviors play an essential role in the onset, perpetuation and exacerbation of psychopathology.
Learning theory is key in understanding the treatment of mental illness, as behaviors can be learned and unlearned.
A rigorous evaluation (applied behavior analysis) is essential at the start of treatment. It includes identifying behaviors; precipitating, moderating, and perpetuating factors; the consequences of the behaviors; avoidance; and personal resources.
The effectiveness of the treatment is monitored throughout its duration.
Behavior therapy is scientific and the different forms of treatment are evaluated with rigorous evidence.
Behavior therapy is active, directed, and structured.
Together, these sets of assumptions cover the cognitive and behavioral aspects of CBT.
=== Phases in therapy ===
CBT can be seen as having six phases:
Assessment or psychological assessment;
Reconceptualization;
Skills acquisition;
Skills consolidation and application training;
Generalization and maintenance;
Post-treatment assessment follow-up.
These steps are based on a system created by Kanfer and Saslow. After identifying the behaviors that need changing, whether they be in excess or deficit, and treatment has occurred, the psychologist must identify whether or not the intervention succeeded. For example, "If the goal was to decrease the behavior, then there should be a decrease relative to the baseline. If the critical behavior remains at or above the baseline, then the intervention has failed."
The steps in the assessment phase include:
Identify critical behaviors;
Determine whether critical behaviors are excesses or deficits;
Evaluate critical behaviors for frequency, duration, or intensity (obtain a baseline);
If excess, attempt to decrease frequency, duration, or intensity of behaviors; if deficits, attempt to increase behaviors.
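To make the baseline comparison concrete, the following is a minimal sketch (not from the source; the function name, inputs, and single-measurement comparison are illustrative assumptions) of how an excess or deficit behavior might be judged against its baseline:

```python
def intervention_succeeded(baseline, post_treatment, target="decrease"):
    """Judge a behavioral intervention against its baseline.

    baseline, post_treatment: measured frequency (or duration/intensity)
    of the critical behavior before and after treatment.
    target: "decrease" for excess behaviors, "increase" for deficits.
    """
    if target == "decrease":
        # For an excess behavior, success means dropping below baseline.
        return post_treatment < baseline
    if target == "increase":
        # For a deficit behavior, success means rising above baseline.
        return post_treatment > baseline
    raise ValueError("target must be 'decrease' or 'increase'")

# Example: an excess behavior occurred 12 times/week at baseline
# and 5 times/week after treatment.
print(intervention_succeeded(12, 5, target="decrease"))  # True
```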
The re-conceptualization phase makes up much of the "cognitive" portion of CBT.
=== Delivery protocols ===
There are different protocols for delivering cognitive behavioral therapy, with important similarities among them. Use of the term CBT may refer to different interventions, including "self-instructions (e.g. distraction, imagery, motivational self-talk), relaxation and/or biofeedback, development of adaptive coping strategies (e.g. minimizing negative or self-defeating thoughts), changing maladaptive beliefs about pain, and goal setting". Treatment is sometimes manualized, with brief, direct, and time-limited treatments for individual psychological disorders that are driven by specific techniques. CBT is used in both individual and group settings, and the techniques are often adapted for self-help applications. Some clinicians and researchers are cognitively oriented (e.g. cognitive restructuring), while others are more behaviorally oriented (e.g. in vivo exposure therapy). Interventions such as imaginal exposure therapy combine both approaches.
=== Related techniques ===
CBT may be delivered in conjunction with a variety of diverse but related techniques such as exposure therapy, stress inoculation, cognitive processing therapy, cognitive therapy, metacognitive therapy, metacognitive training, relaxation training, dialectical behavior therapy, and acceptance and commitment therapy. Some practitioners promote a form of mindful cognitive therapy which includes a greater emphasis on self-awareness as part of the therapeutic process.
== Methods of access ==
=== Therapist ===
A typical CBT program would consist of face-to-face sessions between patient and therapist, made up of 6–18 sessions of around an hour each with a gap of 1–3 weeks between sessions. This initial program might be followed by some booster sessions, for instance after one month and three months. CBT has also been found to be effective if patient and therapist type in real time to each other over computer links.
Cognitive-behavioral therapy is most closely allied with the scientist–practitioner model in which clinical practice and research are informed by a scientific perspective, clear operationalization of the problem, and an emphasis on measurement, including measuring changes in cognition and behavior and the attainment of goals. These are often met through "homework" assignments in which the patient and the therapist work together to craft an assignment to complete before the next session. The completion of these assignments – which can be as simple as a person with depression attending some kind of social event – indicates a dedication to treatment compliance and a desire to change. The therapists can then logically gauge the next step of treatment based on how thoroughly the patient completes the assignment. Effective cognitive behavioral therapy is dependent on a therapeutic alliance between the healthcare practitioner and the person seeking assistance. Unlike many other forms of psychotherapy, the patient is very involved in CBT. For example, an anxious patient may be asked to talk to a stranger as a homework assignment, but if that is too difficult, he or she can work out an easier assignment first. The therapist needs to be flexible and willing to listen to the patient rather than acting as an authority figure.
=== Computerized or Internet-delivered (CCBT) ===
Computerized cognitive behavioral therapy (CCBT) has been described by NICE as a "generic term for delivering CBT via an interactive computer interface delivered by a personal computer, internet, or interactive voice response system", instead of face-to-face with a human therapist. It is also known as internet-delivered cognitive behavioral therapy or ICBT. CCBT has potential to improve access to evidence-based therapies, and to overcome the prohibitive costs and lack of availability sometimes associated with retaining a human therapist. In this context, it is important not to confuse CBT with 'computer-based training', which nowadays is more commonly referred to as e-Learning.
Although improvements in both research quality and treatment adherence are required before advocating for the global dissemination of CCBT, it has been found in meta-studies to be cost-effective and often cheaper than usual care, including for anxiety and PTSD. Studies have shown that individuals with social anxiety and depression experienced improvement with online CBT-based methods. A study assessing an online version of CBT for people with mild-to-moderate PTSD found that the online approach was as effective as, and cheaper than, the same therapy given face-to-face. A review of current CCBT research in the treatment of OCD in children found this interface to hold great potential for future treatment of OCD in youths and adolescent populations. Additionally, most internet interventions for post-traumatic stress disorder use CCBT. CCBT is also well suited to treating mood disorders amongst non-heterosexual populations, who may avoid face-to-face therapy for fear of stigma. However, at present CCBT programs seldom cater to these populations.
In February 2006 NICE recommended that CCBT be made available for use within the NHS across England and Wales for patients presenting with mild-to-moderate depression, rather than immediately opting for antidepressant medication, and CCBT is made available by some health systems. The 2009 NICE guideline recognized that there are likely to be a number of computerized CBT products that are useful to patients, but removed endorsement of any specific product.
=== Smartphone app-delivered ===
Another new method of access is the use of mobile or smartphone applications to deliver self-help or guided CBT. Technology companies are developing mobile-based artificial intelligence chatbot applications to deliver CBT as an early intervention to support mental health, build psychological resilience, and promote emotional well-being. Artificial intelligence (AI) text-based conversational applications delivered securely and privately over smartphone devices have the ability to scale globally and offer contextual, always-available support. Active research is underway, including real-world data studies that measure the effectiveness and engagement of text-based smartphone chatbot apps delivering CBT through a conversational interface. Recent market research and analysis of over 500 online mental healthcare solutions identified three key challenges in this market: quality of the content, guidance of the user, and personalisation.
A study compared CBT alone with a mindfulness-based therapy combined with CBT, both delivered via an app. It found that mindfulness-based self-help reduced the severity of depression more than CBT self-help in the short-term. Overall, NHS costs for the mindfulness approach were £500 less per person than for CBT.
=== Reading self-help materials ===
Enabling patients to read self-help CBT guides has been shown to be effective by some studies. However, one study found a negative effect in patients who tended to ruminate, and another meta-analysis found that the benefit was only significant when the self-help was guided (e.g. by a medical professional).
=== Group educational course ===
Patient participation in group courses has been shown to be effective. In a meta-analysis reviewing evidence-based treatment of OCD in children, individual CBT was found to be more efficacious than group CBT.
== Types ==
=== Brief cognitive behavioral therapy ===
Brief cognitive behavioral therapy (BCBT) is a form of CBT developed for situations in which there are time constraints on the therapy sessions, and specifically for those struggling with suicidal ideation and/or making suicide attempts. BCBT was based on Rudd's proposed "suicidal mode", an elaboration of Beck's modal theory. By design, BCBT takes place over a small number of sessions totalling up to 12 hours. The technique was developed and first implemented with soldiers on active duty by Dr. M. David Rudd to prevent suicide.
Breakdown of treatment:
Orientation: commitment to treatment; crisis response and safety planning; means restriction; survival kit; reasons for living card; model of suicidality; treatment journal; lessons learned.
Skill focus: skill development worksheets; coping cards; demonstration; practice; skill refinement.
Relapse prevention: skill generalization; skill refinement.
=== Cognitive emotional behavioral therapy ===
Cognitive emotional behavioral therapy (CEBT) is a form of CBT developed initially for individuals with eating disorders but now used with a range of problems including anxiety, depression, obsessive compulsive disorder (OCD), post-traumatic stress disorder (PTSD) and anger problems. It combines aspects of CBT and dialectical behavioral therapy and aims to improve understanding and tolerance of emotions in order to facilitate the therapeutic process. It is frequently used as a "pretreatment" to prepare and better equip individuals for longer-term therapy.
=== Structured cognitive behavioral training ===
Structured cognitive-behavioral training (SCBT) is a cognitive-based process with core philosophies that draw heavily from CBT. Like CBT, SCBT asserts that behavior is inextricably related to beliefs, thoughts, and emotions. SCBT also builds on core CBT philosophy by incorporating other well-known modalities in the fields of behavioral health and psychology: most notably, Albert Ellis's rational emotive behavior therapy. SCBT differs from CBT in two distinct ways. First, SCBT is delivered in a highly regimented format. Second, SCBT is a predetermined and finite training process that becomes personalized by the input of the participant. SCBT is designed to bring a participant to a specific result in a specific period of time. SCBT has been used to challenge addictive behavior, particularly with substances such as tobacco, alcohol and food, and to manage diabetes and subdue stress and anxiety. SCBT has also been used in the field of criminal psychology in the effort to reduce recidivism.
=== Moral reconation therapy ===
Moral reconation therapy, a type of CBT used to help felons overcome antisocial personality disorder (ASPD), slightly decreases the risk of further offending. It is generally implemented in a group format, because of the risk that one-on-one therapy may reinforce narcissistic behavioral characteristics in offenders with ASPD, and can be used in correctional or outpatient settings. Groups usually meet weekly for two to six months.
=== Stress inoculation training ===
This type of therapy uses a blend of cognitive, behavioral, and certain humanistic training techniques to target the stressors of the client. It is usually used to help clients better cope with their stress or anxiety after stressful events. This is a three-phase process that trains the client to use skills that they already have to better adapt to their current stressors. The first phase is an interview phase that includes psychological testing, client self-monitoring, and a variety of reading materials. This allows the therapist to individually tailor the training process to the client. Clients learn how to categorize problems as emotion-focused or problem-focused so that they can better treat their negative situations. This phase ultimately prepares the client to eventually confront and reflect upon their current reactions to stressors, before looking at ways to change their reactions and emotions to their stressors. The focus of this phase is conceptualization.
The second phase emphasizes the aspect of skills acquisition and rehearsal that continues from the earlier phase of conceptualization. The client is taught skills that help them cope with their stressors. These skills are then practiced in the space of therapy. These skills involve self-regulation, problem-solving, interpersonal communication skills, etc.
The third and final phase is the application and following through of the skills learned in the training process. This gives the client opportunities to apply their learned skills to a wide range of stressors. Activities include role-playing, imagery, modeling, etc. In the end, the client will have been trained on a preventive basis to inoculate themselves against personal, chronic, and future stressors by breaking their stressors down into problems that they address through long-term, short-term, and intermediate coping goals.
=== Activity-guided CBT: Group-knitting ===
A recently developed group therapy model based on CBT integrates knitting into the therapeutic process and has yielded reliable and promising results. The foundation for this novel approach to CBT is the frequently emphasized notion that therapy success depends on how embedded the therapy method is in the patients' natural routine. Similar to standard group-based CBT, patients meet once a week in a group of 10 to 15 patients and knit together under the instruction of a trained psychologist or mental health professional. Central to the therapy is the patient's imaginative ability to assign each part of the wool to a certain thought. During the therapy, the wool is carefully knitted, creating a knitted piece of any form. This therapeutic process teaches the patient to meaningfully align thoughts by (physically) creating a coherent knitted piece. Moreover, since CBT frames behavior as a result of cognition, the knitting illustrates how thoughts (imaginatively tied to the wool) materialize into the reality surrounding us.
=== Mindfulness-based cognitive behavioral hypnotherapy ===
Mindfulness-based cognitive behavioral hypnotherapy (MCBH) is a form of CBT that focuses on awareness through a reflective approach, addressing subconscious tendencies. It is a three-phase process for achieving desired goals that integrates the principles of mindfulness and cognitive-behavioral techniques with the transformative potential of hypnotherapy.
=== Unified Protocol ===
The Unified Protocol for Transdiagnostic Treatment of Emotional Disorders (UP) is a form of CBT, developed by David H. Barlow and researchers at Boston University, that can be applied to a range of anxiety disorders. The rationale is that anxiety and depression disorders often occur together due to common underlying causes and can efficiently be treated together.
The UP includes a common set of components:
Psycho-education
Cognitive reappraisal
Emotion regulation
Changing behaviour
The UP has been shown to produce equivalent results to single-diagnosis protocols for specific disorders, such as OCD and social anxiety disorder.
Several studies have shown that the UP is easier to disseminate as compared to single-diagnosis protocols.
=== Culturally adapted CBT ===
The study of psychotherapy across races, religions, and cultures, or "ethno-psycho-therapy", is a relatively new discipline.
== Criticisms ==
=== Relative effectiveness ===
The research conducted for CBT has been a topic of sustained controversy. While some researchers write that CBT is more effective than other treatments, many other researchers and practitioners have questioned the validity of such claims. For example, one study determined CBT to be superior to other treatments in treating anxiety and depression. However, researchers responding directly to that study conducted a re-analysis and found no evidence of CBT being superior to other bona fide treatments and conducted an analysis of thirteen other CBT clinical trials and determined that they failed to provide evidence of CBT superiority. In cases where CBT has been reported to be statistically better than other psychological interventions in terms of primary outcome measures, effect sizes were small and suggested that those differences were clinically meaningless and insignificant. Moreover, on secondary outcomes (i.e., measures of general functioning) no significant differences have been typically found between CBT and other treatments.
A major criticism has been that clinical studies of CBT efficacy (or any psychotherapy) are not double-blind (i.e., either the subjects or the therapists in psychotherapy studies are not blind to the type of treatment). They may be single-blinded, i.e. the rater may not know the treatment the patient received, but neither the patients nor the therapists are blinded to the type of therapy given (two out of three of the persons involved in the trial, i.e., all of the persons involved in the treatment, are unblinded). The patient is an active participant in correcting negative distorted thoughts, thus quite aware of the treatment group they are in.
The importance of double-blinding was shown in a meta-analysis that examined the effectiveness of CBT when placebo control and blindness were factored in. Pooled data from published trials of CBT in schizophrenia, major depressive disorder (MDD), and bipolar disorder that used controls for non-specific effects of intervention were analyzed. This study concluded that CBT is no better than non-specific control interventions in the treatment of schizophrenia and does not reduce relapse rates; treatment effects are small in treatment studies of MDD, and it is not an effective treatment strategy for prevention of relapse in bipolar disorder. For MDD, the authors note that the pooled effect size was very low.
=== Declining effectiveness ===
Additionally, a 2015 meta-analysis revealed that the positive effects of CBT on depression have been declining since 1977. The overall results showed two different declines in effect sizes: 1) an overall decline between 1977 and 2014, and 2) a steeper decline between 1995 and 2014. Additional sub-analysis revealed that CBT studies where therapists in the test group were instructed to adhere to the Beck CBT manual had a steeper decline in effect sizes since 1977 than studies where therapists in the test group were instructed to use CBT without a manual. The authors reported that they were unsure why the effects were declining but did list inadequate therapist training, failure to adhere to a manual, lack of therapist experience, and patients' hope and faith in its efficacy waning as potential reasons. The authors did mention that the current study was limited to depressive disorders only.
=== High drop-out rates ===
Furthermore, other researchers write that CBT studies have high drop-out rates compared to other treatments. One meta-analysis found that CBT drop-out rates were 17% higher than those of other therapies. This high drop-out rate is also evident in the treatment of several disorders, particularly the eating disorder anorexia nervosa, which is commonly treated with CBT. Those treated with CBT have a high chance of dropping out of therapy before completion and reverting to their anorexia behaviors.
Other researchers analyzing treatments for youths who self-injure found similar drop-out rates in CBT and DBT groups. In this study, the researchers analyzed several clinical trials that measured the efficacy of CBT administered to youths who self-injure. The researchers concluded that none of them were found to be efficacious.
=== Philosophical concerns with CBT methods ===
The methods employed in CBT research have not been the only criticisms; some individuals have called its theory and therapy into question.
Slife and Williams write that one of the hidden assumptions in CBT is that of determinism, or the absence of free will. They argue that CBT holds that external stimuli from the environment enter the mind, causing different thoughts that cause emotional states: nowhere in CBT theory is agency, or free will, accounted for.
Another criticism of CBT theory, especially as applied to major depressive disorder (MDD), is that it confounds the symptoms of the disorder with its causes.
=== Side effects ===
CBT is generally regarded as having very few if any side effects. Calls have been made by some for more appraisal of possible side effects of CBT. Many randomized trials of psychological interventions like CBT do not monitor potential harms to the patient. In contrast, randomized trials of pharmacological interventions are much more likely to take adverse effects into consideration.
A 2017 meta-analysis revealed that adverse events are not common in children receiving CBT and, furthermore, that CBT is associated with fewer dropouts than either placebo or medications. Nevertheless, CBT therapists do sometimes report 'unwanted events' and side effects in their outpatients with "negative wellbeing/distress" being the most frequent.
=== Socio-political concerns ===
The writer and group analyst Farhad Dalal questions the socio-political assumptions behind the introduction of CBT. According to one reviewer, Dalal connects the rise of CBT with "the parallel rise of neoliberalism, with its focus on marketization, efficiency, quantification and managerialism", and he questions the scientific basis of CBT, suggesting that "the 'science' of psychological treatment is often less a scientific than a political contest". In his book, Dalal also questions the ethical basis of CBT.
== Society and culture ==
The UK's National Health Service announced in 2008 that more therapists would be trained to provide CBT at government expense as part of an initiative called Improving Access to Psychological Therapies (IAPT). NICE said that CBT would become the mainstay of treatment for non-severe depression, with medication used only in cases where CBT had failed. Therapists complained that the data does not fully support the attention and funding CBT receives. Psychotherapist and professor Andrew Samuels stated that this constitutes "a coup, a power play by a community that has suddenly found itself on the brink of corralling an enormous amount of money ... Everyone has been seduced by CBT's apparent cheapness."
The UK Council for Psychotherapy issued a press release in 2012 saying that the IAPT's policies were undermining traditional psychotherapy and criticized proposals that would limit some approved therapies to CBT, claiming that they restricted patients to "a watered-down version of cognitive behavioural therapy (CBT), often delivered by very lightly trained staff".
== External links ==
Association for Behavioral and Cognitive Therapies (ABCT)
British Association for Behavioural and Cognitive Psychotherapies
National Association of Cognitive-Behavioral Therapists
International Association of Cognitive Psychotherapy
Information on Research-based CBT Treatments
The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.
The cognitive neuroscience of music represents a significant branch of music psychology, and is distinguished from related fields such as cognitive musicology in its reliance on direct observations of the brain and use of brain imaging techniques like functional magnetic resonance imaging (fMRI) and positron emission tomography (PET).
== Elements of music ==
=== Pitch ===
Sounds consist of waves of air molecules that vibrate at different frequencies. These waves travel to the basilar membrane in the cochlea of the inner ear. Different frequencies of sound will cause vibrations in different locations of the basilar membrane. We are able to hear different pitches because each sound wave with a unique frequency is correlated to a different location along the basilar membrane. This spatial arrangement of sounds and their respective frequencies being processed in the basilar membrane is known as tonotopy.
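As a purely illustrative aside (not drawn from the source), this place–frequency relationship is often approximated by the Greenwood function; the sketch below assumes the commonly cited human parameter values, which should be treated as approximations:

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Approximate characteristic frequency (Hz) at a point on the
    basilar membrane, where x is the fractional distance from the
    apex (0.0) to the base (1.0). Parameter values are those commonly
    cited for the human cochlea and are assumptions here.
    """
    return A * (10 ** (a * x) - k)

# Low frequencies map near the apex, high frequencies near the base.
for x in (0.1, 0.5, 0.9):
    print(f"x = {x:.1f} -> ~{greenwood_frequency(x):.0f} Hz")
```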
When the hair cells on the basilar membrane move back and forth due to the vibrating sound waves, they release neurotransmitters and cause action potentials to occur down the auditory nerve. The auditory nerve then leads to several layers of synapses at numerous clusters of neurons, or nuclei, in the auditory brainstem. These nuclei are also tonotopically organized, and the process of achieving this tonotopy after the cochlea is not yet well understood. This tonotopy is in general maintained up to primary auditory cortex in mammals.
A widely postulated mechanism for pitch processing in the early central auditory system is the phase-locking and mode-locking of action potentials to frequencies in a stimulus. Phase-locking to stimulus frequencies has been shown in the auditory nerve, the cochlear nucleus, the inferior colliculus, and the auditory thalamus. By phase- and mode-locking in this way, the auditory brainstem is known to preserve a good deal of the temporal and low-passed frequency information from the original sound; this is evident by measuring the auditory brainstem response using EEG. This temporal preservation is one way to argue directly for the temporal theory of pitch perception, and to argue indirectly against the place theory of pitch perception.
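One standard way to quantify how strongly a recorded signal is phase-locked to a stimulus frequency is the phase-locking value. The sketch below is a generic illustration of that technique (not the analysis pipeline of any study cited here), assuming both signals have already been band-limited around the frequency of interest:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(signal, stimulus):
    """Phase-locking value between a neural signal and a stimulus.

    Both inputs are 1-D arrays of equal length, ideally band-pass
    filtered around the frequency of interest beforehand.
    Returns a value in [0, 1]; 1 means a perfectly constant phase lag.
    """
    phase_signal = np.angle(hilbert(signal))
    phase_stim = np.angle(hilbert(stimulus))
    return np.abs(np.mean(np.exp(1j * (phase_signal - phase_stim))))

# Toy example: a noisy response that follows a 40 Hz stimulus.
fs = 1000
t = np.arange(0, 1, 1 / fs)
stimulus = np.sin(2 * np.pi * 40 * t)
response = np.sin(2 * np.pi * 40 * t + 0.5) + 0.3 * np.random.randn(t.size)
print(phase_locking_value(response, stimulus))  # high, close to 1
```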
The right secondary auditory cortex has finer pitch resolution than the left. Hyde, Peretz and Zatorre (2008) used functional magnetic resonance imaging (fMRI) in their study to test the involvement of right and left auditory cortical regions in the frequency processing of melodic sequences. As well as finding superior pitch resolution in the right secondary auditory cortex, specific areas found to be involved were the planum temporale (PT) in the secondary auditory cortex, and the primary auditory cortex in the medial section of Heschl's gyrus (HG).
Many neuroimaging studies have found evidence of the importance of right secondary auditory regions in aspects of musical pitch processing, such as melody. Many of these studies, such as one by Patterson, Uppenkamp, Johnsrude and Griffiths (2002), also find evidence of a hierarchy of pitch processing. In an fMRI study, Patterson et al. (2002) used spectrally matched sounds that produced no pitch, fixed pitch, or melody, and found that all conditions activated HG and PT. Sounds with pitch activated more of these regions than sounds without. When a melody was produced, activation spread to the superior temporal gyrus (STG) and planum polare (PP). These results support the existence of a pitch processing hierarchy.
==== Absolute pitch ====
Absolute pitch (AP) is defined as the ability to identify the pitch of a musical tone or to produce a musical tone at a given pitch without the use of an external reference pitch. Neuroscientific research has not discovered a distinct activation pattern common for possessors of AP. Zatorre, Perry, Beckett, Westbury and Evans (1998) examined the neural foundations of AP using functional and structural brain imaging techniques. Positron emission tomography (PET) was utilized to measure cerebral blood flow (CBF) in musicians possessing AP and musicians lacking AP. When presented with musical tones, similar patterns of increased CBF in auditory cortical areas emerged in both groups. AP possessors and non-AP subjects demonstrated similar patterns of left dorsolateral frontal activity when they performed relative pitch judgments. However, in non-AP subjects activation in the right inferior frontal cortex was present whereas AP possessors showed no such activity. This finding suggests that musicians with AP do not need access to working memory devices for such tasks. These findings imply that there is no specific regional activation pattern unique to AP. Rather, the availability of specific processing mechanisms and task demands determine the recruited neural areas.
=== Melody ===
Studies suggest that individuals are capable of automatically detecting a difference or anomaly in a melody such as an out of tune pitch which does not fit with their previous music experience. This automatic processing occurs in the secondary auditory cortex. Brattico, Tervaniemi, Naatanen, and Peretz (2006) performed one such study to determine if the detection of tones that do not fit an individual's expectations can occur automatically. They recorded event-related potentials (ERPs) in nonmusicians as they were presented unfamiliar melodies with either an out of tune pitch or an out of key pitch while participants were either distracted from the sounds or attending to the melody. Both conditions revealed an early frontal error-related negativity independent of where attention was directed. This negativity originated in the auditory cortex, more precisely in the supratemporal lobe (which corresponds with the secondary auditory cortex) with greater activity from the right hemisphere. The negativity response was larger for pitch that was out of tune than that which was out of key. Ratings of musical incongruity were higher for out of tune pitch melodies than for out of key pitch. In the focused attention condition, out of key and out of tune pitches produced late parietal positivity. The findings of Brattico et al. (2006) suggest that there is automatic and rapid processing of melodic properties in the secondary auditory cortex. The findings that pitch incongruities were detected automatically, even in processing unfamiliar melodies, suggests that there is an automatic comparison of incoming information with long term knowledge of musical scale properties, such as culturally influenced rules of musical properties (common chord progressions, scale patterns, etc.) and individual expectations of how the melody should proceed.
=== Rhythm ===
The belt and parabelt areas of the right hemisphere are involved in processing rhythm. Rhythm is a strong repeated pattern of movement or sound. When individuals are preparing to tap out a rhythm of regular intervals (1:2 or 1:3) the left frontal cortex, left parietal cortex, and right cerebellum are all activated. With more difficult rhythms such as a 1:2.5, more areas in the cerebral cortex and cerebellum are involved. EEG recordings have also shown a relationship between brain electrical activity and rhythm perception. Snyder and Large (2005) performed a study examining rhythm perception in human subjects, finding that activity in the gamma band (20 – 60 Hz) corresponds to the beats in a simple rhythm. Two types of gamma activity were found by Snyder & Large: induced gamma activity, and evoked gamma activity. Evoked gamma activity was found after the onset of each tone in the rhythm; this activity was found to be phase-locked (peaks and troughs were directly related to the exact onset of the tone) and did not appear when a gap (missed beat) was present in the rhythm. Induced gamma activity, which was not found to be phase-locked, was also found to correspond with each beat. However, induced gamma activity did not subside when a gap was present in the rhythm, indicating that induced gamma activity may possibly serve as a sort of internal metronome independent of auditory input.
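As a hedged illustration of the kind of band-limited analysis described above (a generic sketch, not Snyder and Large's actual method), gamma-band activity in the 20–60 Hz range can be isolated from an EEG trace with a zero-phase band-pass filter:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def gamma_band(eeg, fs, low=20.0, high=60.0, order=4):
    """Band-pass filter an EEG trace to the 20-60 Hz gamma range.

    eeg: 1-D array of samples; fs: sampling rate in Hz.
    Returns the zero-phase filtered signal.
    """
    nyquist = fs / 2.0
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, eeg)

# Toy usage: 2 s of synthetic EEG sampled at 500 Hz.
fs = 500
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
gamma = gamma_band(eeg, fs)  # retains mostly the 40 Hz component
```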
=== Tonality ===
Tonality describes the relationships between the elements of melody and harmony – tones, intervals, chords, and scales. These relationships are often characterized as hierarchical, such that one of the elements dominates or attracts another. They occur both within and between every type of element, creating a rich and time-varying perception between tones and their melodic, harmonic, and chromatic contexts. In one conventional sense, tonality refers to just the major and minor scale types – examples of scales whose elements are capable of maintaining a consistent set of functional relationships. The most important functional relationship is that of the tonic note (the first note in a scale) and the tonic chord (the first note in the scale with the third and fifth note) with the rest of the scale. The tonic is the element which tends to assert its dominance and attraction over all others, and it functions as the ultimate point of attraction, rest and resolution for the scale.
The right auditory cortex is primarily involved in perceiving pitch, and parts of harmony, melody and rhythm. One study by Petr Janata found that there are tonality-sensitive areas in the medial prefrontal cortex, the cerebellum, the superior temporal sulci of both hemispheres and the superior temporal gyri (which has a skew towards the right hemisphere). Hemispheric asymmetries in the processing of dissonant/consonant sounds have been demonstrated. ERP studies have shown larger evoked responses over the left temporal area in response to dissonant chords, and over the right one, in response to consonant chords.
== Music production and performance ==
=== Motor control functions ===
Musical performance usually involves at least three elementary motor control functions: timing, sequencing, and spatial organization of motor movements. Accuracy in timing of movements is related to musical rhythm. Rhythm, the pattern of temporal intervals within a musical measure or phrase, in turn creates the perception of stronger and weaker beats. Sequencing and spatial organization relate to the expression of individual notes on a musical instrument.
These functions and their neural mechanisms have been investigated separately in many studies, but little is known about their combined interaction in producing a complex musical performance. The study of music requires examining them together.
==== Timing ====
Although neural mechanisms involved in timing movement have been studied rigorously over the past 20 years, much remains controversial. The ability to phrase movements in precise time has been attributed to a neural metronome or clock mechanism in which time is represented through oscillations or pulses. An opposing view holds that timing is instead an emergent property of the kinematics of movement itself. Kinematics is defined as parameters of movement through space without reference to forces (for example, direction, velocity and acceleration).
Functional neuroimaging studies, as well as studies of brain-damaged patients, have linked movement timing to several cortical and sub-cortical regions, including the cerebellum, basal ganglia and supplementary motor area (SMA). Specifically the basal ganglia and possibly the SMA have been implicated in interval timing at longer timescales (1 second and above), while the cerebellum may be more important for controlling motor timing at shorter timescales (milliseconds). Furthermore, these results indicate that motor timing is not controlled by a single brain region, but by a network of regions that control specific parameters of movement and that depend on the relevant timescale of the rhythmic sequence.
==== Sequencing ====
Motor sequencing has been explored in terms of either the ordering of individual movements, such as finger sequences for key presses, or the coordination of subcomponents of complex multi-joint movements. Implicated in this process are various cortical and sub-cortical regions, including the basal ganglia, the SMA and the pre-SMA, the cerebellum, and the premotor and prefrontal cortices, all involved in the production and learning of motor sequences but without explicit evidence of their specific contributions or interactions amongst one another. In animals, neurophysiological studies have demonstrated an interaction between the frontal cortex and the basal ganglia during the learning of movement sequences. Human neuroimaging studies have also emphasized the contribution of the basal ganglia for well-learned sequences.
The cerebellum is arguably important for sequence learning and for the integration of individual movements into unified sequences, while the pre-SMA and SMA have been shown to be involved in organizing or chunking of more complex movement sequences.
Chunking, defined as the re-organization or re-grouping of movement sequences into smaller sub-sequences during performance, is thought to facilitate the smooth performance of complex movements and to improve motor memory.
Lastly, the premotor cortex has been shown to be involved in tasks that require the production of relatively complex sequences, and it may contribute to motor prediction.
==== Spatial organization ====
Few studies of complex motor control have distinguished between sequential and spatial organization, yet expert musical performances demand not only precise sequencing but also spatial organization of movements. Studies in animals and humans have established the involvement of parietal, sensory–motor and premotor cortices in the control of movements, when the integration of spatial, sensory and motor information is required. Few studies so far have explicitly examined the role of spatial processing in the context of musical tasks.
=== Auditory-motor interactions ===
==== Feedforward and feedback interactions ====
An auditory–motor interaction may be loosely defined as any engagement of or communication between the two systems. Two classes of auditory-motor interaction are "feedforward" and "feedback". In feedforward interactions, it is the auditory system that predominately influences the motor output, often in a predictive way. An example is the phenomenon of tapping to the beat, where the listener anticipates the rhythmic accents in a piece of music. Another example is the effect of music on movement disorders: rhythmic auditory stimuli have been shown to improve walking ability in Parkinson's disease and stroke patients.
Feedback interactions are particularly relevant in playing an instrument such as a violin, or in singing, where pitch is variable and must be continuously controlled. If auditory feedback is blocked, musicians can still execute well-rehearsed pieces, but expressive aspects of performance are affected. When auditory feedback is experimentally manipulated by delays or distortions, motor performance is significantly altered: asynchronous feedback disrupts the timing of events, whereas alteration of pitch information disrupts the selection of appropriate actions, but not their timing. This suggests that disruptions occur because both actions and percepts depend on a single underlying mental representation.
==== Models of auditory–motor interactions ====
Several models of auditory–motor interactions have been advanced. The model of Hickok and Poeppel, which is specific for speech processing, proposes that a ventral auditory stream maps sounds onto meaning, whereas a dorsal stream maps sounds onto articulatory representations. They and others suggest that posterior auditory regions at the parieto-temporal boundary are crucial parts of the auditory–motor interface, mapping auditory representations onto motor representations of speech, and onto melodies.
==== Mirror/echo neurons and auditory–motor interactions ====
The mirror neuron system has an important role in neural models of sensory–motor integration. There is considerable evidence that neurons respond to both actions and the accumulated observation of actions. A system proposed to explain this understanding of actions is that visual representations of actions are mapped onto our own motor system.
Some mirror neurons are activated both by the observation of goal-directed actions, and by the associated sounds produced during the action. This suggests that the auditory modality can access the motor system. While these auditory–motor interactions have mainly been studied for speech processes, and have focused on Broca's area and the vPMC, as of 2011, experiments have begun to shed light on how these interactions are needed for musical performance. Results point to a broader involvement of the dPMC and other motor areas. The literature has shown a highly specialized cortical network in the skilled musician's brain that codes the relationship between musical gestures and their corresponding sounds. The data hint at the existence of an audiomotor mirror network involving the right superior temporal gyrus, the premotor cortex, the inferior frontal and inferior parietal areas, among other areas.
== Music and language ==
Certain aspects of language and melody have been shown to be processed in near identical functional brain areas. Brown, Martinez and Parsons (2006) examined the neurological structural similarities between music and language. Utilizing positron emission tomography (PET), the findings showed that both linguistic and melodic phrases produced activation in almost identical functional brain areas. These areas included the primary motor cortex, supplementary motor area, Broca's area, anterior insula, primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus and posterior cerebellum. Differences were found in lateralization tendencies as language tasks favoured the left hemisphere, but the majority of activations were bilateral which produced significant overlap across modalities.
Syntactical information mechanisms in both music and language have been shown to be processed similarly in the brain. Jentschke, Koelsch, Sallat and Friederici (2008) conducted a study investigating the processing of music in children with specific language impairments (SLI). Children with typical language development (TLD) showed ERP patterns different from those of children with SLI, which reflected their challenges in processing music-syntactic regularities. Strong correlations between the ERAN (Early Right Anterior Negativity—a specific ERP measure) amplitude and linguistic and musical abilities provide additional evidence for the relationship of syntactical processing in music and language.
However, production of melody and production of speech may be subserved by different neural networks. Stewart, Walsh, Frith and Rothwell (2001) studied the differences between speech production and song production using transcranial magnetic stimulation (TMS). Stewart et al. found that TMS applied to the left frontal lobe disturbs speech but not melody, supporting the idea that they are subserved by different areas of the brain. The authors suggest that a reason for the difference is that speech generation can be localized well but the underlying mechanisms of melodic production cannot. Alternatively, it was also suggested that speech production may be less robust than melodic production and thus more susceptible to interference.
Language processing is a function more of the left side of the brain than the right side, particularly Broca's area and Wernicke's area, though the roles played by the two sides of the brain in processing different aspects of language are still unclear. Music is also processed by both the left and the right sides of the brain. Recent evidence further suggests shared processing between language and music at the conceptual level. It has also been found that, among music conservatory students, the prevalence of absolute pitch is much higher for speakers of tone languages, even controlling for ethnic background, showing that language influences how musical tones are perceived.
== Musician vs. non-musician processing ==
=== Differences ===
Brain structure within musicians and non-musicians is distinctly different. Gaser and Schlaug (2003) compared brain structures of professional musicians with non-musicians and discovered gray matter volume differences in motor, auditory and visual-spatial brain regions. Specifically, positive correlations were discovered between musician status (professional, amateur and non-musician) and gray matter volume in the primary motor and somatosensory areas, premotor areas, anterior superior parietal areas and in the inferior temporal gyrus bilaterally. This strong association between musician status and gray matter differences supports the notion that musicians' brains show use-dependent structural changes. Due to the distinct differences in several brain regions, it is unlikely that these differences are innate but rather due to the long-term acquisition and repetitive rehearsal of musical skills.
Brains of musicians also show functional differences from those of non-musicians. Krings, Topper, Foltys, Erberich, Sparing, Willmes and Thron (2000) utilized fMRI to study brain area involvement of professional pianists and a control group while performing complex finger movements. Krings et al. found that the professional piano players showed lower levels of cortical activation in motor areas of the brain. It was concluded that fewer neurons needed to be activated in the piano players because of long-term motor practice, which results in different cortical activation patterns. Koeneke, Lutz, Wustenberg and Jancke (2004) reported similar findings in keyboard players. Skilled keyboard players and a control group performed complex tasks involving unimanual and bimanual finger movements. During task conditions, strong hemodynamic responses in the cerebellum were shown by both non-musicians and keyboard players, but non-musicians showed the stronger response. This finding indicates that different cortical activation patterns emerge from long-term motor practice. This evidence supports previous data showing that musicians require fewer neurons to perform the same movements.
Musicians have been shown to have a significantly more developed left planum temporale, and have also been shown to have greater word memory. Chan's study controlled for age, grade point average and years of education, and found that when given a 16-word memory test, the musicians averaged one to two words more than their non-musical counterparts.
=== Similarities ===
Studies have shown that the human brain has an implicit musical ability. Koelsch, Gunter, Friederici and Schoger (2000) investigated the influence of preceding musical context, task relevance of unexpected chords and the degree of probability of violation on music processing in both musicians and non-musicians. Findings showed that the human brain unintentionally extrapolates expectations about impending auditory input. Even in non-musicians, the extrapolated expectations are consistent with music theory. The ability to process information musically supports the idea of an implicit musical ability in the human brain. In a follow-up study, Koelsch, Schroger, and Gunter (2002) investigated whether ERAN and N5 could be evoked preattentively in non-musicians. Findings showed that both ERAN and N5 can be elicited even in a situation where the musical stimulus is ignored by the listener indicating that there is a highly differentiated preattentive musicality in the human brain.
== Gender differences ==
Minor neurological differences regarding hemispheric processing exist between brains of males and females. Koelsch, Maess, Grossmann and Friederici (2003) investigated music processing through EEG and ERPs and discovered gender differences. Findings showed that females process music information bilaterally and males process music with a right-hemispheric predominance. However, the early negativity of males was also present over the left hemisphere. This indicates that males do not exclusively utilize the right hemisphere for musical information processing. In a follow-up study, Koelsch, Grossman, Gunter, Hahne, Schroger and Friederici (2003) found that boys show lateralization of the early anterior negativity in the left hemisphere but found a bilateral effect in girls. This indicates a developmental effect as early negativity is lateralized in the right hemisphere in men and in the left hemisphere in boys.
== Handedness differences ==
It has been found that subjects who are left-handed, particularly those who are also ambidextrous, perform better than right-handers on short-term memory for pitch.
It was hypothesized that this handedness advantage is due to left-handers having more duplication of storage in the two hemispheres than right-handers. Other work has shown that there are pronounced differences between right-handers and left-handers (on a statistical basis) in how musical patterns are perceived, when sounds come from different regions of space. This has been found, for example, in the Octave illusion and the Scale illusion.
== Musical imagery ==
Musical imagery refers to the experience of replaying music by imagining it inside the head. Musicians show a superior ability for musical imagery due to intense musical training. Herholz, Lappe, Knief and Pantev (2008) investigated the differences in neural processing of a musical imagery task in musicians and non-musicians. Utilizing magnetoencephalography (MEG), Herholz et al. examined differences in the processing of a musical imagery task with familiar melodies in musicians and non-musicians. Specifically, the study examined whether the mismatch negativity (MMN) can be based solely on imagery of sounds. The task involved participants listening to the beginning of a melody, continuing the melody in their head, and finally hearing a correct or incorrect tone as a further continuation of the melody. The imagery of these melodies was strong enough to obtain an early preattentive brain response to unanticipated violations of the imagined melodies in the musicians. These results indicate that similar neural correlates are relied upon for trained musicians' imagery and perception. Additionally, the findings suggest that modification of the imagery mismatch negativity (iMMN) through intense musical training results in a superior ability for imagery and preattentive processing of music.
Perceptual musical processes and musical imagery may share a neural substrate in the brain. A PET study conducted by Zatorre, Halpern, Perry, Meyer and Evans (1996) investigated cerebral blood flow (CBF) changes related to auditory imagery and perceptual tasks. These tasks examined the involvement of particular anatomical regions as well as functional commonalities between perceptual processes and imagery. Similar patterns of CBF changes provided evidence supporting the notion that imagery processes share a substantial neural substrate with related perceptual processes. Bilateral neural activity in the secondary auditory cortex was associated with both perceiving and imagining songs, implying that processes within the secondary auditory cortex underlie the phenomenological impression of imagined sounds. The supplementary motor area (SMA) was active in both imagery and perceptual tasks, suggesting covert vocalization as an element of musical imagery. CBF increases in the inferior frontal polar cortex and right thalamus suggest that these regions may be related to retrieval and/or generation of auditory information from memory.
== Emotion ==
Music is able to create an intensely pleasurable experience that can be described as "chills". Blood and Zatorre (2001) used PET to measure changes in cerebral blood flow while participants listened to music that they knew gave them "chills" or some sort of intensely pleasant emotional response. They found that as these chills increase, many changes in cerebral blood flow are seen in brain regions such as the amygdala, orbitofrontal cortex, ventral striatum, midbrain, and the ventral medial prefrontal cortex. Many of these areas appear to be linked to reward, motivation, emotion, and arousal, and are also activated in other pleasurable situations. The resulting pleasure responses enable the release of dopamine, serotonin, and oxytocin. The nucleus accumbens (a part of the striatum) is involved in both music-related emotion and rhythmic timing.
According to the National Institutes of Health, children and adults who suffer from emotional trauma have been able to benefit from the use of music in a variety of ways. Used therapeutically, music has helped children who struggle with focus, anxiety, and cognitive function. Music therapy has also helped children cope with autism, pediatric cancer, and pain from treatments.
Emotions induced by music activate similar frontal brain regions compared to emotions elicited by other stimuli. Schmidt and Trainor (2001) discovered that valence (i.e. positive vs. negative) of musical segments was distinguished by patterns of frontal EEG activity. Joyful and happy musical segments were associated with increases in left frontal EEG activity whereas fearful and sad musical segments were associated with increases in right frontal EEG activity. Additionally, the intensity of emotions was differentiated by the pattern of overall frontal EEG activity. Overall frontal region activity increased as affective musical stimuli became more intense.
When unpleasant melodies are played, the posterior cingulate cortex activates, which indicates a sense of conflict or emotional pain. The right hemisphere has also been found to be correlated with emotion, which can also activate areas in the cingulate in times of emotional pain, specifically social rejection (Eisenberger). This evidence, along with observations, has led many musical theorists, philosophers and neuroscientists to link emotion with tonality. This seems almost obvious because the tones in music seem like a characterization of the tones in human speech, which indicate emotional content. The vowels in the phonemes of a song are elongated for a dramatic effect, and it seems as though musical tones are simply exaggerations of the normal verbal tonality.
== Memory ==
=== Neuropsychology of musical memory ===
Musical memory involves both explicit and implicit memory systems. Explicit musical memory is further differentiated between episodic (where, when and what of the musical experience) and semantic (memory for music knowledge including facts and emotional concepts). Implicit memory centers on the 'how' of music and involves automatic processes such as procedural memory and motor skill learning – in other words, skills critical for playing an instrument. Samson and Baird (2009) found that the ability of musicians with Alzheimer's Disease to play an instrument (implicit procedural memory) may be preserved.
=== Neural correlates of musical memory ===
A PET study looking into the neural correlates of musical semantic and episodic memory found distinct activation patterns. Semantic musical memory involves the sense of familiarity of songs. The semantic memory for music condition resulted in bilateral activation in the medial and orbital frontal cortex, as well as activation in the left angular gyrus and the left anterior region of the middle temporal gyri. These patterns support the functional asymmetry favouring the left hemisphere for semantic memory. Left anterior temporal and inferior frontal regions that were activated in the musical semantic memory task produced activation peaks specifically during the presentation of musical material, suggesting that these regions are somewhat functionally specialized for musical semantic representations.
Episodic memory of musical information involves the ability to recall the former context associated with a musical excerpt. In the condition invoking episodic memory for music, activations were found bilaterally in the middle and superior frontal gyri and precuneus, with activation predominant in the right hemisphere. Other studies have found the precuneus to become activated in successful episodic recall. As it was activated in the familiar memory condition of episodic memory, this activation may be explained by the successful recall of the melody.
When it comes to memory for pitch, a dynamic and distributed brain network appears to subserve pitch memory processes. Gaab, Gaser, Zaehle, Jancke and Schlaug (2003) examined the functional anatomy of pitch memory using functional magnetic resonance imaging (fMRI). An analysis of performance scores in a pitch memory task resulted in a significant correlation between good task performance and activation of the supramarginal gyrus (SMG) as well as the dorsolateral cerebellum. Findings indicate that the dorsolateral cerebellum may act as a pitch discrimination processor and the SMG may act as a short-term pitch information storage site. The left hemisphere was found to be more prominently involved in the pitch memory task than the right hemisphere.
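The analysis described above relates a behavioural score to a measure of regional brain activation across participants. As a minimal, illustrative sketch of that kind of correlation (with invented numbers, not data from the Gaab et al. study), the computation can be written as:

```python
# Minimal sketch: correlating pitch-memory performance with mean activation
# in a region of interest across participants. All values are invented
# purely for illustration.
from scipy.stats import pearsonr

performance_scores = [0.62, 0.71, 0.55, 0.80, 0.90, 0.67, 0.75, 0.84]  # task accuracy per participant
roi_activation = [1.1, 1.4, 0.9, 1.8, 2.1, 1.2, 1.5, 1.9]              # e.g. mean SMG signal change

r, p = pearsonr(performance_scores, roi_activation)
print(f"r = {r:.2f}, p = {p:.3f}")  # a positive r would mirror the reported performance-activation link
```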
=== Therapeutic effects of music on memory ===
Musical training has been shown to aid memory. Altenmuller et al. studied the difference between active and passive musical instruction and found that, over a longer (but not a short) period of time, the actively taught students retained much more information than the passively taught students. The actively taught students were also found to have greater cerebral cortex activation. The passively taught students did not go without benefit, however; they, along with the active group, displayed greater left hemisphere activity, which is typical of trained musicians.
Research suggests we listen to the same songs repeatedly because of musical nostalgia. One major study, published in the journal Memory & Cognition, found that music enables the mind to evoke memories of the past, known as music-evoked autobiographical memories.
== Attention ==
Treder et al. identified neural correlates of attention when listening to simplified polyphonic music patterns. In a musical oddball experiment, they had participants shift selective attention to one out of three different instruments in music audio clips, with each instrument occasionally playing one or several notes deviating from an otherwise repetitive pattern. Contrasting attended versus unattended instruments, ERP analysis shows subject- and instrument-specific responses including P300 and early auditory components. The attended instrument could be classified offline with high accuracy. This indicates that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for building more ergonomic music-listening based brain-computer interfaces.
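A typical offline analysis for this kind of paradigm segments the EEG into epochs around note onsets, compares attended and unattended epochs for ERP components such as the P300, and trains a classifier on single-trial epochs. The sketch below illustrates that general approach on synthetic data with scikit-learn; the epoch window, features, and classifier are assumptions for illustration, not the specific methods of Treder et al.

```python
# Minimal sketch of single-trial ERP classification (attended vs. unattended
# instrument) in a musical oddball setting. Data are synthetic and the
# feature/classifier choices are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 32, 100   # e.g. 0-800 ms epochs per note
labels = rng.integers(0, 2, size=n_trials)       # 1 = note from the attended instrument

# Synthetic epochs: attended trials receive a small "P300-like" deflection.
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
epochs[labels == 1, :, 40:60] += 0.5

# Feature vector per trial: channel-wise means over consecutive time windows,
# a crude stand-in for the spatio-temporal features used in ERP decoding.
windows = np.array_split(np.arange(n_samples), 10)
features = np.stack(
    [np.concatenate([epochs[i][:, w].mean(axis=1) for w in windows]) for i in range(n_trials)]
)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```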
== Development ==
Musical four-year-olds have been found to have greater left hemisphere intrahemispheric coherence. Musicians have been found to have more developed anterior portions of the corpus callosum in a study by Cowell et al. in 1992. This was confirmed by a study by Schlaug et al. in 1995 that found that classical musicians between the ages of 21 and 36 have significantly greater anterior corpora callosa than non-musical controls. Schlaug also found a strong correlation between musical exposure before the age of seven and a large increase in the size of the corpus callosum. These fibers join together the left and right hemispheres and indicate increased relaying between both sides of the brain. This suggests a merging between the spatial and emotiono-tonal processing of the right brain and the linguistic processing of the left brain. This large amount of relaying across many different areas of the brain might contribute to music's ability to aid in memory function.
== Impairment ==
=== Focal hand dystonia ===
Focal hand dystonia is a task-related movement disorder associated with occupational activities that require repetitive hand movements. Focal hand dystonia is associated with abnormal processing in the premotor and primary sensorimotor cortices. An fMRI study examined five guitarists with focal hand dystonia. The study reproduced task-specific hand dystonia by having guitarists use a real guitar neck inside the scanner as well as performing a guitar exercise to trigger abnormal hand movement. The dystonic guitarists showed significantly more activation of the contralateral primary sensorimotor cortex as well as a bilateral underactivation of premotor areas. This activation pattern represents abnormal recruitment of the cortical areas involved in motor control. Even in professional musicians, widespread bilateral cortical region involvement is necessary to produce complex hand movements such as scales and arpeggios. The abnormal shift from premotor to primary sensorimotor activation directly correlates with guitar-induced hand dystonia.
=== Music agnosia ===
Music agnosia, an auditory agnosia, is a syndrome of selective impairment in music recognition. Three cases of music agnosia are examined by Dalla Bella and Peretz (1999): C.N., G.L., and I.R. All three of these patients suffered bilateral damage to the auditory cortex which resulted in musical difficulties while speech understanding remained intact. Their impairment is specific to the recognition of once familiar melodies. They are spared in recognizing environmental sounds and in recognizing lyrics. Peretz (1996) has studied C.N.'s music agnosia further and reports an initial impairment of pitch processing and spared temporal processing. C.N. later recovered in pitch processing abilities but remained impaired in tune recognition and familiarity judgments.
Musical agnosias may be categorized based on the process which is impaired in the individual. Apperceptive music agnosia involves an impairment at the level of perceptual analysis involving an inability to encode musical information correctly. Associative music agnosia reflects an impaired representational system which disrupts music recognition. Many of the cases of music agnosia have resulted from surgery involving the middle cerebral artery. Patient studies have amassed a large amount of evidence demonstrating that the left side of the brain is more suitable for holding long-term memory representations of music and that the right side is important for controlling access to these representations. Associative music agnosias tend to be produced by damage to the left hemisphere, while apperceptive music agnosia reflects damage to the right hemisphere.
=== Congenital amusia ===
Congenital amusia, otherwise known as tone deafness, is a term for lifelong musical problems which are not attributable to intellectual disability, lack of exposure to music or deafness, or brain damage after birth. Amusic brains have been found in fMRI studies to have less white matter and thicker cortex than controls in the right inferior frontal cortex. These differences suggest abnormal neuronal development in the auditory cortex and inferior frontal gyrus, two areas which are important in musical-pitch processing.
Studies on those with amusia suggest different processes are involved in speech tonality and musical tonality. Congenital amusics lack the ability to distinguish between pitches and so are, for example, unmoved by dissonance or by a wrong key played on a piano. They also cannot be taught to remember a melody or to recite a song; however, they are still capable of hearing the intonation of speech, for example, distinguishing between "You speak French" and "You speak French?" when spoken.
=== Amygdala damage ===
Damage to the amygdala has selective emotional impairments on musical recognition. Gosselin, Peretz, Johnsen and Adolphs (2007) studied S.M., a patient with bilateral damage of the amygdala with the rest of the temporal lobe undamaged and found that S.M. was impaired in recognition of scary and sad music. S.M.'s perception of happy music was normal, as was her ability to use cues such as tempo to distinguish between happy and sad music. It appears that damage specific to the amygdala can selectively impair recognition of scary music.
=== Selective deficit in music reading ===
Specific musical impairments may result from brain damage leaving other musical abilities intact. Cappelletti, Waley-Cohen, Butterworth and Kopelman (2000) studied a single case study of patient P.K.C., a professional musician who sustained damage to the left posterior temporal lobe as well as a small right occipitotemporal lesion. After sustaining damage to these regions, P.K.C. was selectively impaired in the areas of reading, writing and understanding musical notation but maintained other musical skills. The ability to read aloud letters, words, numbers and symbols (including musical ones) was retained. However, P.K.C. was unable to read aloud musical notes on the staff regardless of whether the task involved naming with the conventional letter or by singing or playing. Yet despite this specific deficit, P.K.C. retained the ability to remember and play familiar and new melodies.
=== Auditory arrhythmia ===
Arrhythmia in the auditory modality is defined as a disturbance of rhythmic sense, and includes deficits such as the inability to rhythmically perform music, the inability to keep time to music and the inability to discriminate between or reproduce rhythmic patterns. A study investigating the elements of rhythmic function examined Patient H.J., who acquired arrhythmia after sustaining a right temporoparietal infarct. Damage to this region impaired H.J.'s central timing system, which is essentially the basis of his global rhythmic impairment. H.J. was unable to generate steady pulses in a tapping task. These findings suggest that keeping a musical beat relies on functioning in the right temporal auditory cortex.
== References ==
== External links ==
MusicCognition.info - A Resource and Information Center | Wikipedia/Neuroscience_of_music |
Electrotherapy is the use of electrical energy as a medical treatment. In medicine, the term electrotherapy can apply to a variety of treatments, including the use of electrical devices such as deep brain stimulators for neurological disease. Electrotherapy is a part of neurotherapy aimed at changing neuronal activity. The term has also been applied specifically to the use of electric current to speed up wound healing. Electromagnetic stimulation (EMS) is also widely used to treat muscular pain. Additionally, the term "electrotherapy" or "electromagnetic therapy" has also been applied to a range of alternative medical devices and treatments. Evidence supporting the effectiveness of electrotherapy is limited (see section Medical uses below).
== Medical uses ==
Electrotherapy is primarily used in physical therapy for:
relaxation of muscle spasms
prevention and retardation of disuse atrophy
increase of local blood circulation
muscle rehabilitation and re-education
electrical muscle stimulation
maintaining and increasing range of motion
management of chronic and intractable pain including diabetic neuropathy
acute post-traumatic and post-surgical pain
post-surgical stimulation of muscles to prevent venous thrombosis
wound healing
drug delivery
There is limited evidence supporting electrotherapy, specifically in treating musculoskeletal conditions, osteoarthritis, fibromyalgia, neck pain, lumbopelvic pain, and ulcers. Some of the mechanisms behind treatment effectiveness are little understood. The natural neurostimulation hypothesis attributes the therapeutic effect to energy stimuli that induce mitochondrial stress and microvascular vasodilation. Because, on this hypothesis, healthy neurostimulation should emulate the physical characteristics of a mother's care for her fetus during pregnancy, scaled to the treatment parameters of the specific patient, and because many electrotherapy techniques do not take this into account, the hypothesis holds that their effectiveness and some practices for their use remain anecdotal.
=== Musculoskeletal conditions ===
In general, there is little evidence that electrotherapy is effective in the management of musculoskeletal conditions.
In particular, there is no evidence that electrotherapy is effective in the relief of pain arising from osteoarthritis, and little to no evidence is available to support electrotherapy for the management of fibromyalgia.
==== Neck and back pain ====
A 2016 review found that, based on "evidence of no effectiveness", clinicians should not offer electrotherapy for the treatment of neck pain or associated disorders.
Earlier reviews found that no conclusions could be drawn about the effectiveness of electrotherapy for neck pain, and that electrotherapy has a limited effect on neck pain as measured by clinical results. A later 2023 review confirmed this conclusion, finding that there is limited high-quality evidence for the use of electromagnetic stimulation for pain relief.
A 2015 review found that the evidence for electrotherapy in pregnancy-related lower back pain is "very limited".
==== Shoulder disorders ====
A 2014 Cochrane review found insufficient evidence to determine whether electrotherapy was better than exercise at treating adhesive capsulitis.
As of 2004, there is insufficient evidence to draw conclusions about any intervention for rotator cuff pathology, including electrotherapy; furthermore, methodological problems precluded drawing conclusions about the efficacy of any rehabilitation method for impingement syndrome.
==== Other musculoskeletal disorders ====
There is limited, low quality evidence for a slight benefit of noxious-level electrotherapy in the treatment of epicondylitis.
A 2012 review found that "Small, single studies showed that some electrotherapy modalities may be beneficial" in rehabilitating ankle bone fractures, but the 2024 update of this review does not address electrotherapy. However, a 2008 review found it to be ineffective in healing long-bone fractures.
A 2012 review found that evidence that electrotherapy contributes to recovery from knee conditions is of "limited quality".
=== Chronic pain ===
A 2016 Cochrane review found that supporting evidence for electrotherapy as a treatment for complex regional pain syndrome is "absent or unclear."
=== Chronic wounds ===
A 2015 review found that the evidence supporting the use of electrotherapy in healing pressure ulcers was of low quality, and a 2015 Cochrane review found no evidence that electromagnetic therapy, a subset of electrotherapy, was effective in healing pressure ulcers. Earlier reviews found that, because of low-quality evidence, it was unclear whether electrotherapy increases healing rates of pressure ulcers. By 2014 the evidence supported electrotherapy's efficacy for ulcer healing.
Another 2015 Cochrane review found no evidence supporting the use of electrotherapy for venous stasis ulcers.
=== Mental health and mood disorders ===
Since the 1950s, over 150 published articles have found a positive outcome in using cranial electrostimulation (CES) to treat depression, anxiety, and insomnia.
== Contraindications ==
Electrotherapy is contraindicated for people with:
medical implants or stimulators like a cardiac pacemaker
certain cardiovascular diseases
pregnancy
deep vein thrombosis
cognitive impairment
== History ==
The first recorded treatment of a patient by electricity was by Johann Gottlob Krüger in 1743. John Wesley promoted electrical treatment as a universal panacea in 1747, but it was rejected by mainstream medicine. Giovanni Aldini treated insanity with static electricity in 1823–1824.
The first recorded medical treatments with electricity in London were given in 1767 at Middlesex Hospital, using a special apparatus. The same apparatus was purchased for St. Bartholomew's Hospital ten years later. Guy's Hospital has a published list of cases from the early 19th century. Golding Bird at Guy's brought electrotherapy into the mainstream in the mid-19th century. In the second half of the 19th century the emphasis moved from delivering large shocks to the whole body to more measured doses, the minimum needed to be effective.
=== Apparatus ===
Electrotherapy equipment has historically included:
The electric bath for high-voltage static induction
Oudin coil, a high-voltage induction coil, in use around 1900
Faradic Battery, a device to provide localised electric stimulation
Pulvermacher's chain, a wearable electrochemical device mostly used by quacks, in use second half of 19th century
Leyden jars, an early form of capacitor, for storing electricity
Electrostatic generators of various sorts
=== People ===
Some important people in the history of electrotherapy include:
Luigi Galvani, a pioneer of medical electricity
Benjamin Franklin, an early proponent of electrotherapy who made it widely known, but mostly taken up by quacks and charlatans
Golding Bird, mentioned above
Charles Grafton Page
Duchenne de Boulogne
Jacques-Arsène d'Arsonval
George Miller Beard
Margaret Cleaves, a promoter of ozone therapy
Many of the forms of electricity used in electrotherapy were named after scientists.
==== Notable historic fringe practitioners ====
James Graham
Franz Mesmer
=== Muscle stimulation ===
In 1856 Guillaume Duchenne announced that alternating current was superior to direct current for electrotherapeutic triggering of muscle contractions. What he called the 'warming effect' of direct currents irritated the skin, since, at voltage strengths needed for muscle contractions, they cause the skin to blister (at the anode) and pit (at the cathode). Furthermore, with DC each contraction required the current to be stopped and restarted. Moreover, alternating current could produce strong muscle contractions regardless of the condition of the muscle, whereas DC-induced contractions were strong if the muscle was strong, and weak if the muscle was weak.
Since that time almost all rehabilitation involving muscle contraction has been done with a symmetrical rectangular biphasic waveform. During the 1940s, however, the U.S. War Department, investigating the application of electrical stimulation not just to retard and prevent atrophy but to restore muscle mass and strength, employed what was termed galvanic exercise on the atrophied hands of patients who had an ulnar nerve lesion from surgery upon a wound. These galvanic exercises employed a monophasic (single-pulse) direct current waveform.
The American Physical Therapy Association, a professional organization representing physical therapists, accepts the use of electrotherapy in the field of physical therapy.
== See also ==
Cranial electrotherapy stimulation
Electrical brain stimulation
Electroanalgesia
Electroconvulsive therapy
Electrotherapy (cosmetic)
Galvanic bath
Microcurrent electrical neuromuscular stimulator
Neuromuscular diagnostics
Neurotherapy
Pulsed electromagnetic field therapy
Transcranial direct-current stimulation
Transcranial magnetic stimulation
Transcutaneous electrical nerve stimulation
Vagus nerve stimulation
== References ==
== External links ==
Irmak R (2020). Atlas of Electrotherapy -I : Waveforms and Electrode Placements for Cervical, Thoracic and Lumbosacral Regions. Rafet Irmak. ISBN 978-605-89408-7-1.
Tim Watson's site | Wikipedia/Electrotherapy |
Behaviour therapy or behavioural psychotherapy is a broad term referring to clinical psychotherapy that uses techniques derived from behaviourism and/or cognitive psychology. It looks at specific, learned behaviours and how the environment, or other people's mental states, influence those behaviours, and consists of techniques based on behaviorism's theory of learning: respondent or operant conditioning. Behaviourists who practice these techniques are either behaviour analysts or cognitive-behavioural therapists. They tend to look for treatment outcomes that are objectively measurable. Behaviour therapy does not involve one specific method, but it has a wide range of techniques that can be used to treat a person's psychological problems.
Behavioural psychotherapy is sometimes juxtaposed with cognitive psychotherapy, while cognitive behavioural therapy integrates aspects of both approaches, such as cognitive restructuring, positive reinforcement, habituation (or desensitisation), counterconditioning, and modelling.
Applied behaviour analysis (ABA) is the application of behaviour analysis that focuses on functionally assessing how behaviour is influenced by the observable learning environment and how to change such behaviour through contingency management or exposure therapies, which are used throughout clinical behaviour analysis therapies or other interventions based on the same learning principles.
Cognitive-behavioural therapy views cognition and emotions as preceding overt behaviour and implements treatment plans in psychotherapy to lessen the issue by managing competing thoughts and emotions, often in conjunction with behavioural learning principles.
A 2013 Cochrane review comparing behaviour therapies to psychological therapies found them to be equally effective, although at the time the evidence base that evaluates the benefits and harms of behaviour therapies was weak.
== History ==
Precursors of certain fundamental aspects of behaviour therapy have been identified in various ancient philosophical traditions, particularly Stoicism. For example, Wolpe and Lazarus wrote,
While the modern behavior therapist deliberately applies principles of learning to his therapeutic operations, empirical behavior therapy is probably as old as civilization – if we consider civilization as having started when man first did things to further the well-being of other men. From the time that this became a feature of human life there must have been occasions when a man complained of his ills to another who advised or persuaded him of a course of action. In a broad sense, this could be called behavior therapy whenever the behavior itself was conceived as the therapeutic agent. Ancient writings contain innumerable behavioral prescriptions that accord with this broad conception of behavior therapy.
The first use of the term behaviour modification appears to have been by Edward Thorndike in 1911. His article Provisional Laws of Acquired Behavior or Learning makes frequent use of the term "modifying behavior". Through early research in the 1940s and the 1950s the term was used by Joseph Wolpe's research group. The experimental tradition in clinical psychology used it to refer to psycho-therapeutic techniques derived from empirical research. It has since come to refer mainly to techniques for increasing adaptive behaviour through reinforcement and decreasing maladaptive behaviour through extinction or punishment (with emphasis on the former). Two related terms are behaviour therapy and applied behaviour analysis. Since techniques derived from behavioural psychology tend to be the most effective in altering behaviour, most practitioners consider behaviour modification along with behaviour therapy and applied behaviour analysis to be founded in behaviourism. While behaviour modification and applied behaviour analysis typically use interventions based on the same behavioural principles, many behaviour modifiers who are not applied behaviour analysts tend to use packages of interventions and do not conduct functional assessments before intervening.
Possibly the first occurrence of the term "behavior therapy" was in a 1953 research project by B.F. Skinner, Ogden Lindsley, Nathan Azrin and Harry C. Solomon. The paper talked about operant conditioning and how it could be used to help improve the functioning of people who were diagnosed with chronic schizophrenia. Early pioneers in behaviour therapy include Joseph Wolpe and Hans Eysenck.
In general, behaviour therapy is seen as having three distinct points of origin: South Africa (Wolpe's group), the United States (Skinner), and the United Kingdom (Rachman and Eysenck). Each had its own distinct approach to viewing behaviour problems. Eysenck in particular viewed behaviour problems as an interplay between personality characteristics, environment, and behaviour. Skinner's group in the United States took more of an operant conditioning focus. The operant focus created a functional approach to assessment and interventions focused on contingency management such as the token economy and behavioural activation. Skinner's student Ogden Lindsley is credited with forming a movement called precision teaching, which developed a particular type of graphing program called the standard celeration chart to monitor the progress of clients. Skinner became interested in the individualising of programs for improved learning in those with or without disabilities and worked with Fred S. Keller to develop programmed instruction. Programmed instruction had some clinical success in aphasia rehabilitation. Gerald Patterson used programmed instruction to develop his parenting text for children with conduct problems (see Parent management training). With age, respondent conditioning appears to slow but operant conditioning remains relatively stable. While the concept had its share of advocates and critics in the West, its introduction in the Asian setting, particularly in India in the early 1970s, and its grand success there were a testament to the famous Indian psychologist H. Narayan Murthy's enduring commitment to the principles of behavioural therapy and biofeedback.
While many behaviour therapists remain staunchly committed to the basic operant and respondent paradigm, in the second half of the 20th century many therapists coupled behaviour therapy with the cognitive therapy of Aaron Beck, Albert Ellis, and Donald Meichenbaum to form cognitive behaviour therapy. In some areas the cognitive component had an additive effect (for example, evidence suggests that cognitive interventions improve the result of social phobia treatment), but in other areas it did not enhance the treatment, which led to the pursuit of third generation behaviour therapies. Third generation behaviour therapy uses basic principles of operant and respondent psychology but couples them with functional analysis and a clinical formulation/case conceptualisation of verbal behaviour more in line with the view of the behaviour analysts. Some research supports these therapies as being more effective in some cases than cognitive therapy, but overall the question is still in need of answers.
== Theoretical basis ==
The behavioural approach to therapy assumes that behaviour that is associated with psychological problems develops through the same processes of learning that affect the development of other behaviours. Therefore, behaviourists see personality problems in terms of how the personality was developed. They do not look at behaviour disorders as something a person has, but consider that they reflect how learning has influenced certain people to behave in a certain way in certain situations.
Behaviour therapy is based upon the principles of classical conditioning developed by Ivan Pavlov and operant conditioning developed by B.F. Skinner. Classical conditioning happens when a neutral stimulus comes right before another stimulus that triggers a reflexive response. The idea is that if the neutral stimulus and the stimulus that triggers the response are paired together often enough, the neutral stimulus alone will come to produce the reflexive response. Operant conditioning has to do with rewards and punishments and how they can either increase or decrease certain behaviours.
Contingency management programs are a direct product of research from operant conditioning.
== Current forms ==
Behavioural therapy based on operant and respondent principles has a considerable evidence base to support its usage. This approach remains a vital area of clinical psychology and is often termed clinical behavior analysis. Behavioural psychotherapy has become increasingly contextual in recent years, and has developed a greater interest in personality disorders as well as a greater focus on acceptance and complex case conceptualizations.
=== Functional analytic psychotherapy ===
One current form of behavioural psychotherapy is functional analytic psychotherapy. Functional analytic psychotherapy is a longer duration behaviour therapy. Functional analytic therapy focuses on in-session use of reinforcement and is primarily a relationally based therapy. As with most of the behavioural psychotherapies, functional analytic psychotherapy is contextual in its origins and nature, and draws heavily on radical behaviourism and functional contextualism.
Functional analytic psychotherapy holds to a process model of research, which makes it unique compared to traditional behaviour therapy and cognitive behavioural therapy.
Functional analytic psychotherapy has strong research support. Recent functional analytic psychotherapy research efforts are focusing on management of aggressive inpatients.
== Assessment ==
Behaviour therapists complete a functional analysis or a functional assessment that looks at four important areas: stimulus, organism, response and consequences. The stimulus is the condition or environmental trigger that causes behaviour. An organism involves the internal responses of a person, like physiological responses, emotions and cognition. A response is the behaviour that a person exhibits and the consequences are the result of the behaviour. These four things are incorporated into an assessment done by the behaviour therapist.
Most behaviour therapists use objective assessment methods like structured interviews, objective psychological tests or different behavioural rating forms. These types of assessments are used so that the behaviour therapist can determine exactly what a client's problem may be and establish a baseline for any maladaptive responses that the client may have. By having this baseline, as therapy continues this same measure can be used to check a client's progress, which can help determine if the therapy is working. Behaviour therapists do not typically ask the why questions but tend to be more focused on the how, when, where and what questions. Tests such as the Rorschach inkblot test or personality tests like the MMPI (Minnesota Multiphasic Personality Inventory) are not commonly used for behavioural assessment because they are based on personality trait theory, which assumes that a person's answers to these methods can predict behaviour. Behavioural assessment is more focused on the observations of a person's behaviour in their natural environment.
Behavioural assessment specifically attempts to find out what the environmental and self-imposed variables are. These variables are the things that are allowing a person to maintain their maladaptive feelings, thoughts and behaviours. In a behavioural assessment "person variables" are also considered. These "person variables" come from a person's social learning history and they affect the way in which the environment affects that person's behaviour. An example of a person variable would be behavioural competence. Behavioural competence looks at whether a person has the appropriate skills and behaviours that are necessary when performing a specific response to a certain situation or stimuli.
When making a behavioural assessment the behaviour therapist wants to answer two questions: (1) what are the different factors (environmental or psychological) that are maintaining the maladaptive behaviour and (2) what type of behaviour therapy or technique can help the individual improve most effectively. The first question involves looking at all aspects of a person, which can be summed up by the acronym BASIC ID. This acronym stands for behaviour, affective responses, sensory reactions, imagery, cognitive processes, interpersonal relationships and drug use.
== Clinical applications ==
Behaviour therapy based its core interventions on functional analysis. Just a few of the many problems that behaviour therapy has functionally analyzed include intimacy in couples relationships, forgiveness in couples, chronic pain, stress-related behaviour problems of being an adult child of a person with an alcohol use disorder, anorexia, chronic distress, substance abuse, depression, anxiety, insomnia and obesity.
Functional analysis has even been applied to problems that therapists commonly encounter like client resistance, partially engaged clients and involuntary clients. Applications to these problems have left clinicians with considerable tools for enhancing therapeutic effectiveness. One way to enhance therapeutic effectiveness is to use positive reinforcement or operant conditioning. Although behaviour therapy is based on the general learning model, it can be applied in a lot of different treatment packages that can be specifically developed to deal with problematic behaviours. Some of the more well known types of treatments are: Relaxation training, systematic desensitization, virtual reality exposure, exposure and response prevention techniques, social skills training, modelling, behavioural rehearsal and homework, and aversion therapy and punishment.
Relaxation training involves clients learning to lower arousal to reduce their stress by tensing and releasing certain muscle groups throughout their body. Systematic desensitization is a treatment in which the client slowly substitutes a new learned response for a maladaptive response by moving up a hierarchy of situations involving fear. Systematic desensitization is based in part on counterconditioning. Counterconditioning is learning new ways to change one response for another, and in the case of desensitization it means substituting a more relaxed response for the maladaptive behaviour. Exposure and response prevention techniques (also known as flooding and response prevention) is the general technique in which a therapist exposes an individual to anxiety-provoking stimuli while keeping them from having any avoidance responses.
Virtual reality therapy provides realistic, computer-based simulations of troublesome situations. The modelling process involves a person watching other individuals who demonstrate behaviour that is considered adaptive and that should be adopted by the client. This exposure involves not only the cues of the "model person" but also the situations in which a certain behaviour occurs, so that the relationship between the appropriateness of a behaviour and the situation in which it occurs can be seen. With the behavioural rehearsal and homework treatment, a client practices a desired behaviour during a therapy session and then practices and records that behaviour between sessions. Aversion therapy and punishment is a technique in which an aversive (painful or unpleasant) stimulus is used to decrease unwanted behaviours. It is concerned with two procedures: (1) procedures used to decrease the likelihood or frequency of a certain behaviour and (2) procedures that reduce the attractiveness of certain behaviours and the stimuli that elicit them. In the punishment side of aversion therapy, an aversive stimulus is presented at the same time as the stimulus associated with the unwanted behaviour, and both are stopped when a positive stimulus or response is presented. Examples of the types of aversive stimulus or punishment that can be used are electric shock treatments, aversive drug treatments, and response cost (contingent punishment), which involves taking away a reward.
Applied behaviour analysis is the use of behavioural methods to modify certain behaviours that are seen as being important socially or personally. There are four main characteristics of applied behaviour analysis. First, behaviour analysis is focused mainly on overt behaviours in an applied setting. Treatments are developed as a way to alter the relationship between those overt behaviours and their consequences.
Another characteristic of applied behaviour analysis is how it evaluates treatment effects. The focus of study is the individual subject; the investigation is centred on the one individual being treated. A third characteristic is that it focuses on what the environment does to cause significant behaviour changes. Finally, the last characteristic of applied behaviour analysis is the use of techniques that stem from operant and classical conditioning, such as providing reinforcement, punishment, stimulus control and any other learning principles that may apply.
Social skills training teaches clients skills to access reinforcers and lessen life punishment. Operant conditioning procedures in meta-analysis had the largest effect size for training social skills, followed by modelling, coaching, and social cognitive techniques in that order. Social skills training has some empirical support particularly for schizophrenia. However, with schizophrenia, behavioural programs have generally lost favour.
Some other techniques that have been used in behaviour therapy are contingency contracting, response costs, token economies, biofeedback, and using shaping and grading task assignments.
Shaping and graded task assignments are used when behaviour that needs to be learned is complex. The complex behaviours that need to be learned are broken down into simpler steps where the person can achieve small things, gradually building up to the more complex behaviour. Each step approximates the eventual goal and helps the person to expand their activities in a gradual way. This technique is used when a person feels that something in their life cannot be changed and life's tasks appear to be overwhelming.
Another technique of behaviour therapy involves holding a client or patient accountable for their behaviours in an effort to change them. This is called a contingency contract, which is a formal written contract between two or more people that defines the specific expected behaviours to be changed and the rewards and punishments that go along with those behaviours. In order for a contingency contract to be official it needs to have five elements. First, it must state what each person will get if they successfully complete the desired behaviour. Second, the people involved have to monitor the behaviours. Third, if the desired behaviour is not being performed in the way that was agreed upon in the contract, the punishments that were defined in the contract must be applied. Fourth, if the persons involved are complying with the contract they must receive bonuses. The last element involves documenting the compliance and noncompliance while using this treatment in order to give the persons involved consistent feedback about the target behaviour and the provision of reinforcers.
A token economy is a behaviour therapy technique in which clients are reinforced with tokens, a type of currency that can be used to purchase desired rewards, such as being able to watch television or getting a desired snack, when they perform designated behaviours. Token economies are mainly used in institutional and therapeutic settings. In order for a token economy to be effective there must be consistency in administering the program by the entire staff. Procedures must be clearly defined so that there is no confusion among the clients. Instead of looking for ways to punish the patients or to deny them rewards, the staff has to reinforce the positive behaviours so that the clients will increase the occurrence of the desired behaviour. Over time the tokens need to be replaced with less tangible rewards such as compliments so that the client will be prepared when they leave the institution and will not expect to get something every time they perform a desired behaviour.
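As a rough illustration of the bookkeeping such a programme involves, the sketch below models a client earning tokens for designated target behaviours and exchanging them for backup rewards from a menu. The behaviours, token values, and reward costs are invented for the example and are not taken from any specific programme described here.

```python
# Minimal sketch of token-economy bookkeeping: tokens are earned for
# designated target behaviours and exchanged for backup rewards.
# All behaviours, token values and reward costs are invented examples.
from dataclasses import dataclass, field

TOKEN_VALUES = {"attended group session": 2, "completed chore": 1, "took medication on time": 1}
REWARD_COSTS = {"extra TV time": 3, "preferred snack": 2}

@dataclass
class ClientAccount:
    name: str
    balance: int = 0
    history: list = field(default_factory=list)

    def reinforce(self, behaviour: str) -> None:
        """Award tokens immediately and consistently for a target behaviour."""
        tokens = TOKEN_VALUES[behaviour]
        self.balance += tokens
        self.history.append((behaviour, tokens))

    def redeem(self, reward: str) -> bool:
        """Exchange tokens for a backup reward if the balance allows it."""
        cost = REWARD_COSTS[reward]
        if self.balance < cost:
            return False
        self.balance -= cost
        self.history.append((reward, -cost))
        return True

if __name__ == "__main__":
    client = ClientAccount("A.B.")
    client.reinforce("attended group session")
    client.reinforce("completed chore")
    print(client.redeem("preferred snack"), client.balance)  # True 1
```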
Closely related to token economies is a technique called response cost. This technique can be used with or without token economies. Response cost is the punishment side of token economies, in which a reward or privilege is lost after someone performs an undesirable behaviour. Like token economies this technique is used mainly in institutional and therapeutic settings.
Considerable policy implications have been inspired by behavioural views of various forms of psychopathology. One form of behaviour therapy, habit reversal training, has been found to be highly effective for treating tics.
=== In rehabilitation ===
Currently, there is a greater call for behavioural psychologists to be involved in rehabilitation efforts.
=== Treatment of mental disorders ===
Two large studies done by the Faculty of Health Sciences at Simon Fraser University indicate that both behaviour therapy and cognitive-behavioural therapy (CBT) are equally effective for OCD. CBT is typically considered the "first-line" treatment for OCD. CBT has also been shown to perform slightly better at treating co-occurring depression.
Considerable policy implications have been inspired by behavioural views of various forms of psychopathology. One form of behaviour therapy (habit reversal training) has been found to be highly effective for treating tics.
There has been a development towards combining techniques to treat psychiatric disorders. Cognitive interventions are used to enhance the effects of more established behavioural interventions based on operant and classical conditioning. An increased effort has also been placed to address the interpersonal context of behaviour.
Behaviour therapy can be applied to a number of mental disorders and in many cases is more effective for specific disorders as compared to others. Behaviour therapy techniques can be used to deal with any phobias that a person may have. Desensitization has also been successfully applied to other issues such as dealing with anger, trouble sleeping and certain speech disorders. Desensitization does not occur overnight; it is a gradual process of treatment. Desensitization is done on a hierarchy and happens over a number of sessions. The hierarchy goes from situations that make a person less anxious or nervous up to things that are considered to be extreme for the patient.
Modelling has been used in dealing with fears and phobias. Fears are thought to develop through observational learning, and so positive modelling, in which a person's behaviour is imitated, can be used to counter these effects. In a systematic review of 1,677 papers, positive modelling was found to lower fear levels. Modelling has been used in the treatment of fear of snakes as well as a fear of water.
Aversive therapy techniques have been used to treat sexual deviations, as well as alcohol use disorder.
Exposure and prevention procedure techniques can be used to treat people who have anxiety problems as well as any fears or phobias. These procedures have also been used to help people dealing with any anger issues as well as pathological grievers (people who have distressing thoughts about a deceased person).
Virtual reality therapy deals with fear of heights, fear of flying, and a variety of other anxiety disorders. VRT has also been applied to help people with substance abuse problems reduce their responsiveness to certain cues that trigger their need to use drugs.
Shaping and graded task assignments have been used in dealing with suicide and depressed or inhibited individuals. This is used when a patient feels hopeless and sees no way of changing their life. This hopelessness involves how the person reacts and responds to someone else and certain situations and their perceived powerlessness to change that situation, which adds to the hopelessness. For a person with suicidal ideation, it is important to start with small steps. Because that person may perceive everything as being a big step, the smaller the starting step, the easier it will be for the person to master each one. This technique has also been applied to people dealing with agoraphobia, or fear of being in public places or doing something embarrassing.
Contingency contracting has been used to effectively deal with behaviour problems in delinquents and when dealing with on task behaviours in students.
Token economies are used in controlled environments and are found mostly in psychiatric hospitals. They can be used to help patients with different mental illnesses, but they do not focus on the treatment of the mental illness itself, focusing instead on the behavioural aspects of a patient. The response cost technique has been used to successfully address a variety of behaviours such as smoking, overeating, stuttering, and psychotic talk.
=== Treatment outcomes ===
Systematic desensitization has been shown to successfully treat phobias about heights, driving, insects as well as any anxiety that a person may have. Anxiety can include social anxiety, anxiety about public speaking as well as test anxiety. It has been shown that the use of systematic desensitization is an effective technique that can be applied to a number of problems that a person may have.
When using modelling procedures this technique is often compared to another behavioural therapy technique. When compared to desensitization, the modelling technique does appear to be less effective. However, it is clear that the greater the interaction between the patient and the person being modelled, the greater the effectiveness of the treatment.
While undergoing exposure therapy, a person typically needs five sessions to assess the treatment's effectiveness. After five sessions, exposure treatment has been shown to provide benefit to the patient. However, it is still recommended that treatment continue beyond the initial five sessions.
Virtual reality therapy (VRT) has been shown to be effective for a fear of heights. It has also been shown to help with the treatment of a variety of anxiety disorders. Due to the costs associated with VRT, as of 2007 therapists were still awaiting results of controlled trials investigating VRT, to assess which applications demonstrate the best results.
For those with suicidal ideation, treatment depends on how severe the person's depression and sense of hopelessness is. If these are severe, the person's response to completing small steps will not be of importance to them, because they do not consider the success an accomplishment. Generally, in those without severe depression or fear, this technique has been successful, as completion of simpler activities builds their confidence and allows them to progress to more complex situations.
Contingency contracts have been seen to be effective in changing undesired behaviours of individuals. They have been seen to be effective in treating behaviour problems in delinquents regardless of the specific characteristics of the contract.
Token economies have been shown to be effective when treating patients in psychiatric wards who had chronic schizophrenia. The results showed that the contingent tokens were controlling the behaviour of the patients.
Response cost has been shown to work in suppressing a variety of behaviours such as smoking, overeating or stuttering with a diverse group of clinical populations ranging from sociopaths to school children. Behaviours that have been suppressed using this technique often do not recover when the punishment contingency is withdrawn. Also, undesirable side effects that are usually seen with punishment are not typically found when using the response cost technique.
== "Third generation" ==
Since the 1980s, a series of new behavioral therapies have been developed. These have been later labeled by Steven C. Hayes as "the third-generation" of behavioural therapy. Under this classification, the first generation of behavioural therapy is that independently developed in the 1950s by Joseph Wolpe, Ogden Lindsley and Hans Eysenck, while the second generation is the cognitive therapy developed by Aaron Beck in the 1970s.
Other authors object to the term "third generation" or "third wave" and incorporate many of the "third wave" therapeutic techniques under the general umbrella term of modern cognitive behavioural therapies.
This "third wave" of behavioural therapy has sometimes been called clinical behaviour analysis because it has been claimed that it represents a movement away from cognitivism and back toward radical behaviourism and other forms of behaviourism, in particular functional analysis and behavioural models of verbal behaviour. This area includes acceptance and commitment therapy (ACT), cognitive behavioural analysis system of psychotherapy (CBASP) (McCullough, 2000), behavioural activation (BA), dialectical behaviour therapy, functional analytic psychotherapy (FAP), integrative behavioural couples therapy, metacognitive therapy and metacognitive training. These approaches are squarely within the applied behaviour analysis tradition of behaviour therapy.
Acceptance and Commitment Therapy (ACT) may be the most well-researched of all the third-generation behaviour therapy models. It is based on relational frame theory. As of March 2022, there are over 900 randomized trials of Acceptance and Commitment Therapy and 60 mediational studies of the ACT literature. ACT has been included in over 275 meta-analyses and systematic reviews. As the result of multiple randomized trials of ACT by the World Health Organization, WHO now distribute ACT-based self-help for "anyone who experiences stress, wherever they live, and whatever their circumstances." As of March 2022, a number of different organizations have stated that Acceptance and Commitment Therapy is empirically supported in certain areas or as a whole according to their standards. These include: American Psychological Association, Society of Clinical Psychology (Div. 12), The World Health Organization, The United Kingdom National Institute for Health and Care Excellence (NICE), Australian Psychological Society, Netherlands Institute of Psychologists: Sections of Neuropsychology and Rehabilitation, Sweden Association of Physiotherapists, SAMHSA's National Registry of Evidence-based Programs and Practices, California Evidence-Based Clearinghouse for Child Welfare, and the U.S. Veterans Affairs/DoD.
Functional analytic psychotherapy is based on a functional analysis of the therapeutic relationship. It places a greater emphasis on the therapeutic context and returns to the use of in-session reinforcement. In general, 40 years of research supports the idea that in-session reinforcement of behaviour can lead to behavioural change.
Behavioural activation emerged from a component analysis of cognitive behaviour therapy. Researchers hope to prove that it can be a complete treatment in its own right. Behavioural activation is based on a matching model of reinforcement. A recent review of the research supports the notion that the use of behavioural activation is clinically important for the treatment of depression.
Integrative behavioural couples therapy developed from dissatisfaction with traditional behavioural couples therapy. Integrative behavioural couples therapy looks to Skinner (1966) for the difference between contingency-shaped and rule-governed behaviour. It couples this analysis with a thorough functional assessment of the couple's relationship. Recent efforts have used radical behavioural concepts to interpret a number of clinical phenomena including forgiveness.
A review study published in 2008, concluded that at the time, third-generation behavioral psychotherapies did not meet the criteria for empirically supported treatments.
== Organisations ==
Many organisations exist for behaviour therapists around the world. In the United States, the American Psychological Association's Division 25 is the division for behaviour analysis. The Association for Contextual Behavioral Science is another professional organisation. ACBS is home to many clinicians with specific interest in third generation behaviour therapy. Doctoral-level behaviour analysts who are psychologists belong to American Psychological Association's Division 25 – behaviour analysis. APA offers a diploma in behavioural psychology.
The Association for Behavioral and Cognitive Therapies (formerly the Association for the Advancement of Behavior Therapy) is for those with a more cognitive orientation. The ABCT also has an interest group in behaviour analysis, which focuses on clinical behaviour analysis. In addition, the Association for Behavioral and Cognitive Therapies has a special interest group on addictions.
== Characteristics ==
By nature, behavioural therapies are empirical (data-driven), contextual (focused on the environment and context), functional (interested in the effect or consequence a behaviour ultimately has), probabilistic (viewing behaviour as statistically predictable), monistic (rejecting mind–body dualism and treating the person as a unit), and relational (analysing bidirectional interactions).
Behavioural therapy develops and provides behavioural intervention strategies and programs for clients, and training for their caregivers, to facilitate successful lives in various communities.
== Training ==
Recent efforts in behavioural psychotherapy have focused on the supervision process. A key point of behavioural models of supervision is that the supervisory process parallels the behavioural psychotherapy provided.
== See also ==
== References ==
== Sources ==
Bellack, A.S.; Hersen, M. (1985). Dictionary of Behavior Therapy Techniques. General Psychology Series. Pergamon Press. ISBN 978-0-08-030168-6.
Boyle, S.W. (2006). "Knowledge and Skills for Intervention". Direct Practice in Social Work (1st ed.). Pearson/Allyn & Bacon. ISBN 978-0-205-40162-8.
O'Leary, K.D.; Wilson, G.T. (1975). Behavior Therapy: Application and Outcome. Prentice-Hall series on social learning theory. Prentice-Hall. ISBN 978-0-13-073890-5.
Rimm, David; Masters, John C. (1974). Behavior therapy: techniques and empirical findings. New York: Academic Press. ISBN 0-12-588850-3. OCLC 793562.
Schaefer, H.H.; Martin, P.L. (1969). Behavioral Therapy. Blakiston Division, McGraw-Hill.
== External links ==
| Wikipedia/Behavioral_psychotherapy |
Transference-focused psychotherapy (TFP) is a highly structured, twice-weekly modified psychodynamic treatment based on Otto F. Kernberg's object relations model of borderline personality disorder (BPD). It views the individual with borderline personality organization (BPO) as holding unreconciled and contradictory internalized representations of self and significant others that are affectively charged. The defense against these contradictory internalized object relations leads to disturbed relationships with others and with oneself. The distorted perceptions of self, others, and associated affects are the focus of treatment as they emerge in the relationship with the therapist (transference). The treatment focuses on the integration of split-off parts of self and object representations, and the consistent interpretation of these distorted perceptions is considered the mechanism of change.
TFP has been validated as an efficacious treatment for BPD, but too few studies have been conducted to allow firm conclusions about its value. TFP is one of a number of treatments that may be useful in the treatment of BPD; however, in a study which compared TFP, dialectical behavior therapy, and modified psychodynamic supportive psychotherapy, only TFP was shown to change how patients think about themselves in relationships.
== Borderline personality disorder ==
TFP is a treatment for borderline personality disorder (BPD). Patients with BPD are often characterized by intense affect, stormy relationships, and impulsive behaviors. Due to their high reactivity to environmental stimuli, patients with BPD often experience dramatic and short-lived shifts in their mood, alternating between experiences of euphoria, depression, anxiety, and nervousness. Patients with BPD often experience intolerable feelings of emptiness that they attempt to fill with impulsive and self-damaging behaviors, such as substance abuse, risky sexual behavior, uncontrolled spending, or binge eating. Furthermore, patients with BPD often exhibit recurrent suicidal behaviors, gestures, or threats. Under intense stress patients with BPD may exhibit transient dissociative or paranoid symptoms.
== Theoretical model of borderline personality ==
According to the object relations model, in normal psychological development, mental templates of oneself in relation to others—or object representations—become increasingly more differentiated and integrated. The infant's experience, initially organized around moments of pain (e.g., "I am uncomfortable and in need of someone to care for me") and pleasure (e.g., "I am now being soothed by someone and feel loved"), become increasingly integrated and differentiated mental templates of oneself in relation to others. These increasingly mature representations allow for the realistic blending of good and bad such that positive and negative qualities can be integrated into a complex, multifaceted representation of an individual (e.g., "Although she is not caring for me at this moment, I know she loves me and will do so in the future"). Such integrated representations allow for the tolerance of ambivalence, difference, and contradiction in oneself and others.
For Kernberg, the degree of differentiation and integration of these representations of self and other, along with their affective valence, constitutes personality organization. In a normal personality organization the individual has an integrated model of self and others, allowing for stability and consistency within one's identity and in the perception of others, as well as a capacity for becoming intimate with others while maintaining one's sense of self. For example, such an individual would be able to tolerate hateful feelings in the context of a loving relationship without internal conflict or a sense of discontinuity in the perception of the other. In contrast, in borderline personality organization (BPO), the lack of integration in representations of self and other leads to the use of primitive defense mechanisms (e.g., splitting, projective identification, and dissociation), identity diffusion (i.e., an inconsistent view of self and others), and unstable reality testing (i.e., inconsistent differentiation between internal and external experience). Under conditions of high stress, individuals with BPD may fail to appreciate the "whole" of the situation and interpret events in catastrophic and intensely personal ways. They fail to discriminate the intentions and motivations of the other and perceive only threat or rejection. As such, thoughts and feelings about self and others are split into dichotomous experiences of good or bad, black or white, all or nothing.
== Goals ==
The major goals of TFP are to reduce suicidality and self-injurious behaviors and facilitate better behavioral control, increased affect regulation, more gratifying relationships, and the ability to pursue life goals. This is believed to be accomplished through the development of integrated representations of self and others, the modification of primitive defensive operations, and the resolution of identity diffusion that perpetuate the fragmentation of the patient's internal representational world.
== Treatment procedure ==
=== Contract ===
Treatment begins with the drafting of a treatment contract, which comprises general guidelines that apply to all clients as well as items specific to the individual client's problem areas that could threaten progress in therapy. The contract also specifies the therapist's responsibilities. Both client and therapist must sign the contract before therapy begins.
=== Therapeutic process ===
TFP consists of the following three steps:
Diagnostic description of a particular internalized object relation in the transference
Diagnostic elaboration of the corresponding self and object representation in the transference, and of their enactment in the transference or countertransference
Integration of the split-off self representations, leading to an integrated sense of self and others which resolves identity diffusion
During the first year of treatment, TFP focuses on a hierarchy of issues:
Containment of suicidal and self-destructive behaviors
Behaviors that threaten to disrupt or destroy the treatment
Identification and recapitulation of dominant object relational patterns (from unintegrated and undifferentiated affects and representations of self and others to a more coherent whole)
In this treatment, the analysis of the transference is the primary vehicle for the transformation of primitive (e.g., split, polarized) to advanced (e.g., complex, differentiated and integrated) object relations. Thus, in contrast to therapies that focus on the short-term treatment of symptoms, TFP has the ambitious goal of not just changing symptoms, but changing the personality organization, which is the context of the symptoms. To do this, the client's affectively charged internal representations of previous relationships are consistently interpreted as the therapist becomes aware of them in the therapeutic relationship, that is, the transference. Techniques of clarification, confrontation, and interpretation are used within the evolving transference relationship between the patient and the therapist.
In the psychotherapeutic relationship, self and object representations are activated in the transference. In the course of the therapy, projection and identification are operating, i.e., devalued self-representations are projected onto the therapist whilst the client identifies with a critical object representation. These processes are usually connected to affective experiences such as anger or fear.
The information that emerges within the transference provides direct access to the individual's internal world for two reasons. First, it is observable by both therapist and patient simultaneously so that inconsistent perceptions of the shared reality can be discussed immediately. Second, the perceptions of shared reality are accompanied by affect whereas the discussion of historical material can have an intellectualized quality and be thus less informative.
TFP emphasizes the role of interpretation within psychotherapy sessions. As the split-off representations of self and other get played out in the course of the treatment, the therapist helps the patient to understand the reasons (the fears or the anxieties) that support the continued separation of these fragmented senses of self and other. This understanding is accompanied by the experience of strong affects within the therapeutic relationship. The integration of the split and polarized concepts of self and others leads to a more complex, differentiated, and realistic sense of self and others that allows for better modulation of affects and in turn clearer thinking. Therefore, as split-off representations become integrated, patients tend to experience an increased coherence of identity, relationships that are balanced and constant over time and therefore not at risk of being overwhelmed by aggressive affect, a greater capacity for intimacy, a reduction in self-destructive behaviors, and general improvement in functioning.
=== Mechanisms of change ===
In TFP, hypothesized mechanisms of change derive from Kernberg's developmentally based theory of Borderline Personality Organization, conceptualized in terms of unintegrated and undifferentiated affects and representations of self and other. Partial representations of self and other are paired and linked by an affect in mental units called object relation dyads. These dyads are elements of psychological structure. In borderline pathology, the lack of integration of the internal object relations dyads corresponds to a 'split' psychological structure in which totally negative representations are split off/segregated from idealized positive representations of self and other (seeing people as all good or all bad). The putative global mechanism of change in patients treated with TFP is the integration of these polarized affect states and representations of self and other into a more coherent whole.
== Empirical support ==
=== Preliminary research ===
In early research studying the efficacy of a year-long TFP, suicide attempts were significantly reduced during treatment. Additionally, the physical condition of the patients was significantly improved. When the researchers compared the treatment year to the year prior, it was found that there was a significant reduction in psychiatric hospitalizations and days spent as inpatients in psychiatric hospitals. The dropout rate for the 1-year study was 19.1%, which the authors state as comparable to dropout rates in previous studies assessing the treatment of borderline individuals, including dialectical behavior therapy (DBT) research.
=== TFP vs. treatment-as-usual (TAU) ===
Results indicated that the TFP group experienced significant decreases in ER visits and hospitalizations during the treatment year, as well as significant increases in global functioning when compared to TAU.
=== TFP vs. treatment by community experts ===
A randomized clinical trial compared the outcomes of TFP or treatment by community experts for 104 borderline patients. The dropout rate was significantly higher in the community psychotherapy condition; however, the dropout rate for TFP was 38.5%, which the authors acknowledge as somewhat higher than dropout rates associated with DBT and schema-focused therapy (SFT). The TFP group experienced significant improvement in personality organization, psychosocial functioning, and number of suicide attempts. In this study neither group was associated with a significant change in self-harming behaviors.
=== TFP vs. DBT vs. supportive treatment ===
Prior to treatment and at four-month intervals during treatment, patients were assessed in the following domains: suicidal behavior, aggression, impulsivity, anxiety, depression, and social adjustment. Results indicated that patients in all three treatments showed improvement in multiple domains at the one-year mark. Only DBT and TFP were significantly associated with improvement in suicidal behaviors; however, TFP outperformed DBT in improving anger and impulsivity. Overall, participation in TFP predicted significant improvement in 10 of the 12 variables across the 6 domains, DBT in 5 of the 12, and supportive treatment in 6 of the 12 variables.
=== TFP vs. schema-focused therapy ===
Significant improvements were found in both treatment groups on DSM-IV BPD criteria and on all four of the study's outcome measures (borderline psychopathology, general psychopathology, quality of life, and TFP/SFT personality concepts) after 1, 2, and 3 years. Schema-focused therapy (SFT, or schema therapy as it is now commonly known) was associated with a significantly higher retention rate. After three years of treatment, schema therapy patients showed greater increases in quality of life, and significantly more schema therapy patients recovered or showed clinical improvement on the BPD Severity Index, fourth version. However, the TFP arm contained more suicidal patients and showed less adherence, casting doubt on a direct comparison between treatments. The schema therapy group improved significantly more than the TFP group with respect to relationships, impulsivity, and parasuicidal/suicidal behaviour, although many of the alliance ratings were made after dropout. It was concluded that schema therapy was significantly more effective than TFP on all outcome measures assessed during the study. A follow-up of this study concluded that both clients and therapists rated the therapeutic alliance higher in schema therapy than in TFP.
== References == | Wikipedia/Transference_focused_psychotherapy |
A clinical formulation, also known as case formulation and problem formulation, is a theoretically-based explanation or conceptualisation of the information obtained from a clinical assessment. It offers a hypothesis about the cause and nature of the presenting problems and is considered an adjunct or alternative approach to the more categorical approach of psychiatric diagnosis. In clinical practice, formulations are used to communicate a hypothesis and provide framework for developing the most suitable treatment approach. It is most commonly used by clinical psychologists and is deemed to be a core component of that profession. Mental health nurses, social workers, and some psychiatrists may also use formulations.
== Types of formulation ==
Different psychological schools or models utilize clinical formulations, including cognitive behavioral therapy (CBT) and related therapies: systemic therapy, psychodynamic therapy, and applied behavior analysis. The structure and content of a clinical formulation is determined by the psychological model. Most systems of formulation contain the following broad categories of information: symptoms and problems; precipitating stressors or events; predisposing life events or stressors; and an explanatory mechanism that links the preceding categories together and offers a description of the precipitants and maintaining influences of the person's problems.
Behavioral case formulations used in applied behavior analysis and behavior therapy are built on a rank list of problem behaviors, from which a functional analysis is conducted, sometimes based on relational frame theory. Such functional analysis is also used in third-generation behavior therapy or clinical behavior analysis such as acceptance and commitment therapy and functional analytic psychotherapy. Functional analysis looks at setting events (ecological variables, history effects, and motivating operations), antecedents, behavior chains, the problem behavior, and the consequences, short- and long-term, for the behavior.
A model of formulation that is more specific to CBT is described by Jacqueline Persons. This has seven components: problem list, core beliefs, precipitants and activating situations, origins, working hypothesis, treatment plan, and predicted obstacles to treatment.
A psychodynamic formulation would consist of a summarizing statement, a description of nondynamic factors, description of core psychodynamics using a specific model (such as ego psychology, object relations or self psychology), and a prognostic assessment which identifies the potential areas of resistance in therapy.
One school of psychotherapy which relies heavily on the formulation is cognitive analytic therapy (CAT). CAT is a fixed-term therapy, typically of around 16 sessions. At around session four, a formal written reformulation letter is offered to the patient which forms the basis for the rest of the treatment. This is usually followed by a diagrammatic reformulation to amplify and reinforce the letter.
Many psychologists use an integrative psychotherapy approach to formulation. This is to take advantage of the benefits of resources from each model the psychologist is trained in, according to the patient's needs.
== Critical evaluation of formulations ==
The quality of specific clinical formulations, and the quality of the general theoretical models used in those formulations, can be evaluated with criteria such as:
Clarity and parsimony: Is the model understandable and internally consistent, and are key concepts discrete, specific, and non-redundant?
Precision and testability: Does the model produce testable hypotheses, with operationally defined and measurable concepts?
Empirical adequacy: Are the posited mechanisms within the model empirically validated?
Comprehensiveness and generalizability: Is the model holistic enough to apply across a range of clinical phenomena?
Utility and applied value: Does it facilitate shared meaning-making between clinician and client, and are interventions based on the model shown to be effective?
Formulations can vary in temporal scope from case-based to episode-based or moment-based, and formulations may evolve during the course of treatment. Therefore, ongoing monitoring, testing, and assessment during treatment are necessary: monitoring can take the form of session-by-session progress reviews using quantitative measures, and formulations can be modified if an intervention is not as effective as hoped.
== History ==
Psychologist George Kelly, who developed personal construct theory in the 1950s, noted his complaint against traditional diagnosis in his book The Psychology of Personal Constructs (1955): "Much of the reform proposed by the psychology of personal constructs is directed towards the tendency for psychologists to impose preemptive constructions upon human behaviour. Diagnosis is all too frequently an attempt to cram a whole live struggling client into a nosological category.": 154 In place of nosological categories, Kelly used the word "formulation" and mentioned two types of formulation:: 337 a first stage of structuralization, in which the clinician tentatively organizes clinical case information "in terms of dimensions rather than in terms of disease entities": 192 while focusing on "the more important ways in which the client can change, and not merely ways in which the psychologist can distinguish him from other persons",: 154 and a second stage of construction, in which the clinician seeks a kind of negotiated integration of the clinician's organization of the case information with the client's personal meanings.
Psychologists Hans Eysenck, Monte B. Shapiro, Vic Meyer, and Ira Turkat were also among the early developers of systematic individualized alternatives to diagnosis.: 4 Meyer has been credited with providing perhaps the first training course of behaviour therapy based on a case formulation model, at the Middlesex Hospital Medical School in London in 1970.: 13 Meyer's original choice of words for clinical formulation were "behavioural formulation" or "problem formulation".: 14
== See also ==
== References ==
== Further reading == | Wikipedia/Clinical_formulation |
Active vibration control is the active application of force in an equal and opposite fashion to the forces imposed by external vibration. With this application, a precision industrial process can be maintained on a platform essentially vibration-free.
Many precision industrial processes cannot take place if the machinery is being affected by vibration. For example, the production of semiconductor wafers requires that the machines used for the photolithography steps be used in an essentially vibration-free environment or the sub-micrometre features will be blurred. Active vibration control is now also commercially available for reducing vibration in helicopters, offering better comfort with less weight than traditional passive technologies.
In the past, passive techniques were used. These include traditional vibration dampers, shock absorbers, and base isolation.
The typical active vibration control system uses several components:
A massive platform suspended by several active drivers (that may use voice coils, hydraulics, pneumatics, piezo-electric or other techniques)
Three accelerometers that measure acceleration in the three degrees of freedom
An electronic amplifier system that amplifies and inverts the signals from the accelerometers. A PID controller can be used to get better performance than a simple inverting amplifier.
For very large systems, pneumatic or hydraulic components that provide the high drive power required.
If the vibration is periodic, then the control system may adapt to the ongoing vibration, thereby providing better cancellation than would have been provided simply by reacting to each new acceleration without referring to past accelerations.
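The feedback idea behind these components can be sketched in a few lines of code. The following is a minimal, illustrative simulation, not any vendor's controller: a single-degree-of-freedom platform is disturbed by a 20 Hz sinusoidal force, the measured displacement error feeds a PID controller, and the controller commands an opposing actuator force. The mass, gains, and frequencies are made-up values chosen only to show the effect.

```python
# Minimal illustrative sketch (not any vendor's controller): a one-degree-of-
# freedom platform disturbed by a 20 Hz sinusoidal force.  A PID controller
# acts on the measured displacement error and drives an opposing actuator force.
import numpy as np

m, c, k = 100.0, 50.0, 1.0e5        # platform mass (kg), damping, stiffness (made-up)
kp, ki, kd = 2.0e5, 1.0e5, 5.0e3    # PID gains (illustrative values)
dt, steps = 1e-4, 200_000           # time step (s) and number of steps

for active in (False, True):
    x = v = integral = prev_err = 0.0
    peak = 0.0
    for n in range(steps):
        t = n * dt
        disturbance = 50.0 * np.sin(2 * np.pi * 20.0 * t)   # external vibration (N)
        err = -x                                            # target: zero displacement
        integral += err * dt
        derivative = (err - prev_err) / dt
        prev_err = err
        u = (kp * err + ki * integral + kd * derivative) if active else 0.0
        a = (disturbance + u - c * v - k * x) / m           # Newton's second law
        v += a * dt                                         # semi-implicit Euler step
        x += v * dt
        if t > 1.0:                                         # skip the initial transient
            peak = max(peak, abs(x))
    print("active" if active else "passive", "peak displacement (m):", peak)
```

Running the sketch prints the steady-state peak displacement with and without the controller; the active case comes out much smaller, which is the essence of active vibration control.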
Active vibration control has been successfully implemented for vibration attenuation of beam, plate and shell structures by numerous researchers.
For effective active vibration control, the structure should be smart enough to sense external disturbances and react accordingly. In order to develop an active structure (also known as a smart structure), smart materials must be integrated or embedded with the structure. The smart structure involves sensors (strain, acceleration, velocity, force, etc.), actuators (force, inertial, strain, etc.) and a control algorithm (feedback or feed-forward). A number of smart materials have been investigated and fabricated over the years; some of them are shape memory alloys, piezoelectric materials, optical fibers, electro-rheological fluids, and magneto-strictive materials.
== See also ==
Active noise control
Active vibration isolation
Magnetorheological fluid
Noise-cancelling headphones
== References == | Wikipedia/Active_vibration_control |
Active sound design is an acoustic technology concept used in automotive vehicles to alter or enhance the sound inside and outside of the vehicle. Active sound design (ASD) often uses active noise control and acoustic enhancement techniques to achieve a synthesized vehicle sound.
The typical implementations of ASD vary, from amplifying or reducing an existing sound to creating an entirely new sound. Each vehicle manufacturer may use different software or hardware techniques in ASD, as there is no one unified model. ASD exists under multiple names, like Acura’s Active Sound Control, Kia’s Active Sound System, Volkswagen’s Soundaktor, and QNX’s Acoustic Management System.
The first instance of in-vehicle active noise canceling (ANC) was developed by Lotus and featured in the 1992 Nissan Bluebird. In 2009, Lotus partnered with Harman International for an improved ANC system that eliminated noise from the road, tires, and vehicle chassis. With recent demand for economical and cleaner combustion engine vehicles, engine systems have become more efficient but less audibly appealing to consumers. Electric and fuel cell vehicles operate with high-pitched tones, lacking the recognizable sound of a typical combustion engine. With ASD, both combustion and electric vehicle manufacturers aim to improve the reception of these vehicles by increasing the quality of interior and exterior vehicle sound.
== Components ==
Active noise cancelling (ANC) is a software process that uses existing in-vehicle infotainment hardware to eliminate undesirable noise within the interior of a vehicle. This elimination technique is known as harmonic order reduction, where unwanted audio signals are identified by sensors and filtered out of the overall interior vehicle sound. Manufacturers may use ANC within a vehicle to improve the effects of ASD.
Engine sound enhancement (ESE) is a technology that allows manufacturers to enhance engine sounds with synthetic noise composed from live engine data, including components such as engine revolutions per minute (RPM) and engine torque. This synthetically composed sound is relayed through interior or exterior vehicle speakers. In ASD, manufacturers may use ESE to enhance perceived engine power without the mechanical alterations that other techniques may require.
== Motivations for ASD ==
In the face of environmental restrictions and demand for fuel economy in the automotive industry, smaller engine subsystems have made interior vehicle noise less pleasant in combustion engine vehicles. Electric and hybrid vehicles lack a distinct engine sound altogether, instead featuring a quieter high frequency noise that causes annoyance for vehicle passengers and poses a threat to pedestrians who may not recognize an oncoming vehicle. These developments have sparked consumer demand for a more desirable interior sound, as well as a brand identity in both the interior and exterior of the vehicle that is recognizable and mitigates safety risks.
Traditional iterations of sound control in vehicles included tedious mechanical alterations such as balance shafts and sound-deadening material that increased manufacturing time and cost. With the renewal of sound design in the form of ASD, manufacturing costs and complications are reduced. Instead of integrating the technology into the engine structure, the sound can be fixed at a later stage of development and optimized to the vehicle.
== Variations ==
Active sound design (ASD) takes inputs from engine and vehicle speed, pedal input, exhaust noise, and vehicle vibrations to change the interior and exterior noise of the vehicle. These input variables are filtered to produce desired outputs. Variations of ASD select one or multiple of these variables to implement a new sound. These variations include:
Passive sound generation : signals taken directly from the engine output and relayed in the interior of the vehicle.
Passive and active sound generation: amplifying the exhaust input and creating a new output to enhance the vehicle's exterior noise.
Active mounts : taking inputs from the exterior vehicle and feeding vibration outputs to the vehicle interior.
Synthetic sound : generating a new sound through the interior stereo audio.
== Application and Theory ==
In a typical combustion engine, cylinders are responsible for burning gasoline and producing energy to power the vehicle. These cylinders fire periodically and can be reduced to a series of sinusoidal waves (by conventions of the Fourier transform). These sine waves are dictated by the rotations per minute (RPM) of the engine crankshaft and the firing order, or arrangement, of the cylinders. To enrich engine sound in the passenger cabin, the harmonic orders of engine sound missing from the interior sound can be amplified through Digital Signal Processing (DSP) techniques.
To capture the missing orders, the engine load condition is identified by acceleration sensors on the engine of older vehicles, or by way of the Controller Area Network bus (CAN-bus) in modern cars. Using dynamic band-pass filters (a device that relays specified frequencies), the missing orders are passed. To minimize artefacts (disruptive clicks) during the transfer, the signal is passed through cascading high- and low-pass filters. With an adaptation from the engine’s RPM signal (captured by an inductive voltage transformer), the orders are amplified through the vehicle firewall (body separating the engine from the interior) and interior sound system.
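A rough sketch of the order-tracking idea described above follows. It is not any manufacturer's implementation; it simply assumes a constant RPM, synthesizes a toy engine signal, isolates one harmonic order with a band-pass filter centred on that order's frequency, and adds an amplified copy back to the cabin signal. The sample rate, order number, bandwidth, and gain are illustrative assumptions.

```python
# Illustrative sketch of engine-order enhancement (not a manufacturer's system):
# isolate one harmonic order of a toy engine signal with a band-pass filter
# centred on the order frequency derived from RPM, then add an amplified copy.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 8000                          # audio sample rate in Hz (assumed)
rpm = 3000.0                       # assume a constant engine speed for simplicity
order = 2.0                        # target engine order (e.g. 2nd order of a 4-cylinder)
f0 = order * rpm / 60.0            # order frequency in Hz (here 100 Hz)

t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
engine = (0.2 * np.sin(2 * np.pi * (rpm / 60.0) * t)       # 1st order
          + 0.1 * np.sin(2 * np.pi * f0 * t)               # the order to be enhanced
          + 0.05 * rng.standard_normal(t.size))            # broadband noise

# Band-pass of +/- 10 Hz around the tracked order frequency.
sos = butter(4, [f0 - 10.0, f0 + 10.0], btype="bandpass", fs=fs, output="sos")
order_component = sosfilt(sos, engine)

gain = 4.0                                   # enhancement gain for the chosen order
cabin_audio = engine + gain * order_component
print("RMS before:", np.sqrt(np.mean(engine ** 2)),
      " after:", np.sqrt(np.mean(cabin_audio ** 2)))
```

In a real system the centre frequency of the filter would be updated continuously from the RPM signal rather than held fixed as in this sketch.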
=== Subharmonics and Sound Signatures ===
In electric and fuel cell vehicles, virtual (synthetic) sounds are often used to accommodate for the absence of a combustion engine sound. To create the optimal sound design in an electric vehicle (EV), manufacturers must acknowledge the psychoacoustic theories behind a sound preference. In a study of diesel engine sound quality, experimental analysis compared a subjective rating of sound quality components with J.D. Power’s APEAL study.
Based on studies of user preference in vehicle interiors, manufacturers aim to reduce loudness increment and high-frequency sound for a more pleasant driving experience. In modern EVs, the stock vehicle noise is masked with an RPM-dependent low-pass-filtered sound. This low-pass-filtered sound is a lower-frequency synthetic sound that is based on the EV’s actual engine parameters, like speed and load.
Alt and Jochum’s simple-integer ratio technique of harmonic order is applied to this virtual noise. Subharmonics (lower-frequency copies) are then isolated from the original high-frequency components of the EV. In an evaluation of several generated sound stimuli, individuals subjectively identified that these subharmonics were preferable for the interior sound of an EV.
Combustion engine vehicles respond dynamically to different driving conditions. For manufacturers to synthesize a brand sound in an EV, they must consider a sound signature that encompasses a dynamic driving sound. A base sound signature is defined by a schematic of sub-signatures and micro-signatures that can be expanded to increase the dynamic quality of the sound. These sub-signatures can be assigned to parameters (load, speed) or maneuvers that relay particular sound samples. By synthesizing micro-signatures in EV drivetrains, the resulting sound is more vivid and emotional than the base frequencies of the EV.
== Challenges ==
=== Consumer Response ===
For the average consumer, the advent of ASD goes largely unnoticed. With recent BMW models, however, some consumers have felt cheated by the synthetic engine sound. Numerous instructional videos online give step-by-step guidance on disabling ASD in BMW's vehicles, and articles have addressed the false-sounding synthetic noise.
=== Brand Identity for electric and fuel cell vehicles ===
Typical combustion engine vehicles provide sound feedback during operation that represents the brand identity of the car. Because of the nature of the single gear system and arrangement of power converters in electric and fuel cell vehicles, the frequency of sound changes minimally over a period of acceleration and is not well matched to the actual state of the vehicle speed and load. Additionally, the lack of engine noise leaves a spectral gap (empty space) between wind and road noise and amplifies individual vehicle components, reducing the sound quality inside the cabin.
To create a brand identity, manufacturers must choose between reproducing a typical combustion engine sound and creating an entirely new sound concept.
=== Reproduction of the combustion engine process ===
Current implementations of active sound design in combustion engine vehicles may not accurately reproduce the micro structure variations (variations between cylinder firings) of the combustion process. As the signal waves originate from multiple periodically firing cylinders, identifying and replicating the harmonic engine orders is an inefficient process. Additionally, this approach assumes uniformity in the combustion engine. The force provided from the cylinders is periodic and may vary from one cycle to another, making it impossible for the natural component of engine noise to be replicated.
== Example Applications ==
Several automotive companies implement their own branded versions of ASD technology.
In vehicle models such as the BMW M5, an engine management system enhances the sounds provided by speed and engine power by filtering through the audio data it receives. Drivers can select a driving setting that will modify the interior acoustics as well as the actual performance of the vehicle.
Similarly, the Kia Stinger features five drive modes (eco, comfort, smart, sport, and custom) that adjust the loudness and aggressiveness of the sound inside the vehicle cabin. Paired with a turbocharged engine, this vehicle is engineered to adapt to user preferences. The turbocharger increases efficiency and forces additional compressed air into the combustion engine, creating a consistent and clean sound output.
Porsche’s ASD implementation combines a Helmholtz resonator and sound symposer to transport engine sounds directly into the vehicle cabin. The Helmholtz universal resonator restricts engine sound through an electronically controlled valve that oscillates with air, much like the sound that is emitted when one blows over the top of a bottle. The sound symposer consists of a line of plastic tubing with a membrane and flap valve that behave much like a human ear. When the Sport button is pressed, the resonator and sound symposer open fully to amplify the engine sound in the vehicle cabin.
== See also ==
Active noise control
== References ==
== External links ==
Study of High Voltage Inductive Voltage Transformer for Transients and Ferroresonance
Fundamentals of Harmonics | Wikipedia/Active_sound_design |
In mathematical analysis, the Dirac delta function (or δ distribution), also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as
$$\delta(x) = \begin{cases} 0, & x \neq 0 \\ \infty, & x = 0 \end{cases}$$
such that
$$\int_{-\infty}^{\infty} \delta(x)\,dx = 1.$$
Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions.
The delta function was introduced by physicist Paul Dirac, and has since been applied routinely in physics and engineering to model point masses and instantaneous impulses. It is called the delta function because it is a continuous analogue of the Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1. The mathematical rigor of the delta function was disputed until Laurent Schwartz developed the theory of distributions, where it is defined as a linear form acting on functions.
== Motivation and overview ==
The graph of the Dirac delta is usually thought of as following the whole x-axis and the positive y-axis.: 174 The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point. For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a Dirac delta. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the ball, by only considering the total impulse of the collision, without a detailed model of all of the elastic energy transfer at subatomic levels (for instance).
To be specific, suppose that a billiard ball is at rest. At time $t = 0$ it is struck by another ball, imparting it with a momentum P, with units kg⋅m⋅s−1. The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is P δ(t); the units of δ(t) are s−1.
To model this situation more rigorously, suppose that the force instead is uniformly distributed over a small time interval $\Delta t = [0, T]$. That is,
$$F_{\Delta t}(t) = \begin{cases} P/\Delta t & 0 < t \leq T, \\ 0 & \text{otherwise}. \end{cases}$$
Then the momentum at any time t is found by integration:
$$p(t) = \int_0^t F_{\Delta t}(\tau)\,d\tau = \begin{cases} P & t \geq T \\ P\,t/\Delta t & 0 \leq t \leq T \\ 0 & \text{otherwise.} \end{cases}$$
Now, the model situation of an instantaneous transfer of momentum requires taking the limit as Δt → 0, giving a result everywhere except at 0:
$$p(t) = \begin{cases} P & t > 0 \\ 0 & t < 0. \end{cases}$$
Here the functions $F_{\Delta t}$ are thought of as useful approximations to the idea of instantaneous transfer of momentum.
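A small numerical check of this limiting construction (the value of P and the grid below are arbitrary choices for illustration): for every pulse width the total impulse equals P, while the resulting momentum p(t) approaches a step of height P as the pulse narrows.

```python
# Numerical check: every pulse F_dt integrates to P, and the momentum p(t)
# approaches a step of height P as the pulse width shrinks.
import numpy as np

P = 2.0                                   # total impulse, arbitrary value
t = np.linspace(-1.0, 1.0, 200_001)
h = t[1] - t[0]                           # grid spacing

for width in (0.5, 0.1, 0.01):            # pulse durations Delta t
    F = np.where((t > 0) & (t <= width), P / width, 0.0)
    impulse = F.sum() * h                 # should equal P regardless of width
    p = np.cumsum(F) * h                  # p(t): running integral of the force
    print(f"width = {width:5.2f}: impulse = {impulse:.3f}, p(0.5) = {p[t >= 0.5][0]:.3f}")
```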
The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of pointwise convergence) $\lim_{\Delta t \to 0^+} F_{\Delta t}$ is zero everywhere but a single point, where it is infinite. To make proper sense of the Dirac delta, we should instead insist that the property
$$\int_{-\infty}^{\infty} F_{\Delta t}(t)\,dt = P,$$
which holds for all $\Delta t > 0$, should continue to hold in the limit. So, in the equation $F(t) = P\,\delta(t) = \lim_{\Delta t \to 0} F_{\Delta t}(t)$, it is understood that the limit is always taken outside the integral.
In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.
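For instance, the following short numerical illustration (a sketch, using Gaussians of shrinking width as the approximating sequence and an arbitrary smooth test function) shows that integrating a test function against such a sequence approaches the function's value at the origin, which is exactly the behaviour the delta function idealizes.

```python
# Numerical illustration: integrating a test function against narrower and
# narrower Gaussians (a nascent delta) approaches the function's value at 0.
import numpy as np

def nascent_delta(x, eps):
    """Gaussian of standard deviation eps, normalised to unit area."""
    return np.exp(-x ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))

phi = np.cos                           # smooth test function, phi(0) = 1
x = np.linspace(-5, 5, 400_001)
h = x[1] - x[0]

for eps in (1.0, 0.1, 0.01):
    value = np.sum(phi(x) * nascent_delta(x, eps)) * h
    print(f"eps = {eps:5.2f}: integral = {value:.6f}   (phi(0) = {phi(0.0):.6f})")
```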
The Dirac delta is not truly a function, at least not a usual one with domain and range in real numbers. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right requires measure theory or the theory of distributions.
== History ==
In physics, the Dirac delta function was popularized by Paul Dirac in his book The Principles of Quantum Mechanics, published in 1930. However, Oliver Heaviside, 35 years before Dirac, described an impulsive function called the Heaviside step function for purposes and with properties analogous to Dirac's work. Even earlier, several mathematicians and physicists used limits of sharply peaked functions in derivations.
An infinitesimal formula for an infinitely tall, unit impulse delta function (infinitesimal version of Cauchy distribution) explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation as did Gustav Kirchhoff somewhat later. Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source. The Dirac delta function as such was introduced by Paul Dirac in his 1927 paper The Physical Interpretation of the Quantum Dynamics. He called it the "delta function" since he used it as a continuum analogue of the discrete Kronecker delta.
Mathematicians refer to the same concept as a distribution rather than a function.: 33
Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form:
$$f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\alpha\, f(\alpha) \int_{-\infty}^{\infty} dp\, \cos(px - p\alpha),$$
which is tantamount to the introduction of the δ-function in the form:
$$\delta(x - \alpha) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dp\, \cos(px - p\alpha).$$
Later, Augustin Cauchy expressed the theorem using exponentials:
$$f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ipx} \left( \int_{-\infty}^{\infty} e^{-ip\alpha} f(\alpha)\,d\alpha \right) dp.$$
Cauchy pointed out that in some circumstances the order of integration is significant in this result (contrast Fubini's theorem).
As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the δ-function as
$$\begin{aligned} f(x) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ipx} \left( \int_{-\infty}^{\infty} e^{-ip\alpha} f(\alpha)\,d\alpha \right) dp \\ &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} e^{ipx} e^{-ip\alpha}\,dp \right) f(\alpha)\,d\alpha = \int_{-\infty}^{\infty} \delta(x - \alpha)\, f(\alpha)\,d\alpha, \end{aligned}$$
where the δ-function is expressed as
$$\delta(x - \alpha) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ip(x - \alpha)}\,dp.$$
A rigorous interpretation of the exponential form and the various limitations upon the function f necessary for its application extended over several centuries. The problems with a classical interpretation are explained as follows:
The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) to ensure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed and this removed many obstacles.
Further developments included generalization of the Fourier integral, "beginning with Plancherel's pathbreaking L2-theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. Schwartz's theory of distributions (1945) ...", and leading to the formal development of the Dirac delta function.
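As a numerical illustration of the Fourier form above, truncating the p-integral at |p| ≤ K gives the kernel sin(Kx)/(πx); integrating a test function against this kernel, shifted to α, approaches the function's value at α as K grows. The test function, the value of α, and the grid in the sketch below are arbitrary choices.

```python
# Illustration of the Fourier form above: truncating the p-integral at |p| <= K
# gives the kernel sin(K x) / (pi x), which behaves more and more like a delta
# concentrated at alpha as K grows.
import numpy as np

def truncated_kernel(x, K):
    # (1 / (2*pi)) * integral of cos(p x) over -K <= p <= K  =  sin(K x) / (pi x)
    return (K / np.pi) * np.sinc(K * x / np.pi)

f = lambda x: 1.0 / (1.0 + x ** 2)      # slowly decaying test function
alpha = 0.5
x = np.linspace(-40.0, 40.0, 2_000_001)
h = x[1] - x[0]

for K in (1, 5, 25):
    approx = np.sum(f(x) * truncated_kernel(x - alpha, K)) * h
    print(f"K = {K:3d}: integral = {approx:.6f}   (f(alpha) = {f(alpha):.6f})")
```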
== Definitions ==
The Dirac delta function $\delta(x)$ can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,
$$\delta(x) \simeq \begin{cases} +\infty, & x = 0 \\ 0, & x \neq 0 \end{cases}$$
and which is also constrained to satisfy the identity
$$\int_{-\infty}^{\infty} \delta(x)\,dx = 1.$$
This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no extended real number valued function defined on the real numbers has these properties.
=== As a measure ===
One way to rigorously capture the notion of the Dirac delta function is to define a measure, called Dirac measure, which accepts a subset A of the real line R as an argument, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise. If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies
$$\int_{-\infty}^{\infty} f(x)\,\delta(dx) = f(0)$$
for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure—in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative (with respect to Lebesgue measure)—no true function for which the property
$$\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)$$
holds. As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral.
As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function.
$$H(x) = \begin{cases} 1 & \text{if } x \geq 0 \\ 0 & \text{if } x < 0. \end{cases}$$
This means that H(x) is the integral of the cumulative indicator function 1(−∞, x] with respect to the measure δ; to wit,
$$H(x) = \int_{\mathbf{R}} \mathbf{1}_{(-\infty,\,x]}(t)\,\delta(dt) = \delta\bigl((-\infty, x]\bigr),$$
the latter being the measure of this interval. Thus in particular the integration of the delta function against a continuous function can be properly understood as a Riemann–Stieltjes integral:
$$\int_{-\infty}^{\infty} f(x)\,\delta(dx) = \int_{-\infty}^{\infty} f(x)\,dH(x).$$
All higher moments of δ are zero. In particular, its characteristic function and moment generating function are both equal to one.
=== As a distribution ===
In the theory of distributions, a generalized function is considered not a function in itself but only through how it affects other functions when "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function is against a sufficiently "good" test function φ. Test functions are also known as bump functions. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral.
A typical space of test functions consists of all smooth functions on R with compact support that have as many derivatives as required. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by
$$\delta[\varphi] = \varphi(0) \qquad (1)$$
for every test function φ.
For δ to be properly a distribution, it must be continuous in a suitable topology on the space of test functions. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N there is an integer MN and a constant CN such that for every test function φ, one has the inequality
$$\left| S[\varphi] \right| \leq C_N \sum_{k=0}^{M_N} \sup_{x \in [-N,\,N]} \left| \varphi^{(k)}(x) \right|$$
where sup represents the supremum. With the δ distribution, one has such an inequality (with CN = 1) with MN = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}).
The delta distribution can also be defined in several equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that for every test function φ, one has
$$\delta[\varphi] = -\int_{-\infty}^{\infty} \varphi'(x)\,H(x)\,dx.$$
Intuitively, if integration by parts were permitted, then the latter integral should simplify to
$$\int_{-\infty}^{\infty} \varphi(x)\,H'(x)\,dx = \int_{-\infty}^{\infty} \varphi(x)\,\delta(x)\,dx,$$
and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case, one does have
$$-\int_{-\infty}^{\infty} \varphi'(x)\,H(x)\,dx = \int_{-\infty}^{\infty} \varphi(x)\,dH(x).$$
In the context of measure theory, the Dirac measure gives rise to distribution by integration. Conversely, equation (1) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.
Generally, when the term Dirac delta function is used, it is in the sense of distributions rather than measures, the Dirac measure being among several terms for the corresponding notion in measure theory. Some sources may also use the term Dirac delta distribution.
=== Generalizations ===
The delta function can be defined in n-dimensional Euclidean space Rn as the measure such that
$$\int_{\mathbf{R}^n} f(\mathbf{x})\,\delta(d\mathbf{x}) = f(\mathbf{0})$$
for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x1, x2, ..., xn), one has
$$\delta(\mathbf{x}) = \delta(x_1)\,\delta(x_2)\cdots\delta(x_n). \qquad (2)$$
The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case. However, despite widespread use in engineering contexts, (2) should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances.
The notion of a Dirac measure makes sense on any set. Thus if X is a set, x0 ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by
$$\delta_{x_0}(A) = \begin{cases} 1 & \text{if } x_0 \in A \\ 0 & \text{if } x_0 \notin A \end{cases}$$
is the delta measure or unit mass concentrated at x0.
Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold M centered at the point x0 ∈ M is defined as the following distribution:
$$\delta_{x_0}[\varphi] = \varphi(x_0) \qquad (3)$$
for all compactly supported smooth real-valued functions φ on M. A common special case of this construction is a case in which M is an open set in the Euclidean space Rn.
On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral (3) on compactly supported continuous functions φ. At this level of generality, calculus as such is no longer possible, however a variety of techniques from abstract analysis are available. For instance, the mapping
$$x_0 \mapsto \delta_{x_0}$$
is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X.
== Properties ==
=== Scaling and symmetry ===
The delta function satisfies the following scaling property for a non-zero scalar α:
$$\int_{-\infty}^{\infty} \delta(\alpha x)\,dx = \int_{-\infty}^{\infty} \delta(u)\,\frac{du}{|\alpha|} = \frac{1}{|\alpha|}$$
and so
$$\delta(\alpha x) = \frac{\delta(x)}{|\alpha|}. \qquad (4)$$
Scaling property proof:
$$\int_{-\infty}^{\infty} dx\, g(x)\,\delta(ax) = \frac{1}{a} \int_{-\infty}^{\infty} dx'\, g\!\left(\frac{x'}{a}\right) \delta(x') = \frac{1}{a}\,g(0).$$
where a change of variable x′ = ax is used. If a is negative, i.e., a = −|a|, then
$$\int_{-\infty}^{\infty} dx\, g(x)\,\delta(ax) = \frac{1}{-|a|} \int_{\infty}^{-\infty} dx'\, g\!\left(\frac{x'}{a}\right) \delta(x') = \frac{1}{|a|} \int_{-\infty}^{\infty} dx'\, g\!\left(\frac{x'}{a}\right) \delta(x') = \frac{1}{|a|}\,g(0).$$
Thus,
$$\delta(ax) = \frac{1}{|a|}\,\delta(x).$$
In particular, the delta function is an even distribution (symmetry), in the sense that
$$\delta(-x) = \delta(x)$$
which is homogeneous of degree −1.
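A quick numerical check of the scaling property, with a narrow Gaussian standing in for the delta (the width ε, the test function, and the values of α below are arbitrary choices): the integral of g(x) δε(αx) comes out close to g(0)/|α|.

```python
# Numerical check of the scaling property: with a narrow Gaussian standing in
# for the delta, the integral of g(x) * delta_eps(alpha * x) is close to g(0)/|alpha|.
import numpy as np

def delta_eps(x, eps=1e-3):
    return np.exp(-x ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))

g = lambda x: np.cos(x) + 2.0          # test function, g(0) = 3
x = np.linspace(-1.0, 1.0, 2_000_001)
h = x[1] - x[0]

for alpha in (2.0, -5.0):
    value = np.sum(g(x) * delta_eps(alpha * x)) * h
    print(f"alpha = {alpha:+.1f}: integral = {value:.4f}, g(0)/|alpha| = {g(0.0) / abs(alpha):.4f}")
```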
=== Algebraic properties ===
The distributional product of δ with x is equal to zero:
$$x\,\delta(x) = 0.$$
More generally,
$$(x - a)^n\,\delta(x - a) = 0$$
for all positive integers $n$.
Conversely, if xf(x) = xg(x), where f and g are distributions, then
$$f(x) = g(x) + c\,\delta(x)$$
for some constant c.
=== Translation ===
The integral of any function multiplied by the time-delayed Dirac delta $\delta_T(t) = \delta(t - T)$ is
$$\int_{-\infty}^{\infty} f(t)\,\delta(t - T)\,dt = f(T).$$
This is sometimes referred to as the sifting property or the sampling property. The delta function is said to "sift out" the value of f(t) at t = T.
It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount:
$$\begin{aligned} (f * \delta_T)(t) \ &\stackrel{\mathrm{def}}{=}\ \int_{-\infty}^{\infty} f(\tau)\,\delta(t - T - \tau)\,d\tau \\ &= \int_{-\infty}^{\infty} f(\tau)\,\delta(\tau - (t - T))\,d\tau \qquad \text{since } \delta(-x) = \delta(x) \text{ by (4)} \\ &= f(t - T). \end{aligned}$$
The sifting property holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense)
$$\int_{-\infty}^{\infty} \delta(\xi - x)\,\delta(x - \eta)\,dx = \delta(\eta - \xi).$$
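The discrete analogue of this translation property is easy to verify numerically: convolving a sampled signal with a unit sample shifted by k positions (the discrete counterpart of δ(t − T)) simply delays the signal by k samples. The sample rate and delay in the sketch below are arbitrary.

```python
# Discrete analogue of the translation property: convolving a sampled signal
# with a unit sample shifted by k positions delays the signal by k samples.
import numpy as np

fs = 100                                    # samples per second (arbitrary)
t = np.arange(0, 2, 1 / fs)
f = np.sin(2 * np.pi * 1.0 * t)             # a 1 Hz test signal

T = 0.25                                    # desired delay in seconds
k = int(T * fs)                             # delay in samples
impulse = np.zeros(t.size)
impulse[k] = 1.0                            # discrete counterpart of delta(t - T)

delayed = np.convolve(f, impulse)[: t.size]
print(np.allclose(delayed[k:], f[: t.size - k]))   # True: the output is f shifted by k
```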
=== Composition with a function ===
More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds (where $u = g(x)$), that
$$\int_{\mathbb{R}} \delta\bigl(g(x)\bigr)\,f\bigl(g(x)\bigr)\,\left|g'(x)\right| dx = \int_{g(\mathbb{R})} \delta(u)\,f(u)\,du$$
provided that g is a continuously differentiable function with g′ nowhere zero. That is, there is a unique way to assign meaning to the distribution $\delta \circ g$ so that this identity holds for all compactly supported test functions f. Therefore, the domain must be broken up to exclude the g′ = 0 point. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise if g has a real root at x0, then
$$\delta(g(x)) = \frac{\delta(x - x_0)}{|g'(x_0)|}.$$
It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by
$$\delta(g(x)) = \sum_i \frac{\delta(x - x_i)}{|g'(x_i)|}$$
where the sum extends over all roots of g(x), which are assumed to be simple. Thus, for example
$$\delta\left(x^2 - \alpha^2\right) = \frac{1}{2|\alpha|} \Bigl[ \delta(x + \alpha) + \delta(x - \alpha) \Bigr].$$
In the integral form, the generalized scaling property may be written as
$$\int_{-\infty}^{\infty} f(x)\,\delta(g(x))\,dx = \sum_i \frac{f(x_i)}{|g'(x_i)|}.$$
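A numerical check of this composition rule for g(x) = x² − α², again with a narrow Gaussian in place of the delta (ε, α, and the test function are illustrative choices): the integral approaches [f(−α) + f(α)]/(2|α|), the value predicted by the sum over the two simple roots ±α.

```python
# Numerical check of the composition rule with g(x) = x**2 - alpha**2: the
# integral of f(x) * delta_eps(g(x)) approaches (f(-alpha) + f(alpha)) / (2*|alpha|).
import numpy as np

def delta_eps(x, eps=1e-3):
    return np.exp(-x ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))

alpha = 1.5
f = lambda x: x + 3.0                        # simple test function
g = lambda x: x ** 2 - alpha ** 2            # two simple roots at x = +/- alpha

x = np.linspace(-4.0, 4.0, 2_000_001)
h = x[1] - x[0]

numeric = np.sum(f(x) * delta_eps(g(x))) * h
exact = (f(-alpha) + f(alpha)) / (2 * abs(alpha))
print(f"numeric = {numeric:.5f},  exact = {exact:.5f}")
```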
=== Indefinite integral ===
For a constant $a \in \mathbb{R}$ and a "well-behaved" arbitrary real-valued function y(x),
$$\int y(x)\,\delta(x - a)\,dx = y(a)\,H(x - a) + c,$$
where H(x) is the Heaviside step function and c is an integration constant.
=== Properties in n dimensions ===
The delta distribution in an n-dimensional space satisfies the following scaling property instead,
$$\delta(\alpha \boldsymbol{x}) = |\alpha|^{-n}\,\delta(\boldsymbol{x}),$$
so that δ is a homogeneous distribution of degree −n.
Under any reflection or rotation ρ, the delta function is invariant,
$$\delta(\rho \boldsymbol{x}) = \delta(\boldsymbol{x}).$$
As in the one-variable case, it is possible to define the composition of δ with a bi-Lipschitz function g: Rn → Rn uniquely so that the following holds
$$\int_{\mathbb{R}^n} \delta(g(\boldsymbol{x}))\,f(g(\boldsymbol{x}))\,\left|\det g'(\boldsymbol{x})\right| d\boldsymbol{x} = \int_{g(\mathbb{R}^n)} \delta(\boldsymbol{u})\,f(\boldsymbol{u})\,d\boldsymbol{u}$$
for all compactly supported functions f.
Using the coarea formula from geometric measure theory, one can also define the composition of the delta function with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In the special case of a continuously differentiable function g : Rn → R such that the gradient of g is nowhere zero, the following identity holds
$$\int_{\mathbb{R}^n} f(\boldsymbol{x})\,\delta(g(\boldsymbol{x}))\,d\boldsymbol{x} = \int_{g^{-1}(0)} \frac{f(\boldsymbol{x})}{|\boldsymbol{\nabla} g|}\,d\sigma(\boldsymbol{x})$$
where the integral on the right is over g−1(0), the (n − 1)-dimensional surface defined by g(x) = 0 with respect to the Minkowski content measure. This is known as a simple layer integral.
More generally, if S is a smooth hypersurface of Rn, then we can associate to S the distribution that integrates any compactly supported smooth function g over S:
$$\delta_S[g] = \int_S g(\boldsymbol{s})\,d\sigma(\boldsymbol{s})$$
where σ is the hypersurface measure associated to S. This generalization is associated with the potential theory of simple layer potentials on S. If D is a domain in Rn with smooth boundary S, then δS is equal to the normal derivative of the indicator function of D in the distribution sense,
{\displaystyle -\int _{\mathbb {R} ^{n}}g({\boldsymbol {x}})\,{\frac {\partial 1_{D}({\boldsymbol {x}})}{\partial n}}\,d{\boldsymbol {x}}=\int _{S}\,g({\boldsymbol {s}})\,d\sigma ({\boldsymbol {s}}),}
where n is the outward normal. For a proof, see e.g. the article on the surface delta function.
In three dimensions, the delta function is represented in spherical coordinates by:
{\displaystyle \delta ({\boldsymbol {r}}-{\boldsymbol {r}}_{0})={\begin{cases}\displaystyle {\frac {1}{r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})\delta (\phi -\phi _{0})&x_{0},y_{0},z_{0}\neq 0\\\displaystyle {\frac {1}{2\pi r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})&x_{0}=y_{0}=0,\ z_{0}\neq 0\\\displaystyle {\frac {1}{4\pi r^{2}}}\delta (r-r_{0})&x_{0}=y_{0}=z_{0}=0\end{cases}}}
== Derivatives ==
The derivative of the Dirac delta distribution, denoted δ′ and also called the Dirac delta prime or Dirac delta derivative as described in Laplacian of the indicator, is defined on compactly supported smooth test functions φ by
{\displaystyle \delta '[\varphi ]=-\delta [\varphi ']=-\varphi '(0).}
The first equality here is a kind of integration by parts, for if δ were a true function then
{\displaystyle \int _{-\infty }^{\infty }\delta '(x)\varphi (x)\,dx=\delta (x)\varphi (x)|_{-\infty }^{\infty }-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\varphi '(0).}
By mathematical induction, the k-th derivative of δ is defined similarly as the distribution given on test functions by
{\displaystyle \delta ^{(k)}[\varphi ]=(-1)^{k}\varphi ^{(k)}(0).}
In particular, δ is an infinitely differentiable distribution.
The first derivative of the delta function is the distributional limit of the difference quotients:
{\displaystyle \delta '(x)=\lim _{h\to 0}{\frac {\delta (x+h)-\delta (x)}{h}}.}
More properly, one has
{\displaystyle \delta '=\lim _{h\to 0}{\frac {1}{h}}(\tau _{h}\delta -\delta )}
where τh is the translation operator, defined on functions by τhφ(x) = φ(x + h), and on a distribution S by
{\displaystyle (\tau _{h}S)[\varphi ]=S[\tau _{-h}\varphi ].}
In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole situated at the origin. Accordingly, it is referred to as a dipole or the doublet function.
The derivative of the delta function satisfies a number of basic properties, including:
{\displaystyle {\begin{aligned}\delta '(-x)&=-\delta '(x)\\x\delta '(x)&=-\delta (x)\end{aligned}}}
which can be shown by applying a test function and integrating by parts.
The latter of these properties can also be demonstrated by applying the definition of the distributional derivative, Leibniz's theorem, and the linearity of the inner product:
{\displaystyle {\begin{aligned}\langle x\delta ',\varphi \rangle \,&=\,\langle \delta ',x\varphi \rangle \,=\,-\langle \delta ,(x\varphi )'\rangle \,=\,-\langle \delta ,x'\varphi +x\varphi '\rangle \,=\,-\langle \delta ,x'\varphi \rangle -\langle \delta ,x\varphi '\rangle \,=\,-x'(0)\varphi (0)-x(0)\varphi '(0)\\&=\,-x'(0)\langle \delta ,\varphi \rangle -x(0)\langle \delta ,\varphi '\rangle \,=\,-x'(0)\langle \delta ,\varphi \rangle +x(0)\langle \delta ',\varphi \rangle \,=\,\langle x(0)\delta '-x'(0)\delta ,\varphi \rangle \\\Longrightarrow x(t)\delta '(t)&=x(0)\delta '(t)-x'(0)\delta (t)=-x'(0)\delta (t)=-\delta (t)\end{aligned}}}
Furthermore, the convolution of δ′ with a compactly-supported, smooth function f is
{\displaystyle \delta '*f=\delta *f'=f',}
which follows from the properties of the distributional derivative of a convolution.
=== Higher dimensions ===
More generally, on an open set U in the n-dimensional Euclidean space {\displaystyle \mathbb {R} ^{n}}, the Dirac delta distribution centered at a point a ∈ U is defined by
{\displaystyle \delta _{a}[\varphi ]=\varphi (a)}
for all {\displaystyle \varphi \in C_{c}^{\infty }(U)}, the space of all smooth functions with compact support on U. If {\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})} is any multi-index with {\displaystyle |\alpha |=\alpha _{1}+\cdots +\alpha _{n}} and {\displaystyle \partial ^{\alpha }} denotes the associated mixed partial derivative operator, then the α-th derivative ∂αδa of δa is given by
{\displaystyle \left\langle \partial ^{\alpha }\delta _{a},\,\varphi \right\rangle =(-1)^{|\alpha |}\left\langle \delta _{a},\partial ^{\alpha }\varphi \right\rangle =(-1)^{|\alpha |}\partial ^{\alpha }\varphi (x){\Big |}_{x=a}\quad {\text{ for all }}\varphi \in C_{c}^{\infty }(U).}
That is, the α-th derivative of δa is the distribution whose value on any test function φ is the α-th derivative of φ at a (with the appropriate positive or negative sign).
The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as multipoles.
Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions with point support. If S is any distribution on U supported on the set {a} consisting of a single point, then there is an integer m and coefficients cα such that
{\displaystyle S=\sum _{|\alpha |\leq m}c_{\alpha }\partial ^{\alpha }\delta _{a}.}
== Representations ==
=== Nascent delta function ===
The delta function can be viewed as the limit of a sequence of functions
{\displaystyle \delta (x)=\lim _{\varepsilon \to 0^{+}}\eta _{\varepsilon }(x),}
where ηε(x) is sometimes called a nascent delta function. This limit is meant in a weak sense: either that
{\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{-\infty }^{\infty }\eta _{\varepsilon }(x)f(x)\,dx=f(0)}
for all continuous functions f having compact support, or that this limit holds for all smooth functions f with compact support. The difference between these two slightly different modes of weak convergence is often subtle: the former is convergence in the vague topology of measures, and the latter is convergence in the sense of distributions.
==== Approximations to the identity ====
Typically a nascent delta function ηε can be constructed in the following manner. Let η be an absolutely integrable function on R of total integral 1, and define
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\eta \left({\frac {x}{\varepsilon }}\right).}
In n dimensions, one uses instead the scaling
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-n}\eta \left({\frac {x}{\varepsilon }}\right).}
Then a simple change of variables shows that ηε also has integral 1. One may show that the weak limit above holds for all continuous compactly supported functions f, and so ηε converges weakly to δ in the sense of measures.
The ηε constructed in this way are known as an approximation to the identity. This terminology is because the space L1(R) of absolutely integrable functions is closed under the operation of convolution of functions: f ∗ g ∈ L1(R) whenever f and g are in L1(R). However, there is no identity in L1(R) for the convolution product: no element h such that f ∗ h = f for all f. Nevertheless, the sequence ηε does approximate such an identity in the sense that
{\displaystyle f*\eta _{\varepsilon }\to f\quad {\text{as }}\varepsilon \to 0.}
This limit holds in the sense of mean convergence (convergence in L1). Further conditions on the ηε, for instance that it be a mollifier associated to a compactly supported function, are needed to ensure pointwise convergence almost everywhere.
If the initial η = η1 is itself smooth and compactly supported then the sequence is called a mollifier. The standard mollifier is obtained by choosing η to be a suitably normalized bump function, for instance
{\displaystyle \eta (x)={\begin{cases}{\frac {1}{I_{n}}}\exp {\Big (}-{\frac {1}{1-|x|^{2}}}{\Big )}&{\text{if }}|x|<1\\0&{\text{if }}|x|\geq 1.\end{cases}}}
({\displaystyle I_{n}} ensuring that the total integral is 1).
In some situations such as numerical analysis, a piecewise linear approximation to the identity is desirable. This can be obtained by taking η1 to be a hat function. With this choice of η1, one has
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\max \left(1-\left|{\frac {x}{\varepsilon }}\right|,0\right)}
which are all continuous and compactly supported, although not smooth and so not a mollifier.
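The following sketch (added for illustration) convolves a continuous function with this hat-function approximation to the identity and shows the approximation error shrinking with ε; the test function and grid are arbitrary choices.

```python
import numpy as np

def eta_eps(x, eps):
    # Piecewise-linear "hat" nascent delta: eps^{-1} * max(1 - |x/eps|, 0).
    return np.maximum(1.0 - np.abs(x / eps), 0.0) / eps

f = lambda x: np.sin(3 * x) + 0.5 * x     # a continuous test function

x = np.linspace(-4, 4, 80_001)
dx = x[1] - x[0]
interior = slice(5_000, -5_000)           # ignore boundary effects of the finite grid

for eps in (0.2, 0.05, 0.01):
    kernel = eta_eps(np.arange(-eps, eps + dx, dx), eps)
    smoothed = np.convolve(f(x), kernel, mode="same") * dx   # (f * eta_eps)(x)
    print(eps, np.max(np.abs(smoothed[interior] - f(x)[interior])))
```

The printed errors decrease with ε (roughly like ε² for this smooth test function).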
==== Probabilistic considerations ====
In the context of probability theory, it is natural to impose the additional condition that the initial η1 in an approximation to the identity should be positive, as such a function then represents a probability distribution. Convolution with a probability distribution is sometimes favorable because it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function. Taking η1 to be any probability distribution at all, and letting ηε(x) = η1(x/ε)/ε as above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition, η has mean 0 and has small higher moments. For instance, if η1 is the uniform distribution on
{\textstyle \left[-{\frac {1}{2}},{\frac {1}{2}}\right]}, also known as the rectangular function, then:
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x}{\varepsilon }}\right)={\begin{cases}{\frac {1}{\varepsilon }},&-{\frac {\varepsilon }{2}}<x<{\frac {\varepsilon }{2}},\\0,&{\text{otherwise}}.\end{cases}}}
Another example is with the Wigner semicircle distribution
{\displaystyle \eta _{\varepsilon }(x)={\begin{cases}{\frac {2}{\pi \varepsilon ^{2}}}{\sqrt {\varepsilon ^{2}-x^{2}}},&-\varepsilon <x<\varepsilon ,\\0,&{\text{otherwise}}.\end{cases}}}
This is continuous and compactly supported, but not a mollifier because it is not smooth.
==== Semigroups ====
Nascent delta functions often arise as convolution semigroups. This amounts to the further constraint that the convolution of ηε with ηδ must satisfy
{\displaystyle \eta _{\varepsilon }*\eta _{\delta }=\eta _{\varepsilon +\delta }}
for all ε, δ > 0. Convolution semigroups in L1 that form a nascent delta function are always an approximation to the identity in the above sense, however the semigroup condition is quite a strong restriction.
In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics, semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on functions of x, then a convolution semigroup arises by solving the initial value problem
{\displaystyle {\begin{cases}{\dfrac {\partial }{\partial t}}\eta (t,x)=A\eta (t,x),\quad t>0\\[5pt]\displaystyle \lim _{t\to 0^{+}}\eta (t,x)=\delta (x)\end{cases}}}
in which the limit is as usual understood in the weak sense. Setting ηε(x) = η(ε, x) gives the associated nascent delta function.
Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.
===== The heat kernel =====
The heat kernel, defined by
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\sqrt {2\pi \varepsilon }}}\mathrm {e} ^{-{\frac {x^{2}}{2\varepsilon }}}}
represents the temperature in an infinite wire at time t > 0, if a unit of heat energy is stored at the origin of the wire at time t = 0. This semigroup evolves according to the one-dimensional heat equation:
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {1}{2}}{\frac {\partial ^{2}u}{\partial x^{2}}}.}
In probability theory, ηε(x) is a normal distribution of variance ε and mean 0. It represents the probability density at time t = ε of the position of a particle starting at the origin following a standard Brownian motion. In this context, the semigroup condition is then an expression of the Markov property of Brownian motion.
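A small numerical check of the semigroup property for the heat kernel (an added illustration; the parameter values are arbitrary):

```python
import numpy as np

def heat_kernel(x, eps):
    # Gaussian of mean 0 and variance eps.
    return np.exp(-x**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

x = np.linspace(-20, 20, 8_001)
dx = x[1] - x[0]

eps, delt = 0.3, 0.7
convolved = np.convolve(heat_kernel(x, eps), heat_kernel(x, delt), mode="same") * dx
direct = heat_kernel(x, eps + delt)

print(np.max(np.abs(convolved - direct)))   # close to zero: eta_eps * eta_delt = eta_(eps + delt)
```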
In higher-dimensional Euclidean space Rn, the heat kernel is
{\displaystyle \eta _{\varepsilon }={\frac {1}{(2\pi \varepsilon )^{n/2}}}\mathrm {e} ^{-{\frac {x\cdot x}{2\varepsilon }}},}
and has the same physical interpretation, mutatis mutandis. It also represents a nascent delta function in the sense that ηε → δ in the distribution sense as ε → 0.
===== The Poisson kernel =====
The Poisson kernel
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi }}\mathrm {Im} \left\{{\frac {1}{x-\mathrm {i} \varepsilon }}\right\}={\frac {1}{\pi }}{\frac {\varepsilon }{\varepsilon ^{2}+x^{2}}}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\mathrm {e} ^{\mathrm {i} \xi x-|\varepsilon \xi |}\,d\xi }
is the fundamental solution of the Laplace equation in the upper half-plane. It represents the electrostatic potential in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is also closely related to the Cauchy distribution and Epanechnikov and Gaussian kernel functions. This semigroup evolves according to the equation
{\displaystyle {\frac {\partial u}{\partial t}}=-\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}u(t,x)}
where the operator is rigorously defined as the Fourier multiplier
{\displaystyle {\mathcal {F}}\left[\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}f\right](\xi )=|2\pi \xi |{\mathcal {F}}f(\xi ).}
==== Oscillatory integrals ====
In areas of physics such as wave propagation and wave mechanics, the equations involved are hyperbolic and so may have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the associated Cauchy problems are generally oscillatory integrals. An example, which comes from a solution of the Euler–Tricomi equation of transonic gas dynamics, is the rescaled Airy function
{\displaystyle \varepsilon ^{-1/3}\operatorname {Ai} \left(x\varepsilon ^{-1/3}\right).}
Although, using the Fourier transform, it is easy to see that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions constructed as oscillatory integrals converge only in the sense of distributions (an example is the Dirichlet kernel below), rather than in the sense of measures.
Another example is the Cauchy problem for the wave equation in R1+1:
{\displaystyle {\begin{aligned}c^{-2}{\frac {\partial ^{2}u}{\partial t^{2}}}-\Delta u&=0\\u=0,\quad {\frac {\partial u}{\partial t}}=\delta &\qquad {\text{for }}t=0.\end{aligned}}}
The solution u represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin.
Other approximations to the identity of this kind include the sinc function (used widely in electronics and telecommunications)
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi x}}\sin \left({\frac {x}{\varepsilon }}\right)={\frac {1}{2\pi }}\int _{-{\frac {1}{\varepsilon }}}^{\frac {1}{\varepsilon }}\cos(kx)\,dk}
and the Bessel function
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}J_{\frac {1}{\varepsilon }}\left({\frac {x+1}{\varepsilon }}\right).}
=== Plane wave decomposition ===
One approach to the study of a linear partial differential equation
{\displaystyle L[u]=f,}
where L is a differential operator on Rn, is to seek first a fundamental solution, which is a solution of the equation
{\displaystyle L[u]=\delta .}
When L is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier first to consider an equation of the form
{\displaystyle L[u]=h}
where h is a plane wave function, meaning that it has the form
{\displaystyle h=h(x\cdot \xi )}
for some vector ξ. Such an equation can be resolved (if the coefficients of L are analytic functions) by the Cauchy–Kovalevskaya theorem or (if the coefficients of L are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations.
Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially by Johann Radon, and then developed in this form by Fritz John (1955). Choose k so that n + k is an even integer, and for a real number s, put
{\displaystyle g(s)=\operatorname {Re} \left[{\frac {-s^{k}\log(-is)}{k!(2\pi i)^{n}}}\right]={\begin{cases}{\frac {|s|^{k}}{4k!(2\pi i)^{n-1}}}&n{\text{ odd}}\\[5pt]-{\frac {|s|^{k}\log |s|}{k!(2\pi i)^{n}}}&n{\text{ even.}}\end{cases}}}
Then δ is obtained by applying a power of the Laplacian to the integral with respect to the unit sphere measure dω of g(x · ξ) for ξ in the unit sphere Sn−1:
{\displaystyle \delta (x)=\Delta _{x}^{(n+k)/2}\int _{S^{n-1}}g(x\cdot \xi )\,d\omega _{\xi }.}
The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test function φ,
{\displaystyle \varphi (x)=\int _{\mathbf {R} ^{n}}\varphi (y)\,dy\,\Delta _{x}^{\frac {n+k}{2}}\int _{S^{n-1}}g((x-y)\cdot \xi )\,d\omega _{\xi }.}
The result follows from the formula for the Newtonian potential (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for the Radon transform because it recovers the value of φ(x) from its integrals over hyperplanes. For instance, if n is odd and k = 1, then the integral on the right hand side is
{\displaystyle {\begin{aligned}&c_{n}\Delta _{x}^{\frac {n+1}{2}}\iint _{S^{n-1}}\varphi (y)|(y-x)\cdot \xi |\,d\omega _{\xi }\,dy\\[5pt]&\qquad =c_{n}\Delta _{x}^{(n+1)/2}\int _{S^{n-1}}\,d\omega _{\xi }\int _{-\infty }^{\infty }|p|R\varphi (\xi ,p+x\cdot \xi )\,dp\end{aligned}}}
where Rφ(ξ, p) is the Radon transform of φ:
{\displaystyle R\varphi (\xi ,p)=\int _{x\cdot \xi =p}f(x)\,d^{n-1}x.}
An alternative equivalent expression of the plane wave decomposition is:
{\displaystyle \delta (x)={\begin{cases}{\frac {(n-1)!}{(2\pi i)^{n}}}\displaystyle \int _{S^{n-1}}(x\cdot \xi )^{-n}\,d\omega _{\xi }&n{\text{ even}}\\{\frac {1}{2(2\pi i)^{n-1}}}\displaystyle \int _{S^{n-1}}\delta ^{(n-1)}(x\cdot \xi )\,d\omega _{\xi }&n{\text{ odd}}.\end{cases}}}
=== Fourier transform ===
The delta function is a tempered distribution, and therefore it has a well-defined Fourier transform. Formally, one finds
{\displaystyle {\widehat {\delta }}(\xi )=\int _{-\infty }^{\infty }e^{-2\pi ix\xi }\,\delta (x)dx=1.}
Properly speaking, the Fourier transform of a distribution is defined by imposing self-adjointness of the Fourier transform under the duality pairing
{\displaystyle \langle \cdot ,\cdot \rangle }
of tempered distributions with Schwartz functions. Thus
{\displaystyle {\widehat {\delta }}}
is defined as the unique tempered distribution satisfying
{\displaystyle \langle {\widehat {\delta }},\varphi \rangle =\langle \delta ,{\widehat {\varphi }}\rangle }
for all Schwartz functions φ. And indeed it follows from this that
{\displaystyle {\widehat {\delta }}=1.}
As a result of this identity, the convolution of the delta function with any other tempered distribution S is simply S:
{\displaystyle S*\delta =S.}
That is to say that δ is an identity element for the convolution on tempered distributions, and in fact, the space of compactly supported distributions under convolution is an associative algebra with identity the delta function. This property is fundamental in signal processing, as convolution with a tempered distribution is a linear time-invariant system, and applying the linear time-invariant system measures its impulse response. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation for δ, and once it is known, it characterizes the system completely. See LTI system theory § Impulse response and convolution.
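In discrete-time signal processing the same idea is elementary: the unit sample acts as the identity for convolution, and a system's response to it characterizes the system. A minimal sketch (the signal and filter below are arbitrary examples):

```python
import numpy as np

signal = np.array([1.0, -2.0, 3.5, 0.0, 4.0])

impulse = np.array([1.0])                 # discrete unit impulse (identity for convolution)
print(np.convolve(signal, impulse))       # returns `signal` unchanged

h = np.ones(3) / 3.0                      # impulse response of a 3-tap moving-average system
print(np.convolve(signal, h))             # output of that LTI system for `signal`
```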
The inverse Fourier transform of the tempered distribution f(ξ) = 1 is the delta function. Formally, this is expressed as
{\displaystyle \int _{-\infty }^{\infty }1\cdot e^{2\pi ix\xi }\,d\xi =\delta (x)}
and more rigorously, it follows since
{\displaystyle \langle 1,{\widehat {f}}\rangle =f(0)=\langle \delta ,f\rangle }
for all Schwartz functions f.
In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel on R. Formally, one has
{\displaystyle \int _{-\infty }^{\infty }e^{i2\pi \xi _{1}t}\left[e^{i2\pi \xi _{2}t}\right]^{*}\,dt=\int _{-\infty }^{\infty }e^{-i2\pi (\xi _{2}-\xi _{1})t}\,dt=\delta (\xi _{2}-\xi _{1}).}
This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution
{\displaystyle f(t)=e^{i2\pi \xi _{1}t}}
is
{\displaystyle {\widehat {f}}(\xi _{2})=\delta (\xi _{1}-\xi _{2})}
which again follows by imposing self-adjointness of the Fourier transform.
By analytic continuation of the Fourier transform, the Laplace transform of the delta function is found to be
{\displaystyle \int _{0}^{\infty }\delta (t-a)\,e^{-st}\,dt=e^{-sa}.}
==== Fourier kernels ====
In the study of Fourier series, a major question consists of determining whether and in what sense the Fourier series associated with a periodic function converges to the function. The n-th partial sum of the Fourier series of a function f of period 2π is defined by convolution (on the interval [−π,π]) with the Dirichlet kernel:
{\displaystyle D_{N}(x)=\sum _{n=-N}^{N}e^{inx}={\frac {\sin \left(\left(N+{\frac {1}{2}}\right)x\right)}{\sin(x/2)}}.}
Thus,
{\displaystyle s_{N}(f)(x)=D_{N}*f(x)=\sum _{n=-N}^{N}a_{n}e^{inx}}
where
{\displaystyle a_{n}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }f(y)e^{-iny}\,dy.}
A fundamental result of elementary Fourier series states that the Dirichlet kernel restricted to the interval [−π,π] tends to a multiple of the delta function as N → ∞. This is interpreted in the distribution sense, that
{\displaystyle s_{N}(f)(0)=\int _{-\pi }^{\pi }D_{N}(x)f(x)\,dx\to 2\pi f(0)}
for every compactly supported smooth function f. Thus, formally one has
{\displaystyle \delta (x)={\frac {1}{2\pi }}\sum _{n=-\infty }^{\infty }e^{inx}}
on the interval [−π,π].
Despite this, the result does not hold for all compactly supported continuous functions: that is, DN does not converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction of a variety of summability methods to produce convergence. The method of Cesàro summation leads to the Fejér kernel
{\displaystyle F_{N}(x)={\frac {1}{N}}\sum _{n=0}^{N-1}D_{n}(x)={\frac {1}{N}}\left({\frac {\sin {\frac {Nx}{2}}}{\sin {\frac {x}{2}}}}\right)^{2}.}
The Fejér kernels tend to the delta function in the stronger sense that
{\displaystyle \int _{-\pi }^{\pi }F_{N}(x)f(x)\,dx\to 2\pi f(0)}
for every compactly supported continuous function f. The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point.
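The contrast between the two kernels can also be seen numerically (an added illustration; the test function f(x) = |x| and the grid are arbitrary choices): both give the sifting limit toward f(0), but the L1 norms of the Dirichlet kernels grow, while the nonnegative Fejér kernels keep L1 norm 2π, which is what makes the Cesàro means converge for every continuous function.

```python
import numpy as np

def dirichlet(x, N):
    return np.sin((N + 0.5) * x) / np.sin(x / 2)

def fejer(x, N):
    return (np.sin(N * x / 2) / np.sin(x / 2)) ** 2 / N

# Even number of grid points so that x = 0 (where the kernels need a limit) is not sampled.
x = np.linspace(-np.pi, np.pi, 200_000)
dx = x[1] - x[0]
f = lambda t: np.abs(t)                   # continuous test function with f(0) = 0

for N in (10, 100, 1000):
    sift_d = np.sum(dirichlet(x, N) * f(x)) * dx / (2 * np.pi)   # partial sum s_N(f)(0) -> 0
    sift_f = np.sum(fejer(x, N) * f(x)) * dx / (2 * np.pi)       # Cesaro mean -> 0
    l1_d = np.sum(np.abs(dirichlet(x, N))) * dx                  # grows like log N
    l1_f = np.sum(np.abs(fejer(x, N))) * dx                      # stays equal to 2*pi
    print(N, sift_d, sift_f, l1_d, l1_f)
```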
=== Hilbert space theory ===
The Dirac delta distribution is a densely defined unbounded linear functional on the Hilbert space L2 of square-integrable functions. Indeed, smooth compactly supported functions are dense in L2, and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of L2 and to give a stronger topology on which the delta function defines a bounded linear functional.
==== Sobolev spaces ====
The Sobolev embedding theorem for Sobolev spaces on the real line R implies that any square-integrable function f such that
{\displaystyle \|f\|_{H^{1}}^{2}=\int _{-\infty }^{\infty }|{\widehat {f}}(\xi )|^{2}(1+|\xi |^{2})\,d\xi <\infty }
is automatically continuous, and satisfies in particular
{\displaystyle \delta [f]=|f(0)|<C\|f\|_{H^{1}}.}
Thus δ is a bounded linear functional on the Sobolev space H1. Equivalently δ is an element of the continuous dual space H−1 of H1. More generally, in n dimensions, one has δ ∈ H−s(Rn) provided s > n/2.
==== Spaces of holomorphic functions ====
In complex analysis, the delta function enters via Cauchy's integral formula, which asserts that if D is a domain in the complex plane with smooth boundary, then
{\displaystyle f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}},\quad z\in D}
for all holomorphic functions f in D that are continuous on the closure of D. As a result, the delta function δz is represented in this class of holomorphic functions by the Cauchy integral:
{\displaystyle \delta _{z}[f]=f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}}.}
Moreover, let H2(∂D) be the Hardy space consisting of the closure in L2(∂D) of all holomorphic functions in D continuous up to the boundary of D. Then functions in H2(∂D) uniquely extend to holomorphic functions in D, and the Cauchy integral formula continues to hold. In particular for z ∈ D, the delta function δz is a continuous linear functional on H2(∂D). This is a special case of the situation in several complex variables in which, for smooth domains D, the Szegő kernel plays the role of the Cauchy integral.
Another representation of the delta function in a space of holomorphic functions is on the space {\displaystyle H(D)\cap L^{2}(D)} of square-integrable holomorphic functions in an open set {\displaystyle D\subset \mathbb {C} ^{n}}. This is a closed subspace of {\displaystyle L^{2}(D)}, and therefore is a Hilbert space. On the other hand, the functional that evaluates a holomorphic function in {\displaystyle H(D)\cap L^{2}(D)} at a point {\displaystyle z} of {\displaystyle D} is a continuous functional, and so by the Riesz representation theorem, is represented by integration against a kernel {\displaystyle K_{z}(\zeta )}, the Bergman kernel. This kernel is the analog of the delta function in this Hilbert space. A Hilbert space having such a kernel is called a reproducing kernel Hilbert space. In the special case of the unit disc, one has
{\displaystyle \delta _{w}[f]=f(w)={\frac {1}{\pi }}\iint _{|z|<1}{\frac {f(z)\,dx\,dy}{(1-{\bar {z}}w)^{2}}}.}
==== Resolutions of the identity ====
Given a complete orthonormal basis set of functions {φn} in a separable Hilbert space, for example, the normalized eigenvectors of a compact self-adjoint operator, any vector f can be expressed as
{\displaystyle f=\sum _{n=1}^{\infty }\alpha _{n}\varphi _{n}.}
The coefficients {αn} are found as
{\displaystyle \alpha _{n}=\langle \varphi _{n},f\rangle ,}
which may be represented by the notation:
{\displaystyle \alpha _{n}=\varphi _{n}^{\dagger }f,}
a form of the bra–ket notation of Dirac. Adopting this notation, the expansion of f takes the dyadic form:
{\displaystyle f=\sum _{n=1}^{\infty }\varphi _{n}\left(\varphi _{n}^{\dagger }f\right).}
Letting I denote the identity operator on the Hilbert space, the expression
{\displaystyle I=\sum _{n=1}^{\infty }\varphi _{n}\varphi _{n}^{\dagger },}
is called a resolution of the identity. When the Hilbert space is the space L2(D) of square-integrable functions on a domain D, the quantity:
{\displaystyle \varphi _{n}\varphi _{n}^{\dagger },}
is an integral operator, and the expression for f can be rewritten
{\displaystyle f(x)=\sum _{n=1}^{\infty }\int _{D}\,\left(\varphi _{n}(x)\varphi _{n}^{*}(\xi )\right)f(\xi )\,d\xi .}
The right-hand side converges to f in the L2 sense. It need not hold in a pointwise sense, even when f is a continuous function. Nevertheless, it is common to abuse notation and write
{\displaystyle f(x)=\int \,\delta (x-\xi )f(\xi )\,d\xi ,}
resulting in the representation of the delta function:
{\displaystyle \delta (x-\xi )=\sum _{n=1}^{\infty }\varphi _{n}(x)\varphi _{n}^{*}(\xi ).}
With a suitable rigged Hilbert space (Φ, L2(D), Φ*) where Φ ⊂ L2(D) contains all compactly supported smooth functions, this summation may converge in Φ*, depending on the properties of the basis φn. In most cases of practical interest, the orthonormal basis comes from an integral or differential operator (e.g. the heat kernel), in which case the series converges in the distribution sense.
=== Infinitesimal delta functions ===
Cauchy used an infinitesimal α to write down a unit impulse, infinitely tall and narrow Dirac-type delta function δα satisfying
{\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)}
in a number of articles in 1827. Cauchy defined an infinitesimal in Cours d'Analyse (1827) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.
Non-standard analysis allows one to rigorously treat infinitesimals. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals. Here the Dirac delta can be given by an actual function, having the property that for every real function F one has
{\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)}
as anticipated by Fourier and Cauchy.
== Dirac comb ==
A so-called uniform "pulse train" of Dirac delta measures, which is known as a Dirac comb, or as the Sha distribution, creates a sampling function, often used in digital signal processing (DSP) and discrete time signal analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the distribution sense,
{\displaystyle \operatorname {\text{Ш}} (x)=\sum _{n=-\infty }^{\infty }\delta (x-n),}
which is a sequence of point masses at each of the integers.
Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant because if f is any Schwartz function, then the periodization of f is given by the convolution
{\displaystyle (f*\operatorname {\text{Ш}} )(x)=\sum _{n=-\infty }^{\infty }f(x-n).}
In particular,
{\displaystyle (f*\operatorname {\text{Ш}} )^{\wedge }={\widehat {f}}{\widehat {\operatorname {\text{Ш}} }}={\widehat {f}}\operatorname {\text{Ш}} }
is precisely the Poisson summation formula.
More generally, this formula remains true if f is a tempered distribution of rapid descent or, equivalently, if
{\displaystyle {\widehat {f}}}
is a slowly growing, ordinary function within the space of tempered distributions.
== Sokhotski–Plemelj theorem ==
The Sokhotski–Plemelj theorem, important in quantum mechanics, relates the delta function to the distribution p.v. 1/x, the Cauchy principal value of the function 1/x, defined by
{\displaystyle \left\langle \operatorname {p.v.} {\frac {1}{x}},\varphi \right\rangle =\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {\varphi (x)}{x}}\,dx.}
Sokhotsky's formula states that
{\displaystyle \lim _{\varepsilon \to 0^{+}}{\frac {1}{x\pm i\varepsilon }}=\operatorname {p.v.} {\frac {1}{x}}\mp i\pi \delta (x),}
where the limit is understood in the distribution sense: for all compactly supported smooth functions f,
{\displaystyle \int _{-\infty }^{\infty }\lim _{\varepsilon \to 0^{+}}{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi f(0)+\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {f(x)}{x}}\,dx.}
== Relationship to the Kronecker delta ==
The Kronecker delta δij is the quantity defined by
{\displaystyle \delta _{ij}={\begin{cases}1&i=j\\0&i\not =j\end{cases}}}
for all integers i, j. This function then satisfies the following analog of the sifting property: if ai (for i in the set of all integers) is any doubly infinite sequence, then
{\displaystyle \sum _{i=-\infty }^{\infty }a_{i}\delta _{ik}=a_{k}.}
Similarly, for any real or complex valued continuous function f on R, the Dirac delta satisfies the sifting property
{\displaystyle \int _{-\infty }^{\infty }f(x)\delta (x-x_{0})\,dx=f(x_{0}).}
This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.
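A one-line illustration of the discrete sifting property (added here; the sequence is arbitrary):

```python
import numpy as np

a = np.array([5.0, -1.0, 2.0, 7.0, 0.5])           # a finite stretch of a sequence a_i
k = 3

delta_ik = (np.arange(len(a)) == k).astype(float)  # Kronecker delta as a one-hot vector
print(np.sum(a * delta_ik), a[k])                  # both print 7.0
```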
== Applications ==
=== Probability theory ===
In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent absolutely continuous distributions). For example, the probability density function f(x) of a discrete distribution consisting of points x = {x1, ..., xn}, with corresponding probabilities p1, ..., pn, can be written as
{\displaystyle f(x)=\sum _{i=1}^{n}p_{i}\delta (x-x_{i}).}
As another example, consider a distribution which 6/10 of the time returns a value drawn from a standard normal distribution, and 4/10 of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discrete mixture distribution). The density function of this distribution can be written as
{\displaystyle f(x)=0.6\,{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {x^{2}}{2}}}+0.4\,\delta (x-3.5).}
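Sampling from this mixture makes the role of the delta component concrete; a minimal sketch (the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# With probability 0.4 return exactly 3.5, otherwise a standard normal draw.
is_atom = rng.random(n) < 0.4
samples = np.where(is_atom, 3.5, rng.standard_normal(n))

print(np.mean(samples == 3.5))   # ~0.4, the weight of the delta component
print(np.mean(samples))          # ~0.6*0 + 0.4*3.5 = 1.4
```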
The delta function is also used to represent the resulting probability density function of a random variable that is transformed by a continuously differentiable function. If Y = g(X), where g is a continuously differentiable function, then the density of Y can be written as
{\displaystyle f_{Y}(y)=\int _{-\infty }^{+\infty }f_{X}(x)\delta (y-g(x))\,dx.}
The delta function is also used in a completely different way to represent the local time of a diffusion process (like Brownian motion). The local time of a stochastic process B(t) is given by
{\displaystyle \ell (x,t)=\int _{0}^{t}\delta (x-B(s))\,ds}
and represents the amount of time that the process spends at the point x in the range of the process. More precisely, in one dimension this integral can be written
{\displaystyle \ell (x,t)=\lim _{\varepsilon \to 0^{+}}{\frac {1}{2\varepsilon }}\int _{0}^{t}\mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}(B(s))\,ds}
where {\displaystyle \mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}} is the indicator function of the interval {\displaystyle [x-\varepsilon ,x+\varepsilon ].}
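For intuition, the occupation-time formula can be applied to a simulated random-walk approximation of Brownian motion; this is only a rough numerical sketch (step count, seed and window widths are arbitrary, and the estimate is limited by the time discretization):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 1_000_000
dt = T / n

# One sample path of standard Brownian motion on [0, T].
B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))

x = 0.0
for eps in (0.1, 0.03, 0.01):
    occupation = np.sum(np.abs(B - x) < eps) * dt   # time spent in [x - eps, x + eps]
    print(eps, occupation / (2 * eps))              # estimate of the local time at x
```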
=== Quantum mechanics ===
The delta function is expedient in quantum mechanics. The wave function of a particle gives the probability amplitude of finding a particle within a given region of space. Wave functions are assumed to be elements of the Hilbert space L2 of square-integrable functions, and the total probability of finding a particle within a given interval is the integral of the magnitude of the wave function squared over the interval. A set {|φn⟩} of wave functions is orthonormal if
{\displaystyle \langle \varphi _{n}\mid \varphi _{m}\rangle =\delta _{nm},}
where δnm is the Kronecker delta. A set of orthonormal wave functions is complete in the space of square-integrable functions if any wave function |ψ⟩ can be expressed as a linear combination of the {|φn⟩} with complex coefficients:
{\displaystyle \psi =\sum c_{n}\varphi _{n},}
where cn = ⟨φn|ψ⟩. Complete orthonormal systems of wave functions appear naturally as the eigenfunctions of the Hamiltonian (of a bound system) in quantum mechanics, whose eigenvalues are the measured energy levels. The set of eigenvalues, in this case, is known as the spectrum of the Hamiltonian. In bra–ket notation this equality implies the resolution of the identity:
{\displaystyle I=\sum |\varphi _{n}\rangle \langle \varphi _{n}|.}
Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of an observable can also be continuous. An example is the position operator, Qψ(x) = xψ(x). The spectrum of the position (in one dimension) is the entire real line and is called a continuous spectrum. However, unlike the Hamiltonian, the position operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of available functions by allowing distributions as well, i.e., to replace the Hilbert space with a rigged Hilbert space. In this context, the position operator has a complete set of generalized eigenfunctions, labeled by the points y of the real line, given by
{\displaystyle \varphi _{y}(x)=\delta (x-y).}
The generalized eigenfunctions of the position operator are called the eigenkets and are denoted by φy = |y⟩.
Similar considerations apply to any other (unbounded) self-adjoint operator with continuous spectrum and no degenerate eigenvalues, such as the momentum operator P. In that case, there is a set Ω of real numbers (the spectrum) and a collection of distributions φy with y ∈ Ω such that
{\displaystyle P\varphi _{y}=y\varphi _{y}.}
That is, φy are the generalized eigenvectors of P. If they form an "orthonormal basis" in the distribution sense, that is:
{\displaystyle \langle \varphi _{y},\varphi _{y'}\rangle =\delta (y-y'),}
then for any test function ψ,
{\displaystyle \psi (x)=\int _{\Omega }c(y)\varphi _{y}(x)\,dy}
where c(y) = ⟨ψ, φy⟩. That is, there is a resolution of the identity
{\displaystyle I=\int _{\Omega }|\varphi _{y}\rangle \,\langle \varphi _{y}|\,dy}
where the operator-valued integral is again understood in the weak sense. If the spectrum of P has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous spectrum.
The delta function also has many more specialized applications in quantum mechanics, such as the delta potential models for a single and double potential well.
=== Structural mechanics ===
The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. The governing equation of a simple mass–spring system excited by a sudden force impulse I at time t = 0 can be written
{\displaystyle m{\frac {d^{2}\xi }{dt^{2}}}+k\xi =I\delta (t),}
where m is the mass, ξ is the deflection, and k is the spring constant.
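The impulse is equivalent to giving the mass an instantaneous velocity I/m at t = 0, after which the system oscillates freely as ξ(t) = (I/(mω)) sin(ωt) with ω = √(k/m). A short numerical check of this (the parameter values below are arbitrary):

```python
import numpy as np

m, k, I = 2.0, 50.0, 3.0
omega = np.sqrt(k / m)

# The impulse I*delta(t) on a system at rest is an instantaneous velocity jump of I/m.
dt, steps = 1e-4, 50_000
xi, v = 0.0, I / m
trajectory = np.empty(steps)
for step in range(steps):
    v += -(k / m) * xi * dt          # semi-implicit Euler for xi'' = -(k/m) xi
    xi += v * dt
    trajectory[step] = xi

t = dt * np.arange(1, steps + 1)
analytic = (I / (m * omega)) * np.sin(omega * t)
print(np.max(np.abs(trajectory - analytic)))   # small discretization error
```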
As another example, the equation governing the static deflection of a slender beam is, according to Euler–Bernoulli theory,
{\displaystyle EI{\frac {d^{4}w}{dx^{4}}}=q(x),}
where EI is the bending stiffness of the beam, w is the deflection, x is the spatial coordinate, and q(x) is the load distribution. If a beam is loaded by a point force F at x = x0, the load distribution is written
{\displaystyle q(x)=F\delta (x-x_{0}).}
As the integration of the delta function results in the Heaviside step function, it follows that the static deflection of a slender beam subject to multiple point loads is described by a set of piecewise polynomials.
Also, a point moment acting on a beam can be described by delta functions. Consider two opposing point forces F at a distance d apart. They then produce a moment M = Fd acting on the beam. Now, let the distance d approach the limit zero, while M is kept constant. The load distribution, assuming a clockwise moment acting at x = 0, is written
{\displaystyle {\begin{aligned}q(x)&=\lim _{d\to 0}{\Big (}F\delta (x)-F\delta (x-d){\Big )}\\[4pt]&=\lim _{d\to 0}\left({\frac {M}{d}}\delta (x)-{\frac {M}{d}}\delta (x-d)\right)\\[4pt]&=M\lim _{d\to 0}{\frac {\delta (x)-\delta (x-d)}{d}}\\[4pt]&=M\delta '(x).\end{aligned}}}
Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again results in piecewise polynomial deflection.
== See also ==
Atom (measure theory)
Degenerate distribution
Laplacian of the indicator
Uncertainty principle
== Notes ==
== References ==
Aratyn, Henrik; Rasinariu, Constantin (2006), A short course in mathematical methods with Maple, World Scientific, ISBN 978-981-256-461-0.
Arfken, G. B.; Weber, H. J. (2000), Mathematical Methods for Physicists (5th ed.), Boston, Massachusetts: Academic Press, ISBN 978-0-12-059825-0.
atis (2013), ATIS Telecom Glossary, archived from the original on 2013-03-13
Bracewell, R. N. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw-Hill, Bibcode:1986ftia.book.....B.
Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), McGraw-Hill.
Córdoba, A. (1988), "La formule sommatoire de Poisson", Comptes Rendus de l'Académie des Sciences, Série I, 306: 373–376.
Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume II, Wiley-Interscience.
Davis, Howard Ted; Thomson, Kendall T (2000), Linear algebra and linear operators in engineering with applications in Mathematica, Academic Press, ISBN 978-0-12-206349-7
Dieudonné, Jean (1976), Treatise on analysis. Vol. II, New York: Academic Press [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-215502-4, MR 0530406.
Dieudonné, Jean (1972), Treatise on analysis. Vol. III, Boston, Massachusetts: Academic Press, MR 0350769
Dirac, Paul (1930), The Principles of Quantum Mechanics (1st ed.), Oxford University Press.
Driggers, Ronald G. (2003), Encyclopedia of Optical Engineering, CRC Press, Bibcode:2003eoe..book.....D, ISBN 978-0-8247-0940-2.
Duistermaat, Hans; Kolk (2010), Distributions: Theory and applications, Springer.
Federer, Herbert (1969), Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, vol. 153, New York: Springer-Verlag, pp. xiv+676, ISBN 978-3-540-60656-7, MR 0257325.
Gannon, Terry (2008), "Vertex operator algebras", Princeton Companion to Mathematics, Princeton University Press, ISBN 978-1400830398.
Gelfand, I. M.; Shilov, G. E. (1966–1968), Generalized functions, vol. 1–5, Academic Press, ISBN 9781483262246.
Hartmann, William M. (1997), Signals, sound, and sensation, Springer, ISBN 978-1-56396-283-7.
Hazewinkel, Michiel (1995). Encyclopaedia of Mathematics (set). Springer Science & Business Media. ISBN 978-1-55608-010-4.
Hazewinkel, Michiel (2011). Encyclopaedia of mathematics. Vol. 10. Springer. ISBN 978-90-481-4896-7. OCLC 751862625.
Hewitt, E; Stromberg, K (1963), Real and abstract analysis, Springer-Verlag.
Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 978-3-540-12104-6, MR 0717035.
Isham, C. J. (1995), Lectures on quantum theory: mathematical and structural foundations, Imperial College Press, Bibcode:1995lqtm.book.....I, ISBN 978-81-7764-190-5.
John, Fritz (1955), Plane waves and spherical means applied to partial differential equations, Interscience Publishers, New York-London, MR 0075429. Reprinted, Dover Publications, 2004, ISBN 9780486438047.
Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4757-2698-5, ISBN 978-0-387-94841-6, MR 1476913.
Lange, Rutger-Jan (2012), "Potential theory, path integrals and the Laplacian of the indicator", Journal of High Energy Physics, 2012 (11): 29–30, arXiv:1302.0864, Bibcode:2012JHEP...11..032L, doi:10.1007/JHEP11(2012)032, S2CID 56188533.
Laugwitz, D. (1989), "Definite values of infinite sums: aspects of the foundations of infinitesimal analysis around 1820", Arch. Hist. Exact Sci., 39 (3): 195–245, doi:10.1007/BF00329867, S2CID 120890300.
Levin, Frank S. (2002), "Coordinate-space wave functions and completeness", An introduction to quantum theory, Cambridge University Press, pp. 109ff, ISBN 978-0-521-59841-5
Li, Y. T.; Wong, R. (2008), "Integral and series representations of the Dirac delta function", Commun. Pure Appl. Anal., 7 (2): 229–247, arXiv:1303.1943, doi:10.3934/cpaa.2008.7.229, MR 2373214, S2CID 119319140.
de la Madrid Modino, R. (2001). Quantum mechanics in rigged Hilbert space language (PhD thesis). Universidad de Valladolid.
de la Madrid, R.; Bohm, A.; Gadella, M. (2002), "Rigged Hilbert Space Treatment of Continuous Spectrum", Fortschr. Phys., 50 (2): 185–216, arXiv:quant-ph/0109154, Bibcode:2002ForPh..50..185D, doi:10.1002/1521-3978(200203)50:2<185::AID-PROP185>3.0.CO;2-S, S2CID 9407651.
McMahon, D. (2005-11-22), "An Introduction to State Space" (PDF), Quantum Mechanics Demystified, A Self-Teaching Guide, Demystified Series, New York: McGraw-Hill, p. 108, ISBN 978-0-07-145546-6, retrieved 2008-03-17.
van der Pol, Balth.; Bremmer, H. (1987), Operational calculus (3rd ed.), New York: Chelsea Publishing Co., ISBN 978-0-8284-0327-6, MR 0904873.
Rudin, Walter (1966). Devine, Peter R. (ed.). Real and complex analysis (3rd ed.). New York: McGraw-Hill (published 1987). ISBN 0-07-100276-6.
Rudin, Walter (1991), Functional Analysis (2nd ed.), McGraw-Hill, ISBN 978-0-07-054236-5.
Vallée, Olivier; Soares, Manuel (2004), Airy functions and applications to physics, London: Imperial College Press, ISBN 9781911299486.
Saichev, A I; Woyczyński, Wojbor Andrzej (1997), "Chapter1: Basic definitions and operations", Distributions in the Physical and Engineering Sciences: Distributional and fractal calculus, integral transforms, and wavelets, Birkhäuser, ISBN 978-0-8176-3924-2
Schwartz, L. (1950), Théorie des distributions, vol. 1, Hermann.
Schwartz, L. (1951), Théorie des distributions, vol. 2, Hermann.
Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 978-0-691-08078-9.
Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 978-0-8493-8273-4.
Vladimirov, V. S. (1971), Equations of mathematical physics, Marcel Dekker, ISBN 978-0-8247-1713-1.
Weisstein, Eric W. "Delta Function". MathWorld.
Yamashita, H. (2006), "Pointwise analysis of scalar fields: A nonstandard approach", Journal of Mathematical Physics, 47 (9): 092301, Bibcode:2006JMP....47i2301Y, doi:10.1063/1.2339017
Yamashita, H. (2007), "Comment on "Pointwise analysis of scalar fields: A nonstandard approach" [J. Math. Phys. 47, 092301 (2006)]", Journal of Mathematical Physics, 48 (8): 084101, Bibcode:2007JMP....48h4101Y, doi:10.1063/1.2771422
== External links ==
Media related to Dirac distribution at Wikimedia Commons
"Delta-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
KhanAcademy.org video lesson
The Dirac Delta function, a tutorial on the Dirac delta function.
Video Lectures – Lecture 23, a lecture by Arthur Mattuck.
The Dirac delta measure is a hyperfunction
We show the existence of a unique solution and analyze a finite element approximation when the source term is a Dirac delta measure
Non-Lebesgue measures on R. Lebesgue-Stieltjes measure, Dirac delta measure. Archived 2008-03-07 at the Wayback Machine | Wikipedia/Delta_function |
The Schrödinger equation is a partial differential equation that governs the wave function of a non-relativistic quantum-mechanical system.: 1–2 Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, an Austrian physicist, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933.
Conceptually, the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of the wave function, the quantum-mechanical characterization of an isolated physical system. The equation was postulated by Schrödinger based on a postulate of Louis de Broglie that all matter has an associated matter wave. The equation predicted bound states of the atom in agreement with experimental observations.: II:268
The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics".
The equation given by Schrödinger is nonrelativistic because it contains a first derivative in time and a second derivative in space, and therefore space and time are not on equal footing. Paul Dirac incorporated special relativity and quantum mechanics into a single formulation that simplifies to the Schrödinger equation in the non-relativistic limit. This is the Dirac equation, which contains a single derivative in both space and time. Another partial differential equation, the Klein–Gordon equation, led to a problem with probability density even though it was a relativistic wave equation. The probability density could be negative, which is physically unviable. This was fixed by Dirac by taking the so-called square root of the Klein–Gordon operator and in turn introducing Dirac matrices. In a modern context, the Klein–Gordon equation describes spin-less particles, while the Dirac equation describes spin-1/2 particles.
== Definition ==
=== Preliminaries ===
Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (x,t)=\left[-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x,t)\right]\Psi (x,t).}
Here, {\displaystyle \Psi (x,t)} is a wave function, a function that assigns a complex number to each point {\displaystyle x} at each time {\displaystyle t}. The parameter {\displaystyle m} is the mass of the particle, and {\displaystyle V(x,t)} is the potential that represents the environment in which the particle exists.: 74 The constant {\displaystyle i} is the imaginary unit, and {\displaystyle \hbar } is the reduced Planck constant, which has units of action (energy multiplied by time).: 10
Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac, David Hilbert, John von Neumann, and Hermann Weyl defines the state of a quantum mechanical system to be a vector {\displaystyle |\psi \rangle } belonging to a separable complex Hilbert space {\displaystyle {\mathcal {H}}}. This vector is postulated to be normalized under the Hilbert space's inner product, that is, in Dirac notation it obeys {\displaystyle \langle \psi |\psi \rangle =1}. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of square-integrable functions {\displaystyle L^{2}}, while the Hilbert space for the spin of a single proton is the two-dimensional complex vector space {\displaystyle \mathbb {C} ^{2}} with the usual inner product.: 322
Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are self-adjoint operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue {\displaystyle \lambda } is non-degenerate and the probability is given by {\displaystyle |\langle \lambda |\psi \rangle |^{2}}, where {\displaystyle |\lambda \rangle } is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by {\displaystyle \langle \psi |P_{\lambda }|\psi \rangle }, where {\displaystyle P_{\lambda }} is the projector onto its associated eigenspace.
A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes regard these eigenstates, composed of elements outside the Hilbert space, as "generalized eigenvectors". These are used for calculational convenience and do not represent physical states. Thus, a position-space wave function $\Psi(x,t)$ as used above can be written as the inner product of a time-dependent state vector $|\Psi(t)\rangle$ with unphysical but convenient "position eigenstates" $|x\rangle$:
$$ \Psi (x,t)=\langle x|\Psi (t)\rangle . $$
=== Time-dependent equation ===
The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:
$$ i\hbar {\frac {d}{dt}}|\Psi (t)\rangle ={\hat {H}}|\Psi (t)\rangle , $$
where $t$ is time, $|\Psi(t)\rangle$ is the state vector of the quantum system ($\Psi$ being the Greek letter psi), and $\hat{H}$ is an observable, the Hamiltonian operator.
The term "Schrödinger equation" can refer either to the general equation or to the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory).
To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function. For example, given a wave function in position space $\Psi(x,t)$ as above, we have
$$ \Pr(x,t)=|\Psi (x,t)|^{2}. $$
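As a concrete numerical illustration of the Born rule (not from the original article), the following sketch normalizes a Gaussian wave packet on a grid and integrates $|\Psi|^{2}$ over an interval; the packet width, wave number, and interval are arbitrary assumptions.

```python
import numpy as np

# Discretize position space; grid bounds and packet parameters are arbitrary choices.
x = np.linspace(-10.0, 10.0, 2001)

# A Gaussian wave packet (unnormalized), then normalized so that the
# total probability, the integral of |Psi|^2, equals 1.
psi = np.exp(-x**2 / 2.0) * np.exp(1j * 1.5 * x)   # plane-wave factor adds momentum
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

prob_density = np.abs(psi)**2                      # Born rule: Pr(x) = |Psi(x)|^2
total = np.trapz(prob_density, x)                  # should be ~1.0
mask = (x > -1) & (x < 1)
in_interval = np.trapz(prob_density[mask], x[mask])

print(f"total probability = {total:.6f}")
print(f"P(-1 < x < 1)     = {in_interval:.6f}")
```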
=== Time-independent equation ===
The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. These states are particularly important, as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation:
$$ {\hat {H}}|\psi \rangle =E|\psi \rangle , $$
where $E$ is the energy of the system. This form is only used when the Hamiltonian itself does not depend on time explicitly. However, even in this case the total wave function is time-dependent, as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation: the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) $E$.
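Because the time-independent equation is an eigenvalue equation, it can be attacked numerically by diagonalizing a discretized Hamiltonian. The sketch below does this for a harmonic potential in units where $\hbar = m = \omega = 1$; the unit convention and grid size are assumptions made for brevity.

```python
import numpy as np

# Work in units where hbar = m = omega = 1 (an illustrative assumption).
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Finite-difference kinetic term -(1/2) d^2/dx^2 as a tridiagonal matrix.
T = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))
V = np.diag(0.5 * x**2)          # harmonic potential

H = T + V                        # discretized Hamiltonian
E, psi = np.linalg.eigh(H)       # eigenvalue equation: H psi = E psi

print(E[:4])   # should approximate 0.5, 1.5, 2.5, 3.5
```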
== Properties ==
=== Linearity ===
The Schrödinger equation is a linear differential equation, meaning that if two state vectors $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$ are solutions, then so is any linear combination
$$ |\psi \rangle =a|\psi _{1}\rangle +b|\psi _{2}\rangle $$
of the two state vectors, where $a$ and $b$ are any complex numbers. Moreover, the sum can be extended for any number of state vectors. This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over a basis of states. A choice often employed is the basis of energy eigenstates, which are solutions of the time-independent Schrödinger equation. In this basis, a time-dependent state vector $|\Psi(t)\rangle$ can be written as the linear combination
$$ |\Psi (t)\rangle =\sum _{n}A_{n}e^{{-iE_{n}t}/\hbar }|\psi _{E_{n}}\rangle , $$
where $A_{n}$ are complex numbers and the vectors $|\psi_{E_{n}}\rangle$ are solutions of the time-independent equation ${\hat {H}}|\psi _{E_{n}}\rangle =E_{n}|\psi _{E_{n}}\rangle$.
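A minimal numerical sketch of this expansion, using a hypothetical two-level Hamiltonian with $\hbar = 1$ (both assumptions made purely for illustration): the initial state is expanded in energy eigenstates, and each coefficient picks up the phase $e^{-iE_{n}t}$.

```python
import numpy as np

# A made-up two-level Hamiltonian in units with hbar = 1.
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])
E, vecs = np.linalg.eigh(H)          # columns of vecs are the eigenstates |psi_{E_n}>

psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state |Psi(0)>
A = vecs.conj().T @ psi0                     # coefficients A_n = <psi_{E_n}|Psi(0)>

def psi_t(t):
    # |Psi(t)> = sum_n A_n exp(-i E_n t) |psi_{E_n}>
    return vecs @ (A * np.exp(-1j * E * t))

for t in (0.0, 1.0, 2.0):
    p = psi_t(t)
    print(t, np.abs(p)**2, np.vdot(p, p).real)  # populations oscillate; norm stays 1
```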
=== Unitarity ===
Holding the Hamiltonian $\hat{H}$ constant, the Schrödinger equation has the solution
$$ |\Psi (t)\rangle =e^{-i{\hat {H}}t/\hbar }|\Psi (0)\rangle . $$
The operator ${\hat {U}}(t)=e^{-i{\hat {H}}t/\hbar }$ is known as the time-evolution operator, and it is unitary: it preserves the inner product between vectors in the Hilbert space. Unitarity is a general feature of time evolution under the Schrödinger equation. If the initial state is $|\Psi(0)\rangle$, then the state at a later time $t$ will be given by
$$ |\Psi (t)\rangle ={\hat {U}}(t)|\Psi (0)\rangle $$
for some unitary operator $\hat{U}(t)$. Conversely, suppose that $\hat{U}(t)$ is a continuous family of unitary operators parameterized by $t$. Without loss of generality, the parameterization can be chosen so that $\hat{U}(0)$ is the identity operator and that ${\hat {U}}(t/N)^{N}={\hat {U}}(t)$ for any $N > 0$. Then $\hat{U}(t)$ depends upon the parameter $t$ in such a way that
$$ {\hat {U}}(t)=e^{-i{\hat {G}}t} $$
for some self-adjoint operator $\hat{G}$, called the generator of the family $\hat{U}(t)$. A Hamiltonian is just such a generator (up to the factor of the Planck constant that would be set to 1 in natural units).
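The following sketch (an illustration not taken from the article, with $\hbar = 1$ assumed) builds the time-evolution operator for a small random Hermitian matrix using scipy.linalg.expm and checks both unitarity and the composition property $\hat{U}(t/N)^{N} = \hat{U}(t)$ used above.

```python
import numpy as np
from scipy.linalg import expm

# Random Hermitian Hamiltonian (hbar = 1 assumed); any Hermitian matrix works.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2

t = 0.7
U = expm(-1j * H * t)                       # time-evolution operator U(t)

# Unitarity: U^dagger U = I, so inner products are preserved.
print(np.allclose(U.conj().T @ U, np.eye(4)))

# Composition property from the text: U(t/N)^N = U(t).
N = 5
print(np.allclose(np.linalg.matrix_power(expm(-1j * H * t / N), N), U))
```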
To see that the generator is Hermitian, note that with ${\hat {U}}(\delta t)\approx {\hat {U}}(0)-i{\hat {G}}\delta t$, we have
$$ {\hat {U}}(\delta t)^{\dagger }{\hat {U}}(\delta t)\approx ({\hat {U}}(0)^{\dagger }+i{\hat {G}}^{\dagger }\delta t)({\hat {U}}(0)-i{\hat {G}}\delta t)=I+i\delta t({\hat {G}}^{\dagger }-{\hat {G}})+O(\delta t^{2}), $$
so $\hat{U}(t)$ is unitary only if, to first order, its derivative is Hermitian.
=== Changes of basis ===
The Schrödinger equation is often presented using quantities varying as functions of position, but as a vector-operator equation it has a valid representation in any arbitrary complete basis of kets in Hilbert space. As mentioned above, "bases" that lie outside the physical Hilbert space are also employed for calculational purposes. This is illustrated by the position-space and momentum-space Schrödinger equations for a nonrelativistic, spinless particle. The Hilbert space for such a particle is the space of complex square-integrable functions on three-dimensional Euclidean space, and its Hamiltonian is the sum of a kinetic-energy term that is quadratic in the momentum operator and a potential-energy term:
$$ i\hbar {\frac {d}{dt}}|\Psi (t)\rangle =\left({\frac {1}{2m}}{\hat {p}}^{2}+{\hat {V}}\right)|\Psi (t)\rangle . $$
Writing $\mathbf{r}$ for a three-dimensional position vector and $\mathbf{p}$ for a three-dimensional momentum vector, the position-space Schrödinger equation is
$$ i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t). $$
The momentum-space counterpart involves the Fourier transforms of the wave function and the potential:
$$ i\hbar {\frac {\partial }{\partial t}}{\tilde {\Psi }}(\mathbf {p} ,t)={\frac {\mathbf {p} ^{2}}{2m}}{\tilde {\Psi }}(\mathbf {p} ,t)+(2\pi \hbar )^{-3/2}\int d^{3}\mathbf {p} '\,{\tilde {V}}(\mathbf {p} -\mathbf {p} '){\tilde {\Psi }}(\mathbf {p} ',t). $$
The functions $\Psi(\mathbf{r},t)$ and $\tilde{\Psi}(\mathbf{p},t)$ are derived from $|\Psi(t)\rangle$ by
$$ \Psi (\mathbf {r} ,t)=\langle \mathbf {r} |\Psi (t)\rangle ,\qquad {\tilde {\Psi }}(\mathbf {p} ,t)=\langle \mathbf {p} |\Psi (t)\rangle , $$
where $|\mathbf{r}\rangle$ and $|\mathbf{p}\rangle$ do not belong to the Hilbert space itself, but have well-defined inner products with all elements of that space.
When restricted from three dimensions to one, the position-space equation is just the first form of the Schrödinger equation given above. The relation between position and momentum in quantum mechanics can be appreciated in a single dimension. In canonical quantization, the classical variables $x$ and $p$ are promoted to self-adjoint operators $\hat{x}$ and $\hat{p}$ that satisfy the canonical commutation relation
$$ [{\hat {x}},{\hat {p}}]=i\hbar . $$
This implies that
$$ \langle x|{\hat {p}}|\Psi \rangle =-i\hbar {\frac {d}{dx}}\Psi (x), $$
so the action of the momentum operator $\hat{p}$ in the position-space representation is $-i\hbar {\frac {d}{dx}}$. Thus, $\hat{p}^{2}$ becomes a second derivative, and in three dimensions, the second derivative becomes the Laplacian $\nabla^{2}$.
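As a quick numerical check (an illustrative sketch with natural units $\hbar = 1$ and arbitrary packet parameters), applying $-i\hbar\,d/dx$ by finite differences to a Gaussian times $e^{ikx}$ reproduces the analytic derivative:

```python
import numpy as np

hbar = 1.0                       # natural units, an illustrative assumption
x = np.linspace(-10, 10, 4001)
psi = np.exp(-x**2 / 2) * np.exp(1j * 2.0 * x)   # Gaussian times plane wave, k = 2

# Momentum operator in the position representation: -i hbar d/dx,
# here applied with a central finite difference.
p_psi = -1j * hbar * np.gradient(psi, x)

# Analytic derivative of the same wavefunction for comparison:
# d/dx [e^{-x^2/2} e^{2ix}] = (-x + 2i) psi.
p_psi_exact = -1j * hbar * (-x + 2.0j) * psi

print(np.max(np.abs(p_psi - p_psi_exact)))   # small discretization error
```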
The canonical commutation relation also implies that the position and momentum operators are Fourier conjugates of each other. Consequently, functions originally defined in terms of their position dependence can be converted to functions of momentum using the Fourier transform. In solid-state physics, the Schrödinger equation is often written for functions of momentum, as Bloch's theorem ensures the periodic crystal lattice potential couples $\tilde{\Psi}(p)$ with $\tilde{\Psi}(p + \hbar K)$ for only discrete reciprocal lattice vectors $K$. This makes it convenient to solve the momentum-space Schrödinger equation at each point in the Brillouin zone independently of the other points in the Brillouin zone.
=== Probability current ===
The Schrödinger equation is consistent with local probability conservation. It also ensures that a normalized wavefunction remains normalized after time evolution. In matrix mechanics, this means that the time evolution operator is a unitary operator. By contrast, for the Klein–Gordon equation, although a redefined inner product of a wavefunction can be time-independent, the total volume integral of the modulus squared of the wavefunction need not be time-independent.
The continuity equation for probability in non-relativistic quantum mechanics is stated as
$$ {\frac {\partial }{\partial t}}\rho \left(\mathbf {r} ,t\right)+\nabla \cdot \mathbf {j} =0, $$
where
$$ \mathbf {j} ={\frac {1}{2m}}\left(\Psi ^{*}{\hat {\mathbf {p} }}\Psi -\Psi {\hat {\mathbf {p} }}\Psi ^{*}\right)=-{\frac {i\hbar }{2m}}(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*})={\frac {\hbar }{m}}\operatorname {Im} (\psi ^{*}\nabla \psi ) $$
is the probability current or probability flux (flow per unit area).
If the wavefunction is represented as
$$ \psi (\mathbf {x} ,t)={\sqrt {\rho (\mathbf {x} ,t)}}\exp \left({\frac {iS(\mathbf {x} ,t)}{\hbar }}\right), $$
where $S(\mathbf{x},t)$ is a real function which represents the complex phase of the wavefunction, then the probability flux is calculated as
$$ \mathbf {j} ={\frac {\rho \nabla S}{m}}. $$
Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. Although the term ${\frac {\nabla S}{m}}$ appears to play the role of velocity, it does not represent velocity at a point, since simultaneous measurement of position and velocity violates the uncertainty principle.
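A short sketch comparing the two expressions for the probability current on a Gaussian packet with a linear phase $S = \hbar k x$; the natural units and packet parameters are arbitrary assumptions:

```python
import numpy as np

hbar, m = 1.0, 1.0               # natural units (assumption)
k = 1.5                          # wave number giving the packet its momentum
x = np.linspace(-10, 10, 4001)

rho = np.exp(-x**2) / np.sqrt(np.pi)      # normalized Gaussian density
S = hbar * k * x                          # phase S(x); psi = sqrt(rho) e^{iS/hbar}
psi = np.sqrt(rho) * np.exp(1j * S / hbar)

# j = (hbar/m) Im(psi* dpsi/dx), evaluated with finite differences...
j_direct = (hbar / m) * np.imag(np.conj(psi) * np.gradient(psi, x))

# ...and j = rho (dS/dx) / m from the polar decomposition.
j_polar = rho * np.gradient(S, x) / m

print(np.max(np.abs(j_direct - j_polar)))   # agree up to discretization error
```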
=== Separation of variables ===
If the Hamiltonian is not an explicit function of time, Schrödinger's equation reads
$$ i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=\left[-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {r} )\right]\Psi (\mathbf {r} ,t). $$
The operator on the left side depends only on time; the one on the right side depends only on space.
Solving the equation by separation of variables means seeking a solution of the form of a product of spatial and temporal parts
$$ \Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )\tau (t), $$
where $\psi(\mathbf{r})$ is a function only of the spatial coordinate(s) of the particle(s) constituting the system, and $\tau(t)$ is a function of time only. Substituting this expression for $\Psi$ into the time-dependent left-hand side shows that $\tau(t)$ is a phase factor:
$$ \Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )e^{-i{Et/\hbar }}. $$
A solution of this type is called stationary, since the only time dependence is a phase factor that cancels when the probability density is calculated via the Born rule.
The spatial part of the full wave function solves the equation
$$ \nabla ^{2}\psi (\mathbf {r} )+{\frac {2m}{\hbar ^{2}}}\left[E-V(\mathbf {r} )\right]\psi (\mathbf {r} )=0, $$
where the energy $E$ appears in the phase factor.
This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-independent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is an example of the spectral theorem, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.
Separation of variables can also be a useful method for the time-independent Schrödinger equation. For example, depending on the symmetry of the problem, the Cartesian axes might be separated, as in
$$ \psi (\mathbf {r} )=\psi _{x}(x)\psi _{y}(y)\psi _{z}(z), $$
or radial and angular coordinates might be separated:
$$ \psi (\mathbf {r} )=\psi _{r}(r)\psi _{\theta }(\theta )\psi _{\phi }(\phi ). $$
== Examples ==
=== Particle in a box ===
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain region and infinite potential energy outside. For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written
$$ -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi . $$
With the differential operator defined by
$$ {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}, $$
the previous equation is evocative of the classic kinetic energy analogue,
$$ {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E, $$
with state $\psi$ in this case having energy $E$ coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are
$$ \psi (x)=Ae^{ikx}+Be^{-ikx},\qquad E={\frac {\hbar ^{2}k^{2}}{2m}}, $$
or, from Euler's formula,
$$ \psi (x)=C\sin(kx)+D\cos(kx). $$
The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x = 0$ and $x = L$, where $\psi$ must be zero. Thus, at $x = 0$,
$$ \psi (0)=0=C\sin(0)+D\cos(0)=D, $$
and $D = 0$. At $x = L$,
$$ \psi (L)=0=C\sin(kL), $$
in which $C$ cannot be zero, as this would conflict with the postulate that $\psi$ has norm 1. Therefore, since $\sin(kL) = 0$, $kL$ must be an integer multiple of $\pi$:
$$ k={\frac {n\pi }{L}},\qquad n=1,2,3,\ldots . $$
This constraint on $k$ implies a constraint on the energy levels, yielding
$$ E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}. $$
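Plugging numbers into this formula is straightforward; the sketch below evaluates $E_{n}$ for an electron in a box, where the 1 nm width is an illustrative choice rather than a value from the article. For that width the ground-state energy comes out near 0.38 eV.

```python
import numpy as np

# Physical constants (SI); the 1 nm box width and the electron mass are
# illustrative choices for this example.
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
L = 1e-9                 # box width: 1 nm
eV = 1.602176634e-19     # J per electronvolt

n = np.arange(1, 5)
E = (hbar * np.pi * n)**2 / (2 * m_e * L**2)   # E_n = hbar^2 pi^2 n^2 / (2 m L^2)

for ni, Ei in zip(n, E):
    print(f"n = {ni}: E = {Ei / eV:.3f} eV")
```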
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.
=== Harmonic oscillator ===
The Schrödinger equation for this situation is
$$ E\psi =-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}\psi +{\frac {1}{2}}m\omega ^{2}x^{2}\psi , $$
where $x$ is the displacement and $\omega$ the angular frequency. Furthermore, it can be used to describe approximately a wide variety of other systems, including vibrating atoms, molecules, and atoms or ions in lattices, and approximating other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics.
The solutions in position space are
$$ \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\ \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\ e^{-{\frac {m\omega x^{2}}{2\hbar }}}\ {\mathcal {H}}_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right), $$
where $n \in \{0, 1, 2, \ldots\}$, and the functions $\mathcal{H}_{n}$ are the Hermite polynomials of order $n$. The solution set may be generated by
$$ \psi _{n}(x)={\frac {1}{\sqrt {n!}}}\left({\sqrt {\frac {m\omega }{2\hbar }}}\right)^{n}\left(x-{\frac {\hbar }{m\omega }}{\frac {d}{dx}}\right)^{n}\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}e^{\frac {-m\omega x^{2}}{2\hbar }}. $$
The eigenvalues are
$$ E_{n}=\left(n+{\frac {1}{2}}\right)\hbar \omega . $$
The case $n = 0$ is called the ground state, its energy is called the zero-point energy, and the wave function is a Gaussian.
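The closed-form solutions can be evaluated directly with SciPy's Hermite polynomials; the sketch below (natural units $\hbar = m = \omega = 1$, an assumption made for brevity) checks their orthonormality by numerical quadrature.

```python
import math
import numpy as np
from scipy.special import hermite

# Natural units hbar = m = omega = 1 (an illustrative assumption).
def psi_n(n, x):
    # psi_n(x) = (2^n n!)^{-1/2} pi^{-1/4} exp(-x^2/2) H_n(x)
    norm = np.pi**-0.25 / math.sqrt(2.0**n * math.factorial(n))
    return norm * np.exp(-x**2 / 2) * hermite(n)(x)

x = np.linspace(-10, 10, 4001)
# Orthonormality check: <psi_m|psi_n> = delta_mn by numerical quadrature.
for m in range(3):
    for n in range(3):
        overlap = np.trapz(psi_n(m, x) * psi_n(n, x), x)
        print(f"<{m}|{n}> = {overlap:+.4f}", end="  ")
    print()
```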
The harmonic oscillator, like the particle in a box, illustrates the generic feature of the Schrödinger equation that the energies of bound eigenstates are discretized.
=== Hydrogen atom ===
The Schrödinger equation for the electron in a hydrogen atom (or a hydrogen-like atom) is
$$ E\psi =-{\frac {\hbar ^{2}}{2\mu }}\nabla ^{2}\psi -{\frac {q^{2}}{4\pi \varepsilon _{0}r}}\psi , $$
where $q$ is the electron charge, $\mathbf{r}$ is the position of the electron relative to the nucleus, $r = |\mathbf{r}|$ is the magnitude of the relative position, the potential term is due to the Coulomb interaction, wherein $\varepsilon_{0}$ is the permittivity of free space, and
$$ \mu ={\frac {m_{q}m_{p}}{m_{q}+m_{p}}} $$
is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass $m_{p}$ and the electron of mass $m_{q}$. The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass is used in place of the electron mass since the electron and proton together orbit each other about a common center of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass.
The Schrödinger equation for a hydrogen atom can be solved by separation of variables. In this case, spherical polar coordinates are the most convenient. Thus,
$$ \psi (r,\theta ,\varphi )=R(r)Y_{\ell }^{m}(\theta ,\varphi )=R(r)\Theta (\theta )\Phi (\varphi ), $$
where $R$ are radial functions and $Y_{\ell }^{m}(\theta ,\varphi )$ are spherical harmonics of degree $\ell$ and order $m$. This is the only atom for which the Schrödinger equation has been solved exactly; multi-electron atoms require approximate methods. The family of solutions is
$$ \psi _{n\ell m}(r,\theta ,\varphi )={\sqrt {\left({\frac {2}{na_{0}}}\right)^{3}{\frac {(n-\ell -1)!}{2n[(n+\ell )!]}}}}e^{-r/na_{0}}\left({\frac {2r}{na_{0}}}\right)^{\ell }L_{n-\ell -1}^{2\ell +1}\left({\frac {2r}{na_{0}}}\right)\cdot Y_{\ell }^{m}(\theta ,\varphi ), $$
where
$$ a_{0}={\frac {4\pi \varepsilon _{0}\hbar ^{2}}{m_{q}q^{2}}} $$
is the Bohr radius, $L_{n-\ell -1}^{2\ell +1}(\cdots )$ are the generalized Laguerre polynomials of degree $n - \ell - 1$, and $n, \ell, m$ are the principal, azimuthal, and magnetic quantum numbers respectively, which take the values
$$ n=1,2,3,\dots ,\qquad \ell =0,1,2,\dots ,n-1,\qquad m=-\ell ,\dots ,\ell . $$
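The radial factor of these solutions can be checked numerically; the sketch below uses SciPy's generalized Laguerre polynomials with the Bohr radius set to 1 (atomic units, an illustrative choice) and verifies the normalization $\int R_{n\ell}^{2}\,r^{2}\,dr = 1$.

```python
import math
import numpy as np
from scipy.special import genlaguerre

a0 = 1.0   # Bohr radius set to 1 (atomic units, an illustrative assumption)

def R_nl(n, l, r):
    # Radial factor of psi_{nlm} as given in the text: rho = 2r/(n a0).
    rho = 2 * r / (n * a0)
    norm = math.sqrt((2 / (n * a0))**3 * math.factorial(n - l - 1)
                     / (2 * n * math.factorial(n + l)))
    return norm * np.exp(-rho / 2) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

r = np.linspace(1e-6, 60, 20000)
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    I = np.trapz(R_nl(n, l, r)**2 * r**2, r)
    print(f"n={n}, l={l}: integral of R^2 r^2 dr = {I:.6f}")   # ~1.0
```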
=== Approximate solutions ===
It is typically not possible to solve the Schrödinger equation exactly for situations of physical interest. Accordingly, approximate solutions are obtained using techniques like variational methods and WKB approximation. It is also common to treat a problem of interest as a small modification to a problem that can be solved exactly, a method known as perturbation theory.
== Semiclassical limit ==
One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics. The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential $V$, the Ehrenfest theorem says
$$ m{\frac {d}{dt}}\langle x\rangle =\langle p\rangle ;\quad {\frac {d}{dt}}\langle p\rangle =-\left\langle V'(X)\right\rangle . $$
Although the first of these equations is consistent with the classical behavior, the second is not: if the pair $(\langle X\rangle ,\langle P\rangle )$ were to satisfy Newton's second law, the right-hand side of the second equation would have to be $-V'\left(\left\langle X\right\rangle \right)$, which is typically not the same as $-\left\langle V'(X)\right\rangle$. For a general $V'$, therefore, quantum mechanics can lead to predictions where expectation values do not mimic the classical behavior. In the case of the quantum harmonic oscillator, however, $V'$ is linear and this distinction disappears, so that in this very special case, the expected position and expected momentum do exactly follow the classical trajectories.
For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point $x_{0}$, then $V'\left(\left\langle X\right\rangle \right)$ and $\left\langle V'(X)\right\rangle$ will be almost the same, since both will be approximately equal to $V'(x_{0})$. In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position.
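The difference between $\langle V'(X)\rangle$ and $V'(\langle X\rangle)$ can be made concrete with a short numerical sketch; the quartic potential and Gaussian densities below are arbitrary illustrative choices:

```python
import numpy as np

# Quartic potential V(x) = x^4, so V'(x) = 4 x^3 (an illustrative choice).
x = np.linspace(-10, 10, 4001)

def compare(x0, sigma):
    # Gaussian probability density centered at x0 with width sigma.
    rho = np.exp(-(x - x0)**2 / (2 * sigma**2))
    rho /= np.trapz(rho, x)
    mean_x = np.trapz(x * rho, x)
    mean_Vp = np.trapz(4 * x**3 * rho, x)       # <V'(X)>
    Vp_mean = 4 * mean_x**3                     # V'(<X>)
    print(f"sigma={sigma}: <V'(X)> = {mean_Vp:.4f}, V'(<X>) = {Vp_mean:.4f}")

compare(x0=1.0, sigma=1.0)    # broad packet: the two values differ noticeably
compare(x0=1.0, sigma=0.05)   # sharply localized packet: values nearly agree
```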
The Schrödinger equation in its general form
$$ i\hbar {\frac {\partial }{\partial t}}\Psi \left(\mathbf {r} ,t\right)={\hat {H}}\Psi \left(\mathbf {r} ,t\right) $$
is closely related to the Hamilton–Jacobi equation (HJE)
$$ -{\frac {\partial }{\partial t}}S(q_{i},t)=H\left(q_{i},{\frac {\partial S}{\partial q_{i}}},t\right), $$
where $S$ is the classical action and $H$ is the Hamiltonian function (not operator). Here the generalized coordinates $q_{i}$ for $i = 1, 2, 3$ (used in the context of the HJE) can be set to the position in Cartesian coordinates as $\mathbf {r} =(q_{1},q_{2},q_{3})=(x,y,z)$.
Substituting
$$ \Psi ={\sqrt {\rho (\mathbf {r} ,t)}}e^{iS(\mathbf {r} ,t)/\hbar }, $$
where $\rho$ is the probability density, into the Schrödinger equation and then taking the limit $\hbar \to 0$ in the resulting equation yields the Hamilton–Jacobi equation.
== Density matrices ==
Wave functions are not always the most convenient way to describe quantum systems and their behavior. When the preparation of a system is only imperfectly known, or when the system under investigation is a part of a larger whole, density matrices may be used instead. A density matrix is a positive semi-definite operator whose trace is equal to 1. (The term "density operator" is also used, particularly when the underlying Hilbert space is infinite-dimensional.) The set of all density matrices is convex, and the extreme points are the operators that project onto vectors in the Hilbert space. These are the density-matrix representations of wave functions; in Dirac notation, they are written
$$ {\hat {\rho }}=|\Psi \rangle \langle \Psi |. $$
The density-matrix analogue of the Schrödinger equation for wave functions is
$$ i\hbar {\frac {\partial {\hat {\rho }}}{\partial t}}=[{\hat {H}},{\hat {\rho }}], $$
where the brackets denote a commutator. This is variously known as the von Neumann equation, the Liouville–von Neumann equation, or just the Schrödinger equation for density matrices. If the Hamiltonian is time-independent, this equation can be easily solved to yield
$$ {\hat {\rho }}(t)=e^{-i{\hat {H}}t/\hbar }{\hat {\rho }}(0)e^{i{\hat {H}}t/\hbar }. $$
More generally, if the unitary operator $\hat{U}(t)$ describes wave function evolution over some time interval, then the time evolution of a density matrix over that same interval is given by
$$ {\hat {\rho }}(t)={\hat {U}}(t){\hat {\rho }}(0){\hat {U}}(t)^{\dagger }. $$
Unitary evolution of a density matrix conserves its von Neumann entropy.
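The following sketch (illustrative, with a made-up two-level density matrix and Hamiltonian, and $\hbar = 1$ assumed) evolves $\hat{\rho}$ unitarily and confirms that the von Neumann entropy stays constant:

```python
import numpy as np
from scipy.linalg import expm

# A mixed two-level density matrix and an arbitrary Hermitian Hamiltonian.
rho0 = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
H = np.array([[1.0, 0.4], [0.4, -0.5]], dtype=complex)

def entropy(rho):
    # von Neumann entropy S = -Tr(rho ln rho), from the eigenvalues of rho.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

for t in (0.0, 1.0, 5.0):
    U = expm(-1j * H * t)
    rho_t = U @ rho0 @ U.conj().T        # rho(t) = U rho(0) U^dagger
    print(f"t={t}: S = {entropy(rho_t):.6f}")   # constant under unitary evolution
```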
== Relativistic quantum physics and quantum field theory ==
The one-particle Schrödinger equation described above is valid essentially in the nonrelativistic domain. One reason is that it is essentially invariant under Galilean transformations, which form the symmetry group of Newtonian dynamics. Moreover, processes that change particle number are natural in relativity, and so an equation for one particle (or any fixed number thereof) can only be of limited use. A more general form of the Schrödinger equation that also applies in relativistic situations can be formulated within quantum field theory (QFT), a framework that allows the combination of quantum mechanics with special relativity. The region in which both simultaneously apply may be described by relativistic quantum mechanics. Such descriptions may use time evolution generated by a Hamiltonian operator, as in the Schrödinger functional method.
=== Klein–Gordon and Dirac equations ===
Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation
$$ E^{2}=(pc)^{2}+\left(m_{0}c^{2}\right)^{2}, $$
instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation,
$$ -{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\psi +\nabla ^{2}\psi ={\frac {m^{2}c^{2}}{\hbar ^{2}}}\psi , $$
was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices
$\alpha _{1},\alpha _{2},\alpha _{3},\beta$. Consequently, the wave function also became a four-component function, governed by the Dirac equation that, in free space, read
$$ \left(\beta mc^{2}+c\left(\sum _{n\mathop {=} 1}^{3}\alpha _{n}p_{n}\right)\right)\psi =i\hbar {\frac {\partial \psi }{\partial t}}. $$
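The factorization works precisely because these matrices obey $\{\alpha_{i},\alpha_{j}\} = 2\delta_{ij}I$, $\{\alpha_{i},\beta\} = 0$, and $\beta^{2} = I$. The sketch below verifies these relations for the Dirac-representation matrices (one conventional choice among several):

```python
import numpy as np

# Pauli matrices and the Dirac-representation alpha_i, beta (a standard choice).
s = [np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]
I2, Z2 = np.eye(2), np.zeros((2, 2))

alpha = [np.block([[Z2, si], [si, Z2]]) for si in s]
beta = np.block([[I2, Z2], [Z2, -I2]])

def anti(A, B):
    # Anticommutator {A, B} = AB + BA.
    return A @ B + B @ A

# {alpha_i, alpha_j} = 2 delta_ij I, {alpha_i, beta} = 0, beta^2 = I:
# exactly the conditions needed for the factorized operator to square
# back to the Klein-Gordon operator.
for i in range(3):
    for j in range(3):
        assert np.allclose(anti(alpha[i], alpha[j]), 2 * (i == j) * np.eye(4))
    assert np.allclose(anti(alpha[i], beta), np.zeros((4, 4)))
assert np.allclose(beta @ beta, np.eye(4))
print("Dirac algebra relations verified")
```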
This has again the form of the Schrödinger equation, with the time derivative of the wave function being given by a Hamiltonian operator acting upon the wave function. Including influences upon the particle requires modifying the Hamiltonian operator. For example, the Dirac Hamiltonian for a particle of mass m and electric charge q in an electromagnetic field (described by the electromagnetic potentials φ and A) is:
$$ {\hat {H}}_{\text{Dirac}}=\gamma ^{0}\left[c{\boldsymbol {\gamma }}\cdot \left({\hat {\mathbf {p} }}-q\mathbf {A} \right)+mc^{2}+\gamma ^{0}q\varphi \right], $$
in which the γ = (γ1, γ2, γ3) and γ0 are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all spin-1⁄2 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two for the antiparticle.
For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields, of which the Klein–Gordon and Dirac equations are two examples, can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass).
In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-component spinor fields.
=== Fock space ===
As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function $\Psi(x,t)$. This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space. However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways.
== History ==
Following Max Planck's quantization of light (see black-body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wave number in special relativity, it followed that the momentum $p$ of a photon is inversely proportional to its wavelength $\lambda$, or proportional to its wave number $k$:
$$ p={\frac {h}{\lambda }}=\hbar k, $$
where $h$ is the Planck constant and $\hbar = h/2\pi$ is the reduced Planck constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed.
These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum $L$ according to
$$ L=n{\frac {h}{2\pi }}=n\hbar . $$
According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit:
$$ n\lambda =2\pi r. $$
This approach essentially confined the electron wave in one dimension, along a circular orbit of radius $r$.
In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation. Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation and solve for its energy eigenvalues for the hydrogen atom; the paper was rejected by the Physical Review, according to Kamen.
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.
The equation he found is
$$ i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t). $$
By that time Arnold Sommerfeld had refined the Bohr model with relativistic corrections. Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):
$$ \left(E+{\frac {e^{2}}{r}}\right)^{2}\psi (x)=-\nabla ^{2}\psi (x)+m^{2}\psi (x). $$
He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself with a mistress in a mountain cabin in December 1925.
While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl), Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926. Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave $\Psi(\mathbf{x},t)$, moving in a potential well $V$ created by the proton. This computation accurately reproduced the energy levels of the Bohr model.
The Schrödinger equation details the behavior of $\Psi$ but says nothing of its nature. Schrödinger tried to interpret the real part of $\Psi {\frac {\partial \Psi ^{*}}{\partial t}}$ as a charge density, and then revised this proposal, saying in his next paper that the modulus squared of $\Psi$ is a charge density. This approach was, however, unsuccessful. In 1926, just a few days after this paper was published, Max Born successfully interpreted $\Psi$ as the probability amplitude, whose modulus squared is equal to probability density. Later, Schrödinger himself explained this interpretation as follows:
The already ... mentioned psi-function.... is now the means for predicting probability of measurement results. In it is embodied the momentarily attained sum of theoretically based future expectation, somewhat as laid down in a catalog.
== Interpretation ==
The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. The meaning of the Schrödinger equation and how the mathematical entities in it relate to physical reality depends upon the interpretation of quantum mechanics that one adopts.
In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule. Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort.
Schrödinger himself suggested in 1952 that the different terms of a superposition evolving under the Schrödinger equation are "not alternatives but all really happen simultaneously". This has been interpreted as an early version of Everett's many-worlds interpretation. This interpretation, formulated independently in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This interpretation removes the axiom of wave function collapse, leaving only continuous evolution under the Schrödinger equation, and so all possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why should we assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule? Several ways to answer these questions in the many-worlds framework have been proposed, but there is no consensus on whether they are successful.
Bohmian mechanics reformulates quantum mechanics to make it deterministic, at the price of adding a force due to a "quantum potential". It attributes to each physical system not only a wave function but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation.
== See also ==
== Notes ==
== References ==
== External links ==
"Schrödinger equation". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Quantum Cook Book (PDF) and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware
The Modern Revolution in Physics – an online textbook.
Quantum Physics I at MIT OpenCourseWare
In physics and materials science, elasticity is the ability of a body to resist a distorting influence and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate loads are applied to them; if the material is elastic, the object will return to its initial shape and size after removal. This is in contrast to plasticity, in which the object fails to do so and instead remains in its deformed state.
The physical reasons for elastic behavior can be quite different for different materials. In metals, the atomic lattice changes size and shape when forces are applied (energy is added to the system). When forces are removed, the lattice goes back to the original lower energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied.
Hooke's law states that the force required to deform elastic objects should be directly proportional to the distance of deformation, regardless of how large that distance becomes. This is known as perfect elasticity, in which a given object will return to its original shape no matter how strongly it is deformed. This is an ideal concept only; most materials which possess elasticity in practice remain purely elastic only up to very small deformations, after which plastic (permanent) deformation occurs.
In engineering, the elasticity of a material is quantified by the elastic modulus such as the Young's modulus, bulk modulus or shear modulus which measure the amount of stress needed to achieve a unit of strain; a higher modulus indicates that the material is harder to deform. The SI unit of this modulus is the pascal (Pa). The material's elastic limit or yield strength is the maximum stress that can arise before the onset of plastic deformation. Its SI unit is also the pascal (Pa).
== Overview ==
When an elastic material is deformed due to an external force, it experiences internal resistance to the deformation and restores it to its original state if the external force is no longer applied. There are various elastic moduli, such as Young's modulus, the shear modulus, and the bulk modulus, all of which are measures of the inherent elastic properties of a material as a resistance to deformation under an applied load. The various moduli apply to different kinds of deformation. For instance, Young's modulus applies to extension/compression of a body, whereas the shear modulus applies to its shear. Young's modulus and shear modulus are only for solids, whereas the bulk modulus is for solids, liquids, and gases.
The elasticity of materials is described by a stress–strain curve, which shows the relation between stress (the average restorative internal force per unit area) and strain (the relative deformation). The curve is generally nonlinear, but it can (by use of a Taylor series) be approximated as linear for sufficiently small deformations (in which higher-order terms are negligible). If the material is isotropic, the linearized stress–strain relationship is called Hooke's law, which is often presumed to apply up to the elastic limit for most metals or crystalline materials whereas nonlinear elasticity is generally required to model large deformations of rubbery materials even in the elastic range. For even higher stresses, materials exhibit plastic behavior, that is, they deform irreversibly and do not return to their original shape after stress is no longer applied. For rubber-like materials such as elastomers, the slope of the stress–strain curve increases with stress, meaning that rubbers progressively become more difficult to stretch, while for most metals, the gradient decreases at very high stresses, meaning that they progressively become easier to stretch. Elasticity is not exhibited only by solids; non-Newtonian fluids, such as viscoelastic fluids, will also exhibit elasticity in certain conditions quantified by the Deborah number. In response to a small, rapidly applied and removed strain, these fluids may deform and then return to their original shape. Under larger strains, or strains applied for longer periods of time, these fluids may start to flow like a viscous liquid.
Because the elasticity of a material is described in terms of a stress–strain relation, it is essential that the terms stress and strain be defined without ambiguity. Typically, two types of relation are considered. The first type deals with materials that are elastic only for small strains. The second deals with materials that are not limited to small strains. Clearly, the second type of relation is more general in the sense that it must include the first type as a special case.
For small strains, the measure of stress that is used is the Cauchy stress while the measure of strain that is used is the infinitesimal strain tensor; the resulting (predicted) material behavior is termed linear elasticity, which (for isotropic media) is called the generalized Hooke's law. Cauchy elastic materials and hypoelastic materials are models that extend Hooke's law to allow for the possibility of large rotations, large distortions, and intrinsic or induced anisotropy.
For more general situations, any of a number of stress measures can be used, and it is generally desired (but not required) that the elastic stress–strain relation be phrased in terms of a finite strain measure that is work conjugate to the selected stress measure, i.e., the time integral of the inner product of the stress measure with the rate of the strain measure should be equal to the change in internal energy for any adiabatic process that remains below the elastic limit.
== Units ==
=== International System ===
The SI unit for elasticity and the elastic modulus is the pascal (Pa). This unit is defined as force per unit area, generally a measurement of pressure, which in mechanics corresponds to stress. The pascal and therefore elasticity have the dimension L−1⋅M⋅T−2.
For most commonly used engineering materials, the elastic modulus is on the scale of gigapascals (GPa, 109 Pa).
== Linear elasticity ==
As noted above, for small deformations, most elastic materials such as springs exhibit linear elasticity and can be described by a linear relation between the stress and strain. This relationship is known as Hooke's law. A geometry-dependent version of the idea was first formulated by Robert Hooke in 1675 as a Latin anagram, "ceiiinosssttuv". He published the answer in 1678: "Ut tensio, sic vis" meaning "As the extension, so the force", a linear relationship commonly referred to as Hooke's law. This law can be stated as a relationship between tensile force F and corresponding extension displacement
$x$:
$$ F=kx, $$
where $k$ is a constant known as the rate or spring constant. It can also be stated as a relationship between stress $\sigma$ and strain $\varepsilon$:
$$ \sigma =E\varepsilon , $$
where $E$ is known as the Young's modulus.
Although the general proportionality constant between stress and strain in three dimensions is a 4th-order tensor called stiffness, systems that exhibit symmetry, such as a one-dimensional rod, can often be reduced to applications of Hooke's law.
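A worked example tying the two statements of the law together: for a uniform rod, $\sigma = E\varepsilon$ implies $F = (EA/L)x$, i.e. an effective spring constant $k = EA/L$. The material properties and geometry below are illustrative assumptions, not values from the article.

```python
# Hooke's law for a uniform rod in tension; steel-like properties and
# geometry are illustrative choices.
E = 200e9        # Young's modulus of a typical steel, Pa
A = 1e-4         # cross-sectional area, m^2 (1 cm^2)
L = 2.0          # rod length, m
F = 10e3         # applied tensile force, N

stress = F / A               # sigma = F / A
strain = stress / E          # epsilon = sigma / E  (Hooke's law)
extension = strain * L       # x = epsilon * L
k = E * A / L                # equivalent spring constant: F = k x

print(f"stress    = {stress / 1e6:.1f} MPa")
print(f"strain    = {strain:.2e}")
print(f"extension = {extension * 1e3:.3f} mm")
print(f"F/k       = {F / k * 1e3:.3f} mm (same extension via F = kx)")
```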
== Finite elasticity ==
The elastic behavior of objects that undergo finite deformations has been described using a number of models, such as Cauchy elastic material models, hypoelastic material models, and hyperelastic material models. The deformation gradient (F) is the primary deformation measure used in finite strain theory.
=== Cauchy elastic materials ===
A material is said to be Cauchy-elastic if the Cauchy stress tensor σ is a function of the deformation gradient F alone:
$$ {\boldsymbol {\sigma }}={\mathcal {G}}({\boldsymbol {F}}). $$
It is generally incorrect to state that Cauchy stress is a function of merely a strain tensor, as such a model lacks crucial information about material rotation. Consider an anisotropic medium subjected to vertical extension, and compare it with the same extension applied horizontally and then rotated by 90 degrees: both these deformations have the same spatial strain tensors, yet they must produce different values of the Cauchy stress tensor.
Even though the stress in a Cauchy-elastic material depends only on the state of deformation, the work done by stresses might depend on the path of deformation. Therefore, Cauchy elasticity includes non-conservative "non-hyperelastic" models (in which work of deformation is path dependent) as well as conservative "hyperelastic material" models (for which stress can be derived from a scalar "elastic potential" function).
=== Hypoelastic materials ===
A hypoelastic material can be rigorously defined as one that is modeled using a constitutive equation satisfying the following two criteria:
The Cauchy stress ${\boldsymbol {\sigma }}$ at time $t$ depends only on the order in which the body has occupied its past configurations, but not on the time rate at which these past configurations were traversed. As a special case, this criterion includes a Cauchy elastic material, for which the current stress depends only on the current configuration rather than the history of past configurations.
There is a tensor-valued function $G$ such that
$$ {\dot {\boldsymbol {\sigma }}}=G({\boldsymbol {\sigma }},{\boldsymbol {L}}), $$
in which ${\dot {\boldsymbol {\sigma }}}$ is the material rate of the Cauchy stress tensor, and ${\boldsymbol {L}}$ is the spatial velocity gradient tensor.
If only these two original criteria are used to define hypoelasticity, then hyperelasticity would be included as a special case, which prompts some constitutive modelers to append a third criterion that specifically requires a hypoelastic model to not be hyperelastic (i.e., hypoelasticity implies that stress is not derivable from an energy potential). If this third criterion is adopted, it follows that a hypoelastic material might admit nonconservative adiabatic loading paths that start and end with the same deformation gradient but do not start and end at the same internal energy.
Note that the second criterion requires only that the function $G$ exists. As detailed in the main hypoelastic material article, specific formulations of hypoelastic models typically employ so-called objective rates, so that the $G$ function exists only implicitly and is typically needed explicitly only for numerical stress updates performed via direct integration of the actual (not objective) stress rate.
=== Hyperelastic materials ===
Hyperelastic materials (also called Green elastic materials) are conservative models that are derived from a strain energy density function (W). A model is hyperelastic if and only if it is possible to express the Cauchy stress tensor as a function of the deformation gradient via a relationship of the form
$$ {\boldsymbol {\sigma }}={\cfrac {1}{J}}~{\cfrac {\partial W}{\partial {\boldsymbol {F}}}}{\boldsymbol {F}}^{\textsf {T}}\quad {\text{where}}\quad J:=\det {\boldsymbol {F}}. $$
This formulation takes the energy potential (W) as a function of the deformation gradient (
F
{\displaystyle {\boldsymbol {F}}}
). By also requiring satisfaction of material objectivity, the energy potential may be alternatively regarded as a function of the Cauchy-Green deformation tensor (
C
:=
F
T
F
{\displaystyle {\boldsymbol {C}}:={\boldsymbol {F}}^{\textsf {T}}{\boldsymbol {F}}}
), in which case the hyperelastic model may be written alternatively as
σ
=
2
J
F
∂
W
∂
C
F
T
where
J
:=
det
F
.
{\displaystyle {\boldsymbol {\sigma }}={\cfrac {2}{J}}~{\boldsymbol {F}}{\cfrac {\partial W}{\partial {\boldsymbol {C}}}}{\boldsymbol {F}}^{\textsf {T}}\quad {\text{where}}\quad J:=\det {\boldsymbol {F}}\,.}
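The first of these relations can be illustrated numerically. The following is a minimal sketch (not from the article) that evaluates sigma = (1/J) dW/dF F^T by finite differences, using a compressible neo-Hookean strain energy as an assumed example; the constants mu and lam and the function names are illustrative only.

import numpy as np

mu, lam = 1.0, 2.0  # assumed shear and Lame-like parameters

def W(F):
    # compressible neo-Hookean energy (an assumed example of a scalar elastic potential)
    J = np.linalg.det(F)
    C = F.T @ F
    return 0.5 * mu * (np.trace(C) - 3.0) - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2

def cauchy_stress(F, h=1e-6):
    # dW/dF by central finite differences, then sigma = (1/J) dW/dF F^T
    dWdF = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            Fp, Fm = F.copy(), F.copy()
            Fp[i, j] += h
            Fm[i, j] -= h
            dWdF[i, j] = (W(Fp) - W(Fm)) / (2.0 * h)
    return dWdF @ F.T / np.linalg.det(F)

F = np.array([[1.1, 0.05, 0.0], [0.0, 0.95, 0.0], [0.0, 0.0, 1.02]])
print(cauchy_stress(F))  # a symmetric 3x3 Cauchy stress for this deformation gradient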
== Applications ==
Linear elasticity is used widely in the design and analysis of structures such as beams, plates and shells, and sandwich composites. This theory is also the basis of much of fracture mechanics.
Hyperelasticity is primarily used to determine the response of elastomer-based objects such as gaskets and of biological materials such as soft tissues and cell membranes.
== Factors affecting elasticity ==
In a given isotropic solid, with known theoretical elasticity for the bulk material in terms of Young's modulus, the effective elasticity will be governed by porosity. Generally, a more porous material will exhibit lower stiffness. More specifically, the fraction of pores, their distribution at different sizes and the nature of the fluid with which they are filled give rise to different elastic behaviours in solids.
For isotropic materials containing cracks, the presence of fractures affects the Young and the shear moduli perpendicular to the planes of the cracks, which decrease (Young's modulus faster than the shear modulus) as the fracture density increases, indicating that the presence of cracks makes bodies brittler. Microscopically, the stress–strain relationship of materials is in general governed by the Helmholtz free energy, a thermodynamic quantity. Molecules settle in the configuration which minimizes the free energy, subject to constraints derived from their structure, and, depending on whether the energy or the entropy term dominates the free energy, materials can broadly be classified as energy-elastic and entropy-elastic. As such, microscopic factors affecting the free energy, such as the equilibrium distance between molecules, can affect the elasticity of materials: for instance, in inorganic materials, as the equilibrium distance between molecules at 0 K increases, the bulk modulus decreases. The effect of temperature on elasticity is difficult to isolate, because there are numerous factors affecting it. For instance, the bulk modulus of a material is dependent on the form of its lattice, its behavior under expansion, as well as the vibrations of the molecules, all of which are dependent on temperature.
== See also ==
== Notes ==
== References ==
== External links ==
The Feynman Lectures on Physics Vol. II Ch. 38: Elasticity
Bessel functions, named after Friedrich Bessel who was the first to systematically study them in 1824, are canonical solutions y(x) of Bessel's differential equation
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y=0}
for an arbitrary complex number {\displaystyle \alpha }, which represents the order of the Bessel function. Although {\displaystyle \alpha } and {\displaystyle -\alpha } produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of {\displaystyle \alpha }.
The most important cases are when {\displaystyle \alpha } is an integer or half-integer. Bessel functions for integer {\displaystyle \alpha } are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer {\displaystyle \alpha } are obtained when solving the Helmholtz equation in spherical coordinates.
== Applications ==
Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2). For example:
Electromagnetic waves in a cylindrical waveguide
Pressure amplitudes of inviscid rotational flows
Heat conduction in a cylindrical object
Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory)
Diffusion problems on a lattice
Solutions to the Schrödinger equation in spherical and cylindrical coordinates for a free particle
Position space representation of the Feynman propagator in quantum field theory
Solving for patterns of acoustical radiation
Frequency-dependent friction in circular pipelines
Dynamics of floating bodies
Angular resolution
Diffraction from helical objects, including DNA
Probability density function of product of two normally distributed random variables
Analysis of the surface waves generated by microtremors, in geophysics and seismology.
Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter).
== Definitions ==
Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions: one of the first kind and one of the second kind. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections. The subscript n is typically used in place of {\displaystyle \alpha } when {\displaystyle \alpha } is known to be an integer.
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn.
=== Bessel functions of the first kind: Jα ===
Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation. For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible to define the function by
{\displaystyle x^{\alpha }} times a Maclaurin series (note that α need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation:
{\displaystyle J_{\alpha }(x)=\sum _{m=0}^{\infty }{\frac {(-1)^{m}}{m!\,\Gamma (m+\alpha +1)}}{\left({\frac {x}{2}}\right)}^{2m+\alpha },}
where Γ(z) is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by {\displaystyle 2} in {\displaystyle x/2}; this definition is not used in this article. The Bessel function of the first kind is an entire function if α is an integer; otherwise it is a multivalued function with a singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to {\displaystyle x^{-{1}/{2}}} (see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large x. (The series indicates that −J1(x) is the derivative of J0(x), much like −sin x is the derivative of cos x; more generally, the derivative of Jn(x) can be expressed in terms of Jn ± 1(x) by the identities below.)
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order n, the following relationship is valid (the gamma function has simple poles at each of the non-positive integers):
{\displaystyle J_{-n}(x)=(-1)^{n}J_{n}(x).}
This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below.
==== Bessel's integrals ====
Another definition of the Bessel function, for integer values of n, is possible using an integral representation:
{\displaystyle J_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(n\tau -x\sin \tau )\,d\tau ={\frac {1}{\pi }}\operatorname {Re} \left(\int _{0}^{\pi }e^{i(n\tau -x\sin \tau )}\,d\tau \right),}
which is also called the Hansen–Bessel formula.
This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for Re(x) > 0:
{\displaystyle J_{\alpha }(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(\alpha \tau -x\sin \tau )\,d\tau -{\frac {\sin(\alpha \pi )}{\pi }}\int _{0}^{\infty }e^{-x\sinh t-\alpha t}\,dt.}
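These integral representations lend themselves to a direct numerical check. Below is a hedged sketch (an assumed illustration, not the article's own material) that evaluates the Hansen–Bessel integral with scipy.integrate.quad and compares it with scipy.special.jv.

import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def J_from_integral(n, x):
    # J_n(x) = (1/pi) * integral_0^pi cos(n*tau - x*sin(tau)) d tau, for integer n
    val, _ = quad(lambda tau: np.cos(n * tau - x * np.sin(tau)), 0.0, np.pi)
    return val / np.pi

print(J_from_integral(2, 3.7), jv(2, 3.7))  # both print essentially the same value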
==== Relation to hypergeometric series ====
The Bessel functions can be expressed in terms of the generalized hypergeometric series as
{\displaystyle J_{\alpha }(x)={\frac {\left({\frac {x}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\;_{0}F_{1}\left(\alpha +1;-{\frac {x^{2}}{4}}\right).}
This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function.
==== Relation to Laguerre polynomials ====
In terms of the Laguerre polynomials Lk and arbitrarily chosen parameter t, the Bessel function can be expressed as
{\displaystyle {\frac {J_{\alpha }(x)}{\left({\frac {x}{2}}\right)^{\alpha }}}={\frac {e^{-t}}{\Gamma (\alpha +1)}}\sum _{k=0}^{\infty }{\frac {L_{k}^{(\alpha )}\left({\frac {x^{2}}{4t}}\right)}{\binom {k+\alpha }{k}}}{\frac {t^{k}}{k!}}.}
=== Bessel functions of the second kind: Yα ===
The Bessel functions of the second kind, denoted by Yα(x), occasionally denoted instead by Nα(x), are solutions of the Bessel differential equation that have a singularity at the origin (x = 0) and are multivalued. These are sometimes called Weber functions, as they were introduced by H. M. Weber (1873), and also Neumann functions after Carl Neumann.
For non-integer α, Yα(x) is related to Jα(x) by
{\displaystyle Y_{\alpha }(x)={\frac {J_{\alpha }(x)\cos(\alpha \pi )-J_{-\alpha }(x)}{\sin(\alpha \pi )}}.}
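As a quick numerical illustration (an assumed sketch, not from the article), the relation above for non-integer order can be compared against SciPy's built-in Yα:

import numpy as np
from scipy.special import jv, yv

alpha, x = 0.7, 3.2  # illustrative non-integer order and argument
lhs = (jv(alpha, x) * np.cos(alpha * np.pi) - jv(-alpha, x)) / np.sin(alpha * np.pi)
print(lhs, yv(alpha, x))  # the two values agree to machine precision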
In the case of integer order n, the function is defined by taking the limit as a non-integer α tends to n:
{\displaystyle Y_{n}(x)=\lim _{\alpha \to n}Y_{\alpha }(x).}
If n is a nonnegative integer, we have the series
{\displaystyle Y_{n}(z)=-{\frac {\left({\frac {z}{2}}\right)^{-n}}{\pi }}\sum _{k=0}^{n-1}{\frac {(n-k-1)!}{k!}}\left({\frac {z^{2}}{4}}\right)^{k}+{\frac {2}{\pi }}J_{n}(z)\ln {\frac {z}{2}}-{\frac {\left({\frac {z}{2}}\right)^{n}}{\pi }}\sum _{k=0}^{\infty }(\psi (k+1)+\psi (n+k+1)){\frac {\left(-{\frac {z^{2}}{4}}\right)^{k}}{k!(n+k)!}}}
where {\displaystyle \psi (z)} is the digamma function, the logarithmic derivative of the gamma function.
There is also a corresponding integral formula (for Re(x) > 0):
{\displaystyle Y_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\sin(x\sin \theta -n\theta )\,d\theta -{\frac {1}{\pi }}\int _{0}^{\infty }\left(e^{nt}+(-1)^{n}e^{-nt}\right)e^{-x\sinh t}\,dt.}
In the case where n = 0 (with {\displaystyle \gamma } being Euler's constant):
{\displaystyle Y_{0}\left(x\right)={\frac {4}{\pi ^{2}}}\int _{0}^{{\frac {1}{2}}\pi }\cos \left(x\cos \theta \right)\left(\gamma +\ln \left(2x\sin ^{2}\theta \right)\right)\,d\theta .}
Yα(x) is necessary as the second linearly independent solution of the Bessel's equation when α is an integer. But Yα(x) has more meaning than that. It can be considered as a "natural" partner of Jα(x). See also the subsection on Hankel functions below.
When α is an integer, moreover, as was similarly the case for the functions of the first kind, the following relationship is valid:
{\displaystyle Y_{-n}(x)=(-1)^{n}Y_{n}(x).}
Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane cut along the negative real axis. When α is an integer, the Bessel functions J are entire functions of x. If x is held fixed at a non-zero value, then the Bessel functions are entire functions of α.
The Bessel function of the second kind, when α is an integer, is an example of the second kind of solution in Fuchs's theorem.
=== Hankel functions: H(1)α, H(2)α ===
Another important formulation of the two linearly independent solutions to Bessel's equation are the Hankel functions of the first and second kind, H(1)α(x) and H(2)α(x), defined as
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&=J_{\alpha }(x)+iY_{\alpha }(x),\\[5pt]H_{\alpha }^{(2)}(x)&=J_{\alpha }(x)-iY_{\alpha }(x),\end{aligned}}}
where i is the imaginary unit. These linear combinations are also known as Bessel functions of the third kind; they are two linearly independent solutions of Bessel's differential equation. They are named after Hermann Hankel.
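SciPy exposes the Hankel functions directly, which makes the definitions above easy to verify; the snippet below is an assumed illustration rather than part of the article.

from scipy.special import jv, yv, hankel1, hankel2

alpha, x = 1.5, 2.0  # illustrative order and argument
print(hankel1(alpha, x), jv(alpha, x) + 1j * yv(alpha, x))  # identical complex values
print(hankel2(alpha, x), jv(alpha, x) - 1j * yv(alpha, x))  # identical complex values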
These forms of linear combination satisfy numerous simple-looking properties, like asymptotic formulae or integral representations. Here, "simple" means an appearance of a factor of the form e^{i f(x)}. For real {\displaystyle x>0}, where {\displaystyle J_{\alpha }(x)} and {\displaystyle Y_{\alpha }(x)} are real-valued, the Bessel functions of the first and second kind are the real and imaginary parts, respectively, of the first Hankel function and the real and negative imaginary parts of the second Hankel function. Thus, the above formulae are analogs of Euler's formula, substituting H(1)α(x), H(2)α(x) for {\displaystyle e^{\pm ix}} and {\displaystyle J_{\alpha }(x)}, {\displaystyle Y_{\alpha }(x)} for {\displaystyle \cos(x)}, {\displaystyle \sin(x)}, as explicitly shown in the asymptotic expansion.
The Hankel functions are used to express outward- and inward-propagating cylindrical-wave solutions of the cylindrical wave equation, respectively (or vice versa, depending on the sign convention for the frequency).
Using the previous relationships, they can be expressed as
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {J_{-\alpha }(x)-e^{-\alpha \pi i}J_{\alpha }(x)}{i\sin \alpha \pi }},\\[5pt]H_{\alpha }^{(2)}(x)&={\frac {J_{-\alpha }(x)-e^{\alpha \pi i}J_{\alpha }(x)}{-i\sin \alpha \pi }}.\end{aligned}}}
If α is an integer, the limit has to be calculated. The following relationships are valid, whether α is an integer or not:
{\displaystyle {\begin{aligned}H_{-\alpha }^{(1)}(x)&=e^{\alpha \pi i}H_{\alpha }^{(1)}(x),\\[6mu]H_{-\alpha }^{(2)}(x)&=e^{-\alpha \pi i}H_{\alpha }^{(2)}(x).\end{aligned}}}
In particular, if α = m + 1/2 with m a nonnegative integer, the above relations imply directly that
{\displaystyle {\begin{aligned}J_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m+1}Y_{m+{\frac {1}{2}}}(x),\\[5pt]Y_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m}J_{m+{\frac {1}{2}}}(x).\end{aligned}}}
These are useful in developing the spherical Bessel functions (see below).
The Hankel functions admit the following integral representations for Re(x) > 0:
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {1}{\pi i}}\int _{-\infty }^{+\infty +\pi i}e^{x\sinh t-\alpha t}\,dt,\\[5pt]H_{\alpha }^{(2)}(x)&=-{\frac {1}{\pi i}}\int _{-\infty }^{+\infty -\pi i}e^{x\sinh t-\alpha t}\,dt,\end{aligned}}}
where the integration limits indicate integration along a contour that can be chosen as follows: from −∞ to 0 along the negative real axis, from 0 to ±πi along the imaginary axis, and from ±πi to +∞ ± πi along a contour parallel to the real axis.
=== Modified Bessel functions: Iα, Kα ===
The Bessel functions are valid even for complex arguments x, and an important special case is that of a purely imaginary argument. In this case, the solutions to the Bessel equation are called the modified Bessel functions (or occasionally the hyperbolic Bessel functions) of the first and second kind and are defined as
{\displaystyle {\begin{aligned}I_{\alpha }(x)&=i^{-\alpha }J_{\alpha }(ix)=\sum _{m=0}^{\infty }{\frac {1}{m!\,\Gamma (m+\alpha +1)}}\left({\frac {x}{2}}\right)^{2m+\alpha },\\[5pt]K_{\alpha }(x)&={\frac {\pi }{2}}{\frac {I_{-\alpha }(x)-I_{\alpha }(x)}{\sin \alpha \pi }},\end{aligned}}}
when α is not an integer. When α is an integer, then the limit is used. These are chosen to be real-valued for real and positive arguments x. The series expansion for Iα(x) is thus similar to that for Jα(x), but without the alternating (−1)m factor.
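The qualitative behaviour described below (Iα growing and Kα decaying exponentially) is easy to see numerically; the following is an assumed sketch using SciPy's iv and kv, not material from the article.

from scipy.special import iv, kv

for x in (1.0, 5.0, 20.0):
    # iv(0, x) grows roughly like e^x / sqrt(2 pi x), while kv(0, x) decays like e^-x
    print(x, iv(0, x), kv(0, x))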
{\displaystyle K_{\alpha }} can be expressed in terms of Hankel functions:
{\displaystyle K_{\alpha }(x)={\begin{cases}{\frac {\pi }{2}}i^{\alpha +1}H_{\alpha }^{(1)}(ix)&-\pi <\arg x\leq {\frac {\pi }{2}}\\{\frac {\pi }{2}}(-i)^{\alpha +1}H_{\alpha }^{(2)}(-ix)&-{\frac {\pi }{2}}<\arg x\leq \pi \end{cases}}}
Using these two formulae, the result for {\displaystyle J_{\alpha }^{2}(z)+Y_{\alpha }^{2}(z)}, commonly known as Nicholson's integral or Nicholson's formula, can be obtained to give the following:
{\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8}{\pi ^{2}}}\int _{0}^{\infty }\cosh(2\alpha t)K_{0}(2x\sinh t)\,dt,}
given that the condition Re(x) > 0 is met. It can also be shown that
{\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8\cos(\alpha \pi )}{\pi ^{2}}}\int _{0}^{\infty }K_{2\alpha }(2x\sinh t)\,dt,}
only when |Re(α)| < 1/2 and Re(x) ≥ 0 but not when x = 0.
We can express the first and second Bessel functions in terms of the modified Bessel functions (these are valid if −π < arg z ≤ π/2):
{\displaystyle {\begin{aligned}J_{\alpha }(iz)&=e^{\frac {\alpha \pi i}{2}}I_{\alpha }(z),\\[1ex]Y_{\alpha }(iz)&=e^{\frac {(\alpha +1)\pi i}{2}}I_{\alpha }(z)-{\tfrac {2}{\pi }}e^{-{\frac {\alpha \pi i}{2}}}K_{\alpha }(z).\end{aligned}}}
Iα(x) and Kα(x) are the two linearly independent solutions to the modified Bessel's equation:
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}-\left(x^{2}+\alpha ^{2}\right)y=0.}
Unlike the ordinary Bessel functions, which are oscillating as functions of a real argument, Iα and Kα are exponentially growing and decaying functions respectively. Like the ordinary Bessel function Jα, the function Iα goes to zero at x = 0 for α > 0 and is finite at x = 0 for α = 0. Analogously, Kα diverges at x = 0 with the singularity being of logarithmic type for K0, and 1/2Γ(|α|)(2/x)|α| otherwise.
Two integral formulas for the modified Bessel functions are (for Re(x) > 0):
{\displaystyle {\begin{aligned}I_{\alpha }(x)&={\frac {1}{\pi }}\int _{0}^{\pi }e^{x\cos \theta }\cos \alpha \theta \,d\theta -{\frac {\sin \alpha \pi }{\pi }}\int _{0}^{\infty }e^{-x\cosh t-\alpha t}\,dt,\\[5pt]K_{\alpha }(x)&=\int _{0}^{\infty }e^{-x\cosh t}\cosh \alpha t\,dt.\end{aligned}}}
Bessel functions can be described as Fourier transforms of powers of quadratic functions. For example (for Re(ω) > 0):
{\displaystyle 2\,K_{0}(\omega )=\int _{-\infty }^{\infty }{\frac {e^{i\omega t}}{\sqrt {t^{2}+1}}}\,dt.}
It can be proven by showing equality to the above integral definition for K0. This is done by integrating a closed curve in the first quadrant of the complex plane.
Modified Bessel functions of the second kind may be represented with Bassett's integral
{\displaystyle K_{n}(xz)={\frac {\Gamma \left(n+{\frac {1}{2}}\right)(2z)^{n}}{{\sqrt {\pi }}x^{n}}}\int _{0}^{\infty }{\frac {\cos(xt)\,dt}{(t^{2}+z^{2})^{n+{\frac {1}{2}}}}}.}
Modified Bessel functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals
{\displaystyle {\begin{aligned}K_{\frac {1}{3}}(\xi )&={\sqrt {3}}\int _{0}^{\infty }\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx,\\[5pt]K_{\frac {2}{3}}(\xi )&={\frac {1}{\sqrt {3}}}\int _{0}^{\infty }{\frac {3+2x^{2}}{\sqrt {1+{\frac {x^{2}}{3}}}}}\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx.\end{aligned}}}
The modified Bessel function {\displaystyle K_{\frac {1}{2}}(\xi )=(2\xi /\pi )^{-1/2}\exp(-\xi )} is useful to represent the Laplace distribution as an exponential-scale mixture of normal distributions.
The modified Bessel function of the second kind has also been called by the following names (now rare):
Basset function after Alfred Barnard Basset
Modified Bessel function of the third kind
Modified Hankel function
Macdonald function after Hector Munro Macdonald
=== Spherical Bessel functions: jn, yn ===
When solving the Helmholtz equation in spherical coordinates by separation of variables, the radial equation has the form
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+2x{\frac {dy}{dx}}+\left(x^{2}-n(n+1)\right)y=0.}
The two linearly independent solutions to this equation are called the spherical Bessel functions jn and yn, and are related to the ordinary Bessel functions Jn and Yn by
{\displaystyle {\begin{aligned}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x),\\y_{n}(x)&={\sqrt {\frac {\pi }{2x}}}Y_{n+{\frac {1}{2}}}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-n-{\frac {1}{2}}}(x).\end{aligned}}}
yn is also denoted nn or ηn; some authors call these functions the spherical Neumann functions.
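A quick numerical check of the half-integer relations above, using SciPy's spherical Bessel routines; this is an assumed illustration, not part of the article.

import numpy as np
from scipy.special import jv, yv, spherical_jn, spherical_yn

n, x = 2, 4.3  # illustrative order and argument
print(spherical_jn(n, x), np.sqrt(np.pi / (2 * x)) * jv(n + 0.5, x))  # equal
print(spherical_yn(n, x), np.sqrt(np.pi / (2 * x)) * yv(n + 0.5, x))  # equal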
From the relations to the ordinary Bessel functions it is directly seen that:
{\displaystyle {\begin{aligned}j_{n}(x)&=(-1)^{n}y_{-n-1}(x)\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)\end{aligned}}}
The spherical Bessel functions can also be written as (Rayleigh's formulas)
{\displaystyle {\begin{aligned}j_{n}(x)&=(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\sin x}{x}},\\y_{n}(x)&=-(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\cos x}{x}}.\end{aligned}}}
The zeroth spherical Bessel function j0(x) is also known as the (unnormalized) sinc function. The first few spherical Bessel functions are:
{\displaystyle {\begin{aligned}j_{0}(x)&={\frac {\sin x}{x}},\\j_{1}(x)&={\frac {\sin x}{x^{2}}}-{\frac {\cos x}{x}},\\j_{2}(x)&=\left({\frac {3}{x^{2}}}-1\right){\frac {\sin x}{x}}-{\frac {3\cos x}{x^{2}}},\\j_{3}(x)&=\left({\frac {15}{x^{3}}}-{\frac {6}{x}}\right){\frac {\sin x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\cos x}{x}}\end{aligned}}}
and
{\displaystyle {\begin{aligned}y_{0}(x)&=-j_{-1}(x)=-{\frac {\cos x}{x}},\\y_{1}(x)&=j_{-2}(x)=-{\frac {\cos x}{x^{2}}}-{\frac {\sin x}{x}},\\y_{2}(x)&=-j_{-3}(x)=\left(-{\frac {3}{x^{2}}}+1\right){\frac {\cos x}{x}}-{\frac {3\sin x}{x^{2}}},\\y_{3}(x)&=j_{-4}(x)=\left(-{\frac {15}{x^{3}}}+{\frac {6}{x}}\right){\frac {\cos x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\sin x}{x}}.\end{aligned}}}
The first few non-zero roots of the first few spherical Bessel functions are:
==== Generating function ====
The spherical Bessel functions have the generating functions
{\displaystyle {\begin{aligned}{\frac {1}{z}}\cos \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}j_{n-1}(z),\\{\frac {1}{z}}\sin \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}y_{n-1}(z).\end{aligned}}}
==== Finite series expansions ====
In contrast to the whole integer Bessel functions Jn(x), Yn(x), the spherical Bessel functions jn(x), yn(x) have a finite series expression:
{\displaystyle {\begin{alignedat}{2}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x)=\\&={\frac {1}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]\\&={\frac {1}{x}}\left[\sin \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}+\cos \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)=(-1)^{n+1}{\frac {\pi }{2x}}J_{-\left(n+{\frac {1}{2}}\right)}(x)=\\&={\frac {(-1)^{n+1}}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]=\\&={\frac {(-1)^{n+1}}{x}}\left[\cos \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}-\sin \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\end{alignedat}}}
==== Differential relations ====
In the following, fn is any of jn, yn, h(1)n, h(2)n for n = 0, ±1, ±2, ...
{\displaystyle {\begin{aligned}\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{n+1}f_{n}(z)\right)&=z^{n-m+1}f_{n-m}(z),\\\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{-n}f_{n}(z)\right)&=(-1)^{m}z^{-n-m}f_{n+m}(z).\end{aligned}}}
=== Spherical Hankel functions: h(1)n, h(2)n ===
There are also spherical analogues of the Hankel functions:
{\displaystyle {\begin{aligned}h_{n}^{(1)}(x)&=j_{n}(x)+iy_{n}(x),\\h_{n}^{(2)}(x)&=j_{n}(x)-iy_{n}(x).\end{aligned}}}
There are simple closed-form expressions for the Bessel functions of half-integer order in terms of the standard trigonometric functions, and therefore for the spherical Bessel functions. In particular, for non-negative integers n:
{\displaystyle h_{n}^{(1)}(x)=(-i)^{n+1}{\frac {e^{ix}}{x}}\sum _{m=0}^{n}{\frac {i^{m}}{m!\,(2x)^{m}}}{\frac {(n+m)!}{(n-m)!}},}
and h(2)n is the complex-conjugate of this (for real x). It follows, for example, that j0(x) = sin x/x and y0(x) = −cos x/x, and so on.
The spherical Hankel functions appear in problems involving spherical wave propagation, for example in the multipole expansion of the electromagnetic field.
=== Riccati–Bessel functions: Sn, Cn, ξn, ζn ===
Riccati–Bessel functions only slightly differ from spherical Bessel functions:
{\displaystyle {\begin{aligned}S_{n}(x)&=xj_{n}(x)={\sqrt {\frac {\pi x}{2}}}J_{n+{\frac {1}{2}}}(x)\\C_{n}(x)&=-xy_{n}(x)=-{\sqrt {\frac {\pi x}{2}}}Y_{n+{\frac {1}{2}}}(x)\\\xi _{n}(x)&=xh_{n}^{(1)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(1)}(x)=S_{n}(x)-iC_{n}(x)\\\zeta _{n}(x)&=xh_{n}^{(2)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(2)}(x)=S_{n}(x)+iC_{n}(x)\end{aligned}}}
They satisfy the differential equation
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+\left(x^{2}-n(n+1)\right)y=0.}
For example, this kind of differential equation appears in quantum mechanics while solving the radial component of the Schrödinger equation with a hypothetical cylindrical infinite potential barrier. This differential equation, and the Riccati–Bessel solutions, also arise in the problem of scattering of electromagnetic waves by a sphere, known as Mie scattering after the first published solution by Mie (1908). See e.g., Du (2004) for recent developments and references.
Following Debye (1909), the notation ψn, χn is sometimes used instead of Sn, Cn.
== Asymptotic forms ==
The Bessel functions have the following asymptotic forms. For small arguments {\displaystyle 0<z\ll {\sqrt {\alpha +1}}}, one obtains, when {\displaystyle \alpha } is not a negative integer:
{\displaystyle J_{\alpha }(z)\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha }.}
When α is a negative integer, we have
{\displaystyle J_{\alpha }(z)\sim {\frac {(-1)^{\alpha }}{(-\alpha )!}}\left({\frac {2}{z}}\right)^{\alpha }.}
For the Bessel function of the second kind we have three cases:
{\displaystyle Y_{\alpha }(z)\sim {\begin{cases}{\dfrac {2}{\pi }}\left(\ln \left({\dfrac {z}{2}}\right)+\gamma \right)&{\text{if }}\alpha =0\\[1ex]-{\dfrac {\Gamma (\alpha )}{\pi }}\left({\dfrac {2}{z}}\right)^{\alpha }+{\dfrac {1}{\Gamma (\alpha +1)}}\left({\dfrac {z}{2}}\right)^{\alpha }\cot(\alpha \pi )&{\text{if }}\alpha {\text{ is a positive integer (one term dominates unless }}\alpha {\text{ is imaginary)}},\\[1ex]-{\dfrac {(-1)^{\alpha }\Gamma (-\alpha )}{\pi }}\left({\dfrac {z}{2}}\right)^{\alpha }&{\text{if }}\alpha {\text{ is a negative integer,}}\end{cases}}}
where γ is the Euler–Mascheroni constant (0.5772...).
For large real arguments z ≫ |α2 − 1/4|, one cannot write a true asymptotic form for Bessel functions of the first and second kind (unless α is half-integer) because they have zeros all the way out to infinity, which would have to be matched exactly by any asymptotic expansion. However, for a given value of arg z one can write an equation containing a term of order |z|−1:
{\displaystyle {\begin{aligned}J_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\cos \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi ,\\Y_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\sin \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi .\end{aligned}}}
(For α = 1/2, the last terms in these formulas drop out completely; see the spherical Bessel functions above.)
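The accuracy of the leading large-argument behaviour can be seen numerically; the following sketch (an assumed illustration, not part of the article) compares the cosine approximation of Jα with SciPy's exact values.

import numpy as np
from scipy.special import jv

alpha = 1.0
for z in (10.0, 50.0, 200.0):
    approx = np.sqrt(2.0 / (np.pi * z)) * np.cos(z - alpha * np.pi / 2 - np.pi / 4)
    print(z, jv(alpha, z), approx)  # the discrepancy shrinks roughly like 1/z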
The asymptotic forms for the Hankel functions are:
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<2\pi ,\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-2\pi <\arg z<\pi .\end{aligned}}}
These can be extended to other values of arg z using equations relating H(1)α(zeimπ) and H(2)α(zeimπ) to H(1)α(z) and H(2)α(z).
It is interesting that although the Bessel function of the first kind is the average of the two Hankel functions, Jα(z) is not asymptotic to the average of these two asymptotic forms when z is negative (because one or the other will not be correct there, depending on the arg z used). But the asymptotic forms for the Hankel functions permit us to write asymptotic forms for the Bessel functions of first and second kinds for complex (non-real) z so long as |z| goes to infinity at a constant phase angle arg z (using the square root having positive real part):
{\displaystyle {\begin{aligned}J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi ,\\[1ex]Y_{\alpha }(z)&\sim -i{\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]Y_{\alpha }(z)&\sim i{\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi .\end{aligned}}}
For the modified Bessel functions, Hankel developed asymptotic expansions as well:
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {e^{z}}{\sqrt {2\pi z}}}\left(1-{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}-{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {\pi }{2}},\\K_{\alpha }(z)&\sim {\sqrt {\frac {\pi }{2z}}}e^{-z}\left(1+{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {3\pi }{2}}.\end{aligned}}}
There is also the asymptotic form (for large real {\displaystyle z})
{\displaystyle {\begin{aligned}I_{\alpha }(z)={\frac {1}{{\sqrt {2\pi z}}{\sqrt[{4}]{1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\exp \left(-\alpha \operatorname {arcsinh} \left({\frac {\alpha }{z}}\right)+z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}\right)\left(1+{\mathcal {O}}\left({\frac {1}{z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\right)\right).\end{aligned}}}
When α = 1/2, all the terms except the first vanish, and we have
{\displaystyle {\begin{aligned}I_{{1}/{2}}(z)&={\sqrt {\frac {2}{\pi }}}{\frac {\sinh(z)}{\sqrt {z}}}\sim {\frac {e^{z}}{\sqrt {2\pi z}}}&&{\text{for }}\left|\arg z\right|<{\tfrac {\pi }{2}},\\[1ex]K_{{1}/{2}}(z)&={\sqrt {\frac {\pi }{2}}}{\frac {e^{-z}}{\sqrt {z}}}.\end{aligned}}}
For small arguments {\displaystyle 0<|z|\ll {\sqrt {\alpha +1}}}, we have
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha },\\[1ex]K_{\alpha }(z)&\sim {\begin{cases}-\ln \left({\dfrac {z}{2}}\right)-\gamma &{\text{if }}\alpha =0\\[1ex]{\frac {\Gamma (\alpha )}{2}}\left({\dfrac {2}{z}}\right)^{\alpha }&{\text{if }}\alpha >0\end{cases}}\end{aligned}}}
== Properties ==
For integer order α = n, Jn is often defined via a Laurent series for a generating function:
{\displaystyle e^{{\frac {x}{2}}\left(t-{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }J_{n}(x)t^{n}}
an approach used by P. A. Hansen in 1843. (This can be generalized to non-integer order by contour integration or other methods.)
Infinite series of Bessel functions of the form {\textstyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)}, where {\displaystyle \nu ,p\in \mathbb {Z} ,\ N\in \mathbb {Z} ^{+}}, arise in many physical systems and are defined in closed form by the Sung series. For example, when N = 3: {\textstyle \sum _{\nu =-\infty }^{\infty }J_{3\nu +p}(x)={\frac {1}{3}}\left[1+2\cos {(x{\sqrt {3}}/2-2\pi p/3)}\right]}. More generally, the Sung series and the alternating Sung series are written as:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {2\pi q/N}}e^{-i2\pi pq/N}}
{\displaystyle \sum _{\nu =-\infty }^{\infty }(-1)^{\nu }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {(2q+1)\pi /N}}e^{-i(2q+1)\pi p/N}}
A series expansion using Bessel functions (Kapteyn series) is
{\displaystyle {\frac {1}{1-z}}=1+2\sum _{n=1}^{\infty }J_{n}(nz).}
Another important relation for integer orders is the Jacobi–Anger expansion:
{\displaystyle e^{iz\cos \phi }=\sum _{n=-\infty }^{\infty }i^{n}J_{n}(z)e^{in\phi }}
and
{\displaystyle e^{\pm iz\sin \phi }=J_{0}(z)+2\sum _{n=1}^{\infty }J_{2n}(z)\cos(2n\phi )\pm 2i\sum _{n=0}^{\infty }J_{2n+1}(z)\sin((2n+1)\phi )}
which is used to expand a plane wave as a sum of cylindrical waves, or to find the Fourier series of a tone-modulated FM signal.
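The Jacobi–Anger expansion converges rapidly, so a truncated sum already reproduces the plane-wave factor; the snippet below is an assumed illustration (the cutoff N is arbitrary), not material from the article.

import numpy as np
from scipy.special import jv

z, phi, N = 2.3, 0.7, 40  # illustrative argument, angle and truncation order
lhs = np.exp(1j * z * np.cos(phi))
rhs = sum((1j) ** n * jv(n, z) * np.exp(1j * n * phi) for n in range(-N, N + 1))
print(lhs, rhs)  # the two complex numbers agree once N is large enough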
More generally, a series
{\displaystyle f(z)=a_{0}^{\nu }J_{\nu }(z)+2\cdot \sum _{k=1}^{\infty }a_{k}^{\nu }J_{\nu +k}(z)}
is called Neumann expansion of f. The coefficients for ν = 0 have the explicit form
{\displaystyle a_{k}^{0}={\frac {1}{2\pi i}}\int _{|z|=c}f(z)O_{k}(z)\,dz}
where Ok is Neumann's polynomial.
Selected functions admit the special representation
{\displaystyle f(z)=\sum _{k=0}^{\infty }a_{k}^{\nu }J_{\nu +2k}(z)}
with
{\displaystyle a_{k}^{\nu }=2(\nu +2k)\int _{0}^{\infty }f(z){\frac {J_{\nu +2k}(z)}{z}}\,dz}
due to the orthogonality relation
{\displaystyle \int _{0}^{\infty }J_{\alpha }(z)J_{\beta }(z){\frac {dz}{z}}={\frac {2}{\pi }}{\frac {\sin \left({\frac {\pi }{2}}(\alpha -\beta )\right)}{\alpha ^{2}-\beta ^{2}}}}
More generally, if f has a branch-point near the origin of such a nature that
{\displaystyle f(z)=\sum _{k=0}a_{k}J_{\nu +k}(z)}
then
{\displaystyle {\mathcal {L}}\left\{\sum _{k=0}a_{k}J_{\nu +k}\right\}(s)={\frac {1}{\sqrt {1+s^{2}}}}\sum _{k=0}{\frac {a_{k}}{\left(s+{\sqrt {1+s^{2}}}\right)^{\nu +k}}}}
or
{\displaystyle \sum _{k=0}a_{k}\xi ^{\nu +k}={\frac {1+\xi ^{2}}{2\xi }}{\mathcal {L}}\{f\}\left({\frac {1-\xi ^{2}}{2\xi }}\right)}
where {\displaystyle {\mathcal {L}}\{f\}} is the Laplace transform of f.
Another way to define the Bessel functions is the Poisson representation formula and the Mehler-Sonine formula:
{\displaystyle {\begin{aligned}J_{\nu }(z)&={\frac {\left({\frac {z}{2}}\right)^{\nu }}{\Gamma \left(\nu +{\frac {1}{2}}\right){\sqrt {\pi }}}}\int _{-1}^{1}e^{izs}\left(1-s^{2}\right)^{\nu -{\frac {1}{2}}}\,ds\\[5px]&={\frac {2}{{\left({\frac {z}{2}}\right)}^{\nu }\cdot {\sqrt {\pi }}\cdot \Gamma \left({\frac {1}{2}}-\nu \right)}}\int _{1}^{\infty }{\frac {\sin zu}{\left(u^{2}-1\right)^{\nu +{\frac {1}{2}}}}}\,du\end{aligned}}}
where ν > −1/2 and z ∈ C.
This formula is useful especially when working with Fourier transforms.
Because Bessel's equation becomes Hermitian (self-adjoint) if it is divided by x, the solutions must satisfy an orthogonality relationship for appropriate boundary conditions. In particular, it follows that:
{\displaystyle \int _{0}^{1}xJ_{\alpha }\left(xu_{\alpha ,m}\right)J_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[J_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}={\frac {\delta _{m,n}}{2}}\left[J_{\alpha }'\left(u_{\alpha ,m}\right)\right]^{2}}
where α > −1, δm,n is the Kronecker delta, and uα,m is the mth zero of Jα(x). This orthogonality relation can then be used to extract the coefficients in the Fourier–Bessel series, where a function is expanded in the basis of the functions Jα(x uα,m) for fixed α and varying m.
An analogous relationship for the spherical Bessel functions follows immediately:
{\displaystyle \int _{0}^{1}x^{2}j_{\alpha }\left(xu_{\alpha ,m}\right)j_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[j_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}}
If one defines a boxcar function of x that depends on a small parameter ε as:
{\displaystyle f_{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x-1}{\varepsilon }}\right)}
(where rect is the rectangle function) then the Hankel transform of it (of any given order α > −1/2), gε(k), approaches Jα(k) as ε approaches zero, for any given k. Conversely, the Hankel transform (of the same order) of gε(k) is fε(x):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)g_{\varepsilon }(k)\,dk=f_{\varepsilon }(x)}
which is zero everywhere except near 1. As ε approaches zero, the right-hand side approaches δ(x − 1), where δ is the Dirac delta function. This admits the limit (in the distributional sense):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)J_{\alpha }(k)\,dk=\delta (x-1)}
A change of variables then yields the closure equation:
{\displaystyle \int _{0}^{\infty }xJ_{\alpha }(ux)J_{\alpha }(vx)\,dx={\frac {1}{u}}\delta (u-v)}
for α > −1/2. The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is:
{\displaystyle \int _{0}^{\infty }x^{2}j_{\alpha }(ux)j_{\alpha }(vx)\,dx={\frac {\pi }{2uv}}\delta (u-v)}
for α > −1.
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions:
{\displaystyle A_{\alpha }(x){\frac {dB_{\alpha }}{dx}}-{\frac {dA_{\alpha }}{dx}}B_{\alpha }(x)={\frac {C_{\alpha }}{x}}}
where Aα and Bα are any two solutions of Bessel's equation, and Cα is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular,
{\displaystyle J_{\alpha }(x){\frac {dY_{\alpha }}{dx}}-{\frac {dJ_{\alpha }}{dx}}Y_{\alpha }(x)={\frac {2}{\pi x}}}
and
{\displaystyle I_{\alpha }(x){\frac {dK_{\alpha }}{dx}}-{\frac {dI_{\alpha }}{dx}}K_{\alpha }(x)=-{\frac {1}{x}},}
for α > −1.
For α > −1, the even entire function of genus 1, x−αJα(x), has only real zeros. Let
{\displaystyle 0<j_{\alpha ,1}<j_{\alpha ,2}<\cdots <j_{\alpha ,n}<\cdots }
be all its positive zeros, then
{\displaystyle J_{\alpha }(z)={\frac {\left({\frac {z}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{j_{\alpha ,n}^{2}}}\right)}
(There are a large number of other known integrals and identities that are not reproduced here, but which can be found in the references.)
=== Recurrence relations ===
The functions Jα, Yα, H(1)α, and H(2)α all satisfy the recurrence relations
{\displaystyle {\frac {2\alpha }{x}}Z_{\alpha }(x)=Z_{\alpha -1}(x)+Z_{\alpha +1}(x)}
and
{\displaystyle 2{\frac {dZ_{\alpha }(x)}{dx}}=Z_{\alpha -1}(x)-Z_{\alpha +1}(x),}
where Z denotes J, Y, H(1), or H(2). These two identities are often combined, e.g. added or subtracted, to yield various other relations. In this way, for example, one can compute Bessel functions of higher orders (or higher derivatives) given the values at lower orders (or lower derivatives). In particular, it follows that
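The first recurrence is simple to verify numerically for Z = J; the following is an assumed illustration (not part of the article) using SciPy.

from scipy.special import jv

alpha, x = 2.5, 3.1  # illustrative order and argument
print(2 * alpha / x * jv(alpha, x), jv(alpha - 1, x) + jv(alpha + 1, x))  # the two values are equal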
{\displaystyle {\begin{aligned}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[x^{\alpha }Z_{\alpha }(x)\right]&=x^{\alpha -m}Z_{\alpha -m}(x),\\\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[{\frac {Z_{\alpha }(x)}{x^{\alpha }}}\right]&=(-1)^{m}{\frac {Z_{\alpha +m}(x)}{x^{\alpha +m}}}.\end{aligned}}}
Using the previous relations, one can arrive at similar relations for the spherical Bessel functions:
{\displaystyle {\frac {2\alpha +1}{x}}j_{\alpha }(x)=j_{\alpha -1}+j_{\alpha +1}}
and
{\displaystyle {\frac {dj_{\alpha }(x)}{dx}}=j_{\alpha -1}-{\frac {\alpha +1}{x}}j_{\alpha }}
Modified Bessel functions follow similar relations:
{\displaystyle e^{\left({\frac {x}{2}}\right)\left(t+{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }I_{n}(x)t^{n}}
and
{\displaystyle e^{z\cos \theta }=I_{0}(z)+2\sum _{n=1}^{\infty }I_{n}(z)\cos n\theta }
and
{\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }e^{z\cos(m\theta )+y\cos \theta }d\theta =I_{0}(z)I_{0}(y)+2\sum _{n=1}^{\infty }I_{n}(z)I_{mn}(y).}
The recurrence relation reads
{\displaystyle {\begin{aligned}C_{\alpha -1}(x)-C_{\alpha +1}(x)&={\frac {2\alpha }{x}}C_{\alpha }(x),\\[1ex]C_{\alpha -1}(x)+C_{\alpha +1}(x)&=2{\frac {d}{dx}}C_{\alpha }(x),\end{aligned}}}
where Cα denotes Iα or eαiπKα. These recurrence relations are useful for discrete diffusion problems.
=== Transcendence ===
In 1929, Carl Ludwig Siegel proved that Jν(x), J'ν(x), and the logarithmic derivative J'ν(x)/Jν(x) are transcendental numbers when ν is rational and x is algebraic and nonzero. The same proof also implies that
{\displaystyle \Gamma (v+1)(2/x)^{v}J_{v}(x)}
is transcendental under the same assumptions.
=== Sums with Bessel functions ===
The product of two Bessel functions admits the following sum:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{n-\nu }(y)=J_{n}(x+y),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(y)=J_{n}(y-x).}
From these equalities it follows that
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(x)=\delta _{n,0}}
and as a consequence
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }^{2}(x)=1.}
These sums can be extended to include a term multiplier that is a polynomial function of the index. For example,
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,1}+\delta _{n,-1}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }^{2}(x)=0,}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,-1}-\delta _{n,1}\right)+{\frac {x^{2}}{4}}\left(\delta _{n,-2}+2\delta _{n,0}+\delta _{n,2}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }^{2}(x)={\frac {x^{2}}{2}}.}
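Two of these sums can be checked with a truncated index range; the snippet below is an assumed illustration (the cutoff N is an arbitrary choice), not part of the article.

from scipy.special import jv

x, N = 3.7, 60  # illustrative argument and truncation
orders = range(-N, N + 1)
print(sum(jv(nu, x) ** 2 for nu in orders))            # approaches 1
print(sum(nu ** 2 * jv(nu, x) ** 2 for nu in orders))  # approaches x**2 / 2 = 6.845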
== Multiplication theorem ==
The Bessel functions obey a multiplication theorem
{\displaystyle \lambda ^{-\nu }J_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(1-\lambda ^{2}\right)z}{2}}\right)^{n}J_{\nu +n}(z),}
where λ and ν may be taken as arbitrary complex numbers. For |λ2 − 1| < 1, the above expression also holds if J is replaced by Y. The analogous identities for modified Bessel functions and |λ2 − 1| < 1 are
{\displaystyle \lambda ^{-\nu }I_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}I_{\nu +n}(z)}
and
{\displaystyle \lambda ^{-\nu }K_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}K_{\nu +n}(z).}
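The multiplication theorem for J converges quickly when λ is close to 1, which a truncated sum illustrates; the sketch below is an assumed example (λ, ν, z and the truncation N are arbitrary), not article material.

import math
from scipy.special import jv

lam, nu, z, N = 1.2, 0.5, 2.0, 30
rhs = sum(((1 - lam ** 2) * z / 2) ** n / math.factorial(n) * jv(nu + n, z) for n in range(N))
print(lam ** (-nu) * jv(nu, lam * z), rhs)  # the two values agree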
== Zeros of the Bessel function ==
=== Bourget's hypothesis ===
Bessel himself originally proved that for nonnegative integers n, the equation Jn(x) = 0 has an infinite number of solutions in x. When the functions Jn(x) are plotted on the same graph, though, none of the zeros seem to coincide for different values of n except for the zero at x = 0. This phenomenon is known as Bourget's hypothesis after the 19th-century French mathematician who studied Bessel functions. Specifically it states that for any integers n ≥ 0 and m ≥ 1, the functions Jn(x) and Jn + m(x) have no common zeros other than the one at x = 0. The hypothesis was proved by Carl Ludwig Siegel in 1929.
=== Transcendence ===
Siegel proved in 1929 that when ν is rational, all nonzero roots of Jν(x) and J'ν(x) are transcendental, as are all the roots of Kν(x). It is also known that all roots of the higher derivatives
{\displaystyle J_{\nu }^{(n)}(x)} for n ≤ 18 are transcendental, except for the special values {\displaystyle J_{1}^{(3)}(\pm {\sqrt {3}})=0} and {\displaystyle J_{0}^{(4)}(\pm {\sqrt {3}})=0}.
=== Numerical approaches ===
For numerical studies about the zeros of the Bessel function, see Gil, Segura & Temme (2007), Kravanja et al. (1998) and Moler (2004).
=== Numerical values ===
The first zeros in J0 (i.e., j0,1, j0,2 and j0,3) occur at arguments of approximately 2.40483, 5.52008 and 8.65373, respectively.
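These values can be reproduced directly; the snippet below is an assumed illustration using SciPy's zero-finding routine for integer-order Bessel functions.

from scipy.special import jn_zeros

print(jn_zeros(0, 3))  # approximately [2.40483, 5.52008, 8.65373]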
== History ==
=== Waves and elasticity problems ===
The first appearance of a Bessel function occurs in the work of Daniel Bernoulli in 1732, while working on the analysis of a vibrating string, a problem that had been tackled before by his father Johann Bernoulli. Daniel considered a flexible chain suspended from a fixed point above and free at its lower end. The solution of the differential equation led to the introduction of a function that is now identified as {\displaystyle J_{0}(x)}. Bernoulli also developed a method to find the zeros of the function.
In 1736, Leonhard Euler found a link between other functions (now known as Laguerre polynomials) and Bernoulli's solution. Euler also considered a non-uniform chain, which led to the introduction of functions now related to the modified Bessel functions {\displaystyle I_{n}(x)}.
In the middle of the eighteenth century, Jean le Rond d'Alembert found a formula to solve the wave equation. By 1771 there was a dispute between Bernoulli, Euler, d'Alembert and Joseph-Louis Lagrange on the nature of the solutions of vibrating strings.
Euler worked in 1778 on buckling, introducing the concept of Euler's critical load. To solve the problem he introduced the series for {\displaystyle J_{\pm 1/3}(x)}. Euler also worked out the solutions of vibrating 2D membranes in cylindrical coordinates in 1780. In order to solve his differential equation he introduced a power series associated with {\displaystyle J_{n}(x)}, for integer n.
Towards the end of the 18th century, Lagrange, Pierre-Simon Laplace and Marc-Antoine Parseval also found equivalents of the Bessel functions. Parseval, for example, found an integral representation of J0(x) using cosines.
At the beginning of the 1800s, Joseph Fourier used J0(x) to solve the heat equation in a problem with cylindrical symmetry. Fourier won a prize of the French Academy of Sciences for this work in 1811, but most of the details of his work, including the use of a Fourier series, remained unpublished until 1822. Poisson, in rivalry with Fourier, extended Fourier's work in 1823, introducing new properties of Bessel functions including Bessel functions of half-integer order (now known as spherical Bessel functions).
=== Astronomical problems ===
In 1770, Lagrange introduced the series expansion of Bessel functions to solve Kepler's equation, a transcendental equation in astronomy. Friedrich Wilhelm Bessel had seen Lagrange's solution but found it difficult to handle. In 1813, in a letter to Carl Friedrich Gauss, Bessel simplified the calculation using trigonometric functions. Bessel published his work in 1819, independently introducing the method of Fourier series, unaware of the work of Fourier, which was published later.
In 1824, Bessel carried out a systematic investigation of the functions, which earned the functions his name. In older literature the functions were called cylindrical functions or even Bessel–Fourier functions.
== See also ==
== Notes ==
== References ==
== External links ==
In linear algebra, linear transformations can be represented by matrices. If T is a linear transformation mapping Rn to Rm and x is a column vector with n entries, then there exists an m × n matrix A, called the transformation matrix of T, such that:
{\displaystyle T(\mathbf {x} )=A\mathbf {x} }
Note that A has m rows and n columns, whereas the transformation T is from Rn to Rm. There are alternative expressions of transformation matrices involving row vectors that are preferred by some authors.
== Uses ==
Matrices allow arbitrary linear transformations to be displayed in a consistent format, suitable for computation. This also allows transformations to be composed easily (by multiplying their matrices).
Linear transformations are not the only ones that can be represented by matrices. Some transformations that are non-linear on an n-dimensional Euclidean space Rn can be represented as linear transformations on the n+1-dimensional space Rn+1. These include both affine transformations (such as translation) and projective transformations. For this reason, 4×4 transformation matrices are widely used in 3D computer graphics. These n+1-dimensional transformation matrices are called, depending on their application, affine transformation matrices, projective transformation matrices, or more generally non-linear transformation matrices. With respect to an n-dimensional matrix, an n+1-dimensional matrix can be described as an augmented matrix.
In the physical sciences, an active transformation is one which actually changes the physical position of a system and makes sense even in the absence of a coordinate system, whereas a passive transformation is a change in the coordinate description of the physical system (change of basis). The distinction between active and passive transformations is important. By default, by transformation, mathematicians usually mean active transformations, while physicists could mean either.
Put differently, a passive transformation refers to description of the same object as viewed from two different coordinate frames.
== Finding the matrix of a transformation ==
If one has a linear transformation T(x) in functional form, it is easy to determine the transformation matrix A by transforming each of the vectors of the standard basis by T, then inserting the result into the columns of a matrix. In other words,
{\displaystyle A={\begin{bmatrix}T(\mathbf {e} _{1})&T(\mathbf {e} _{2})&\cdots &T(\mathbf {e} _{n})\end{bmatrix}}}
For example, the function T(x) = 5x is a linear transformation. Applying the above process (suppose that n = 2 in this case) reveals that:
{\displaystyle T(\mathbf {x} )=5\mathbf {x} =5I\mathbf {x} ={\begin{bmatrix}5&0\\0&5\end{bmatrix}}\mathbf {x} }
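This column-by-column procedure translates directly into code. A minimal sketch, assuming NumPy is available; transformation_matrix is a hypothetical helper name:

import numpy as np

def transformation_matrix(T, n):
    # Apply T to each standard basis vector and use the results as the columns of A.
    basis = np.eye(n)
    return np.column_stack([T(basis[:, j]) for j in range(n)])

T = lambda x: 5 * x                     # the example T(x) = 5x
A = transformation_matrix(T, 2)
print(A)                                # [[5. 0.] [0. 5.]]
x = np.array([1.0, 2.0])
print(np.allclose(T(x), A @ x))         # True: T(x) = Ax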
The matrix representation of vectors and operators depends on the chosen basis; a similar matrix will result from an alternate basis. Nevertheless, the method to find the components remains the same.
To elaborate, vector v can be represented in basis vectors, {\displaystyle E={\begin{bmatrix}\mathbf {e} _{1}&\mathbf {e} _{2}&\cdots &\mathbf {e} _{n}\end{bmatrix}}}, with coordinates {\displaystyle [\mathbf {v} ]_{E}={\begin{bmatrix}v_{1}&v_{2}&\cdots &v_{n}\end{bmatrix}}^{\mathrm {T} }}:
{\displaystyle \mathbf {v} =v_{1}\mathbf {e} _{1}+v_{2}\mathbf {e} _{2}+\cdots +v_{n}\mathbf {e} _{n}=\sum _{i}v_{i}\mathbf {e} _{i}=E[\mathbf {v} ]_{E}}
Now, express the result of the transformation matrix A upon v, in the given basis:
{\displaystyle {\begin{aligned}A(\mathbf {v} )&=A\left(\sum _{i}v_{i}\mathbf {e} _{i}\right)=\sum _{i}{v_{i}A(\mathbf {e} _{i})}\\&={\begin{bmatrix}A(\mathbf {e} _{1})&A(\mathbf {e} _{2})&\cdots &A(\mathbf {e} _{n})\end{bmatrix}}[\mathbf {v} ]_{E}=A\cdot [\mathbf {v} ]_{E}\\[3pt]&={\begin{bmatrix}\mathbf {e} _{1}&\mathbf {e} _{2}&\cdots &\mathbf {e} _{n}\end{bmatrix}}{\begin{bmatrix}a_{1,1}&a_{1,2}&\cdots &a_{1,n}\\a_{2,1}&a_{2,2}&\cdots &a_{2,n}\\\vdots &\vdots &\ddots &\vdots \\a_{n,1}&a_{n,2}&\cdots &a_{n,n}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}\end{aligned}}}
The elements ai,j of matrix A are determined for a given basis E by applying A to every {\displaystyle \mathbf {e} _{j}={\begin{bmatrix}0&0&\cdots &(v_{j}=1)&\cdots &0\end{bmatrix}}^{\mathrm {T} }}, and observing the response vector
{\displaystyle A\mathbf {e} _{j}=a_{1,j}\mathbf {e} _{1}+a_{2,j}\mathbf {e} _{2}+\cdots +a_{n,j}\mathbf {e} _{n}=\sum _{i}a_{i,j}\mathbf {e} _{i}.}
This equation defines the wanted elements, ai,j, of the j-th column of the matrix A.
=== Eigenbasis and diagonal matrix ===
Yet, there is a special basis for an operator in which the components form a diagonal matrix and, thus, the cost of applying the operator reduces to n multiplications. Being diagonal means that all coefficients ai,j except ai,i are zero, leaving only one term in the sum {\textstyle \sum a_{i,j}\mathbf {e} _{i}} above. The surviving diagonal elements, ai,i, are known as eigenvalues and designated with λi in the defining equation, which reduces to {\displaystyle A\mathbf {e} _{i}=\lambda _{i}\mathbf {e} _{i}}. The resulting equation is known as the eigenvalue equation. The eigenvectors and eigenvalues are derived from it via the characteristic polynomial.
With diagonalization, it is often possible to translate to and from eigenbases.
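As an illustrative sketch, assuming NumPy is available: numpy.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors, and in that basis the operator is diagonal.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, V = np.linalg.eig(A)                    # columns of V are eigenvectors
D = np.diag(eigvals)
print(np.allclose(A @ V, V @ D))                 # True: A e_i = lambda_i e_i, column by column
print(np.allclose(np.linalg.inv(V) @ A @ V, D))  # True: A is diagonal in the eigenbasis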
== Examples in 2 dimensions ==
Most common geometric transformations that keep the origin fixed are linear, including rotation, scaling, shearing, reflection, and orthogonal projection; if an affine transformation is not a pure translation it keeps some point fixed, and that point can be chosen as origin to make the transformation linear. In two dimensions, linear transformations can be represented using a 2×2 transformation matrix.
=== Stretching ===
A stretch in the xy-plane is a linear transformation which enlarges all distances in a particular direction by a constant factor but does not affect distances in the perpendicular direction. We only consider stretches along the x-axis and y-axis. A stretch along the x-axis has the form x' = kx; y' = y for some positive constant k. (Note that if k > 1, then this really is a "stretch"; if k < 1, it is technically a "compression", but we still call it a stretch. Also, if k = 1, then the transformation is an identity, i.e. it has no effect.)
The matrix associated with a stretch by a factor k along the x-axis is given by:
{\displaystyle {\begin{bmatrix}k&0\\0&1\end{bmatrix}}}
Similarly, a stretch by a factor k along the y-axis has the form x' = x; y' = ky, so the matrix associated with this transformation is
{\displaystyle {\begin{bmatrix}1&0\\0&k\end{bmatrix}}}
=== Squeezing ===
If the two stretches above are combined with reciprocal values, then the transformation matrix represents a squeeze mapping:
{\displaystyle {\begin{bmatrix}k&0\\0&1/k\end{bmatrix}}.}
A square with sides parallel to the axes is transformed to a rectangle that has the same area as the square. The reciprocal stretch and compression leave the area invariant.
=== Rotation ===
For rotation by an angle θ counterclockwise (positive direction) about the origin, the functional form is x′ = x cos θ − y sin θ and y′ = x sin θ + y cos θ. Written in matrix form, this becomes:
{\displaystyle {\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}}
Similarly, for a rotation clockwise (negative direction) about the origin, the functional form is x′ = x cos θ + y sin θ and y′ = −x sin θ + y cos θ, and the matrix form is:
{\displaystyle {\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}}
These formulae assume that the x axis points right and the y axis points up.
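A minimal sketch of the counterclockwise case, assuming NumPy is available and the axis convention just stated:

import numpy as np

def rotation(theta):
    # 2x2 counterclockwise rotation matrix
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation(np.pi / 2)                        # 90 degrees counterclockwise
print(np.round(R @ np.array([1.0, 0.0]), 6))   # [0. 1.]: the x axis maps onto the y axis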
=== Shearing ===
For shear mapping (visually similar to slanting), there are two possibilities.
A shear parallel to the x axis has x′ = x + ky and y′ = y. Written in matrix form, this becomes:
{\displaystyle {\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}1&k\\0&1\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}}
A shear parallel to the y axis has x′ = x and y′ = y + kx, which has matrix form:
{\displaystyle {\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}1&0\\k&1\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}}
=== Reflection ===
For reflection about a line that goes through the origin, let l = (lx, ly) be a vector in the direction of the line. Then the transformation matrix is:
{\displaystyle \mathbf {A} ={\frac {1}{\lVert \mathbf {l} \rVert ^{2}}}{\begin{bmatrix}l_{x}^{2}-l_{y}^{2}&2l_{x}l_{y}\\2l_{x}l_{y}&l_{y}^{2}-l_{x}^{2}\end{bmatrix}}}
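A minimal sketch of this formula, assuming NumPy is available; reflecting (2, 0) about the line y = x should give (0, 2):

import numpy as np

def reflection(lx, ly):
    # Reflection about the line through the origin with direction (lx, ly).
    n2 = lx**2 + ly**2
    return np.array([[lx**2 - ly**2, 2 * lx * ly],
                     [2 * lx * ly,   ly**2 - lx**2]]) / n2

A = reflection(1.0, 1.0)                # the line y = x
print(A @ np.array([2.0, 0.0]))         # [0. 2.]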
=== Orthogonal projection ===
To project a vector orthogonally onto a line that goes through the origin, let u = (ux, uy) be a vector in the direction of the line. Then the transformation matrix is:
{\displaystyle \mathbf {A} ={\frac {1}{\lVert \mathbf {u} \rVert ^{2}}}{\begin{bmatrix}u_{x}^{2}&u_{x}u_{y}\\u_{x}u_{y}&u_{y}^{2}\end{bmatrix}}}
As with reflections, the orthogonal projection onto a line that does not pass through the origin is an affine, not linear, transformation.
Parallel projections are also linear transformations and can be represented simply by a matrix. However, perspective projections are not, and to represent these with a matrix, homogeneous coordinates can be used.
== Examples in 3D computer graphics ==
=== Rotation ===
The matrix to rotate an angle θ about any axis defined by unit vector (x,y,z) is
{\displaystyle {\begin{bmatrix}xx(1-\cos \theta )+\cos \theta &yx(1-\cos \theta )-z\sin \theta &zx(1-\cos \theta )+y\sin \theta \\xy(1-\cos \theta )+z\sin \theta &yy(1-\cos \theta )+\cos \theta &zy(1-\cos \theta )-x\sin \theta \\xz(1-\cos \theta )-y\sin \theta &yz(1-\cos \theta )+x\sin \theta &zz(1-\cos \theta )+\cos \theta \end{bmatrix}}.}
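A minimal sketch of this axis-angle matrix, assuming NumPy is available; the axis is normalised first, since the formula assumes a unit vector:

import numpy as np

def axis_angle_rotation(axis, theta):
    x, y, z = axis / np.linalg.norm(axis)             # the formula assumes a unit axis
    c, s, C = np.cos(theta), np.sin(theta), 1.0 - np.cos(theta)
    return np.array([[x*x*C + c,   y*x*C - z*s, z*x*C + y*s],
                     [x*y*C + z*s, y*y*C + c,   z*y*C - x*s],
                     [x*z*C - y*s, y*z*C + x*s, z*z*C + c]])

R = axis_angle_rotation(np.array([0.0, 0.0, 1.0]), np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))      # [0. 1. 0.]: a 90-degree turn about z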
=== Reflection ===
To reflect a point through a plane ax + by + cz = 0 (which goes through the origin), one can use {\displaystyle \mathbf {A} =\mathbf {I} -2\mathbf {NN} ^{\mathrm {T} }}, where I is the 3×3 identity matrix and N is the three-dimensional unit vector for the vector normal of the plane. If the L2 norm of a, b, and c is unity, the transformation matrix can be expressed as:
{\displaystyle \mathbf {A} ={\begin{bmatrix}1-2a^{2}&-2ab&-2ac\\-2ab&1-2b^{2}&-2bc\\-2ac&-2bc&1-2c^{2}\end{bmatrix}}}
Note that these are particular cases of a Householder reflection in two and three dimensions. A reflection about a line or plane that does not go through the origin is not a linear transformation; it is an affine transformation. As a 4×4 affine transformation matrix, it can be expressed as follows (assuming the normal is a unit vector):
{\displaystyle {\begin{bmatrix}x'\\y'\\z'\\1\end{bmatrix}}={\begin{bmatrix}1-2a^{2}&-2ab&-2ac&-2ad\\-2ab&1-2b^{2}&-2bc&-2bd\\-2ac&-2bc&1-2c^{2}&-2cd\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}x\\y\\z\\1\end{bmatrix}}}
where {\displaystyle d=-\mathbf {p} \cdot \mathbf {N} } for some point p on the plane, or equivalently, ax + by + cz + d = 0.
If the 4th component of the vector is 0 instead of 1, then only the vector's direction is reflected and its magnitude remains unchanged, as if it were mirrored through a parallel plane that passes through the origin. This is a useful property as it allows the transformation of both positional vectors and normal vectors with the same matrix. See homogeneous coordinates and affine transformations below for further explanation.
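A minimal sketch of this affine reflection, assuming NumPy is available; the plane z = 1 (a = b = 0, c = 1, d = −1) is an arbitrary illustrative choice:

import numpy as np

def reflect_plane(a, b, c, d):
    # 4x4 affine reflection about the plane a x + b y + c z + d = 0 (unit normal assumed).
    return np.array([[1 - 2*a*a, -2*a*b,    -2*a*c,    -2*a*d],
                     [-2*a*b,    1 - 2*b*b, -2*b*c,    -2*b*d],
                     [-2*a*c,    -2*b*c,    1 - 2*c*c, -2*c*d],
                     [0.0,       0.0,       0.0,        1.0]])

M = reflect_plane(0.0, 0.0, 1.0, -1.0)           # the plane z = 1
print(M @ np.array([0.0, 0.0, 3.0, 1.0]))        # [ 0.  0. -1.  1.]: the point (0,0,3) maps to (0,0,-1)
print(M @ np.array([0.0, 0.0, 1.0, 0.0]))        # [ 0.  0. -1.  0.]: a direction is only flipped, w stays 0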
== Composing and inverting transformations ==
One of the main motivations for using matrices to represent linear transformations is that transformations can then be easily composed and inverted.
Composition is accomplished by matrix multiplication. Row and column vectors are operated upon by matrices, rows on the left and columns on the right. Since text reads from left to right, column vectors are preferred when transformation matrices are composed:
If A and B are the matrices of two linear transformations, then the effect of first applying A and then B to a column vector x is given by:
{\displaystyle \mathbf {B} (\mathbf {A} \mathbf {x} )=(\mathbf {BA} )\mathbf {x} .}
In other words, the matrix of the combined transformation A followed by B is simply the product BA of the individual matrices; note that the matrix applied first appears on the right of the product.
When A is an invertible matrix there is a matrix A−1 that represents a transformation that "undoes" A since its composition with A is the identity matrix. In some practical applications, inversion can be computed using general inversion algorithms or by performing inverse operations (that have obvious geometric interpretation, like rotating in opposite direction) and then composing them in reverse order. Reflection matrices are a special case because they are their own inverses and don't need to be separately calculated.
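A minimal sketch of both points, assuming NumPy is available:

import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])            # rotate 90 degrees counterclockwise
B = np.array([[2.0,  0.0], [0.0, 2.0]])            # scale by 2
x = np.array([1.0, 1.0])

print(np.allclose(B @ (A @ x), (B @ A) @ x))       # True: "A then B" is the single matrix BA
print(np.allclose(np.linalg.inv(A) @ (A @ x), x))  # True: the inverse matrix undoes A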
== Other kinds of transformations ==
=== Affine transformations ===
To represent affine transformations with matrices, we can use homogeneous coordinates. This means representing a 2-vector (x, y) as a 3-vector (x, y, 1), and similarly for higher dimensions. Using this system, translation can be expressed with matrix multiplication. The functional form
x′ = x + tx; y′ = y + ty becomes:
{\displaystyle {\begin{bmatrix}x'\\y'\\1\end{bmatrix}}={\begin{bmatrix}1&0&t_{x}\\0&1&t_{y}\\0&0&1\end{bmatrix}}{\begin{bmatrix}x\\y\\1\end{bmatrix}}.}
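A minimal sketch of a translation applied as a matrix product on homogeneous coordinates, assuming NumPy is available:

import numpy as np

tx, ty = 3.0, -2.0
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])
print(T @ np.array([1.0, 1.0, 1.0]))    # [ 4. -1.  1.]: the point (1, 1) is translated to (4, -1)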
All ordinary linear transformations are included in the set of affine transformations, and can be described as a simplified form of affine transformations. Therefore, any linear transformation can also be represented by a general transformation matrix. The latter is obtained by expanding the corresponding linear transformation matrix by one row and column, filling the extra space with zeros except for the lower-right corner, which must be set to 1. For example, the counter-clockwise rotation matrix from above becomes:
{\displaystyle {\begin{bmatrix}\cos \theta &-\sin \theta &0\\\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}}
Using transformation matrices containing homogeneous coordinates, translations become linear, and thus can be seamlessly intermixed with all other types of transformations. The reason is that the real plane is mapped to the w = 1 plane in real projective space, and so translation in real Euclidean space can be represented as a shear in real projective space. Although a translation is a non-linear transformation in a 2-D or 3-D Euclidean space described by Cartesian coordinates (i.e. it can't be combined with other transformations while preserving commutativity and other properties), it becomes, in a 3-D or 4-D projective space described by homogeneous coordinates, a simple linear transformation (a shear).
More affine transformations can be obtained by composition of two or more affine transformations. For example, given a translation T' with vector (t'x, t'y), a rotation R by an angle θ counter-clockwise, a scaling S with factors (sx, sy) and a translation T of vector (tx, ty), the result M of T'RST is:
{\displaystyle {\begin{bmatrix}s_{x}\cos \theta &-s_{y}\sin \theta &t_{x}s_{x}\cos \theta -t_{y}s_{y}\sin \theta +t'_{x}\\s_{x}\sin \theta &s_{y}\cos \theta &t_{x}s_{x}\sin \theta +t_{y}s_{y}\cos \theta +t'_{y}\\0&0&1\end{bmatrix}}}
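This closed form can be checked by building the four homogeneous matrices and multiplying them in the stated order. A minimal sketch, assuming NumPy is available, with arbitrary parameter values:

import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def scaling(sx, sy):
    return np.diag([sx, sy, 1.0])

theta, sx, sy, tx, ty, tpx, tpy = 0.3, 2.0, 0.5, 1.0, -1.0, 4.0, 5.0
M = translation(tpx, tpy) @ rotation(theta) @ scaling(sx, sy) @ translation(tx, ty)

c, s = np.cos(theta), np.sin(theta)
closed_form = np.array([[sx*c, -sy*s, tx*sx*c - ty*sy*s + tpx],
                        [sx*s,  sy*c, tx*sx*s + ty*sy*c + tpy],
                        [0,     0,    1]])
print(np.allclose(M, closed_form))       # True: the product T'RST matches the matrix above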
When using affine transformations, the homogeneous component of a coordinate vector (normally called w) will never be altered. One can therefore safely assume that it is always 1 and ignore it. However, this is not true when using perspective projections.
=== Perspective projection ===
Another type of transformation, of importance in 3D computer graphics, is the perspective projection. Whereas parallel projections are used to project points onto the image plane along parallel lines, the perspective projection projects points onto the image plane along lines that emanate from a single point, called the center of projection. This means that an object has a smaller projection when it is far away from the center of projection and a larger projection when it is closer (see also reciprocal function).
The simplest perspective projection uses the origin as the center of projection, and the plane at z = 1 as the image plane. The functional form of this transformation is then x′ = x/z; y′ = y/z. We can express this in homogeneous coordinates as:
{\displaystyle {\begin{bmatrix}x_{c}\\y_{c}\\z_{c}\\w_{c}\end{bmatrix}}={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&1&0\end{bmatrix}}{\begin{bmatrix}x\\y\\z\\1\end{bmatrix}}={\begin{bmatrix}x\\y\\z\\z\end{bmatrix}}}
After carrying out the matrix multiplication, the homogeneous component wc will be equal to the value of z and the other three will not change. Therefore, to map back into the real plane we must perform the homogeneous divide or perspective divide by dividing each component by wc:
{\displaystyle {\begin{bmatrix}x'\\y'\\z'\\1\end{bmatrix}}={\frac {1}{w_{c}}}{\begin{bmatrix}x_{c}\\y_{c}\\z_{c}\\w_{c}\end{bmatrix}}={\begin{bmatrix}x/z\\y/z\\1\\1\end{bmatrix}}}
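A minimal sketch of this projection and the perspective divide, assuming NumPy is available:

import numpy as np

P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])    # the last row copies z into the homogeneous component w_c

p = np.array([2.0, 4.0, 4.0, 1.0])      # a point at z = 4
clip = P @ p                            # [2. 4. 4. 4.]
print(clip / clip[3])                   # [0.5 1.  1.  1. ]: the point projects to (0.5, 1) on z = 1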
More complicated perspective projections can be composed by combining this one with rotations, scales, translations, and shears to move the image plane and center of projection wherever they are desired.
== See also ==
3D projection
Change of basis
Image rectification
Pose (computer vision)
Rigid transformation
Transformation (function)
Transformation geometry
== References ==
== External links ==
The Matrix Page Practical examples in POV-Ray
Reference page - Rotation of axes
Linear Transformation Calculator
Transformation Applet - Generate matrices from 2D transformations and vice versa.
Coordinate transformation under rotation in 2D
Excel Fun - Build 3D graphics from a spreadsheet
Audio crossovers are a type of electronic filter circuitry that splits an audio signal into two or more frequency ranges, so that the signals can be sent to loudspeaker drivers that are designed to operate within different frequency ranges. The crossover filters can be either active or passive. They are often described as two-way or three-way, which indicate, respectively, that the crossover splits a given signal into two frequency ranges or three frequency ranges. Crossovers are used in loudspeaker cabinets, power amplifiers in consumer electronics (hi-fi, home cinema sound and car audio) and pro audio and musical instrument amplifier products. For the latter two markets, crossovers are used in bass amplifiers, keyboard amplifiers, bass and keyboard speaker enclosures and sound reinforcement system equipment (PA speakers, monitor speakers, subwoofer systems, etc.).
Crossovers are used because most individual loudspeaker drivers are incapable of covering the entire audio spectrum from low frequencies to high frequencies with acceptable relative volume and absence of distortion. Most hi-fi speaker systems and sound reinforcement system speaker cabinets use a combination of multiple loudspeaker drivers, each catering to a different frequency band. A standard simple example is in hi-fi and PA system cabinets that contain a woofer for low and mid frequencies and a tweeter for high frequencies. Since a sound signal source, be it recorded music from a CD player or a live band's mix from an audio console, has all of the low, mid and high frequencies combined, a crossover circuit is used to split the audio signal into separate frequency bands that can be separately routed to loudspeakers, tweeters or horns optimized for those frequency bands.
Passive crossovers are probably the most common type of audio crossover. They use a network of passive electrical components (e.g., capacitors, inductors and resistors) to split up an amplified signal coming from one power amplifier so that it can be sent to two or more loudspeaker drivers (e.g., a woofer and a very low frequency subwoofer, or a woofer and a tweeter, or a woofer-midrange-tweeter combination).
Active crossovers are distinguished from passive crossovers in that they split up an audio signal prior to the power amplification stage so that it can be sent to two or more power amplifiers, each of which is connected to a separate loudspeaker driver. Home cinema 5.1 surround sound audio systems use a crossover that separates out the very-low frequency signal, so that it can be sent to a subwoofer, and then sending the remaining low-, mid- and high-range frequencies to five speakers which are placed around the listener. In a typical application, the signals sent to the surround speaker cabinets are further split up using a passive crossover into a low/mid-range woofer and a high-range tweeter. Active crossovers come in both digital and analog varieties.
Digital active crossovers often include additional signal processing, such as limiting, delay, and equalization. Signal crossovers allow the audio signal to be split into bands that are processed separately before they are mixed together again. Some examples are multiband compression, limiting, de-essing, multiband distortion, bass enhancement, high frequency exciters, and noise reduction such as Dolby A noise reduction.
== Overview ==
The definition of an ideal audio crossover changes relative to the task and audio application at hand. If the separate bands are to be mixed back together again (as in multiband processing), then the ideal audio crossover would split the incoming audio signal into separate bands that do not overlap or interact and which result in an output signal unchanged in frequency, relative levels, and phase response. This ideal performance can only be approximated. How to implement the best approximation is a matter of lively debate. On the other hand, if the audio crossover separates the audio bands in a loudspeaker, there is no requirement for mathematically ideal characteristics within the crossover itself, as the frequency and phase response of the loudspeaker drivers within their mountings will eclipse the results. Satisfactory output of the complete system comprising the audio crossover and the loudspeaker drivers in their enclosure(s) is the design goal. Such a goal is often achieved using non-ideal, asymmetric crossover filter characteristics.
Many different crossover types are used in audio, but they generally belong to one of the following classes.
== Classification ==
=== Classification based on the number of filter sections ===
Loudspeakers are often classified as "N-way", where N is the number of drivers in the system. For instance, a loudspeaker with a woofer and a tweeter is a 2-way loudspeaker system. An N-way loudspeaker usually has an N-way crossover to divide the signal among the drivers. A 2-way crossover consists of a low-pass and a high-pass filter. A 3-way crossover is constructed as a combination of low-pass, band-pass and high-pass filters (LPF, BPF and HPF respectively). The BPF section is in turn a combination of HPF and LPF sections. 4 (or more) way crossovers are not very common in speaker design, primarily due to the complexity involved, which is not generally justified by better acoustic performance.
An extra HPF section may be present in an "N-way" loudspeaker crossover to protect the lowest-frequency driver from frequencies lower than it can safely handle. Such a crossover would then have a bandpass filter for the lowest-frequency driver. Similarly, the highest-frequency driver may have a protective LPF section to prevent high-frequency damage, though this is far less common.
Recently, a number of manufacturers have begun using what is often called "N.5-way" crossover techniques for stereo loudspeaker crossovers. This usually indicates the addition of a second woofer that plays the same bass range as the main woofer but rolls off far before the main woofer does.
Remark: The filter sections mentioned here are not to be confused with the individual 2-pole filter sections of which a higher-order filter consists.
=== Classification based on components ===
Crossovers can also be classified based on the type of components used.
==== Passive ====
A passive crossover splits up an audio signal after it is amplified by a single power amplifier, so that the amplified signal can be sent to two or more driver types, each of which covers different frequency ranges. These crossovers are made entirely of passive components and circuitry; the term "passive" means that no additional power source is needed for the circuitry. A passive crossover just needs to be connected by wiring to the power amplifier signal. Passive crossovers are usually arranged in a Cauer topology to achieve a Butterworth filter effect. Passive filters use resistors combined with reactive components such as capacitors and inductors. Very high-performance passive crossovers are likely to be more expensive than active crossovers since individual components capable of good performance at the high currents and voltages at which speaker systems are driven are hard to make.
Inexpensive consumer electronics products, such as budget-priced Home theater in a box packages and low-cost boom boxes, may use lower quality passive crossovers, often utilizing lower-order filter networks with fewer components. Expensive hi-fi speaker systems and receivers may use higher quality passive crossovers, to obtain improved sound quality and lower distortion. The same price/quality approach is often used with sound reinforcement system equipment and musical instrument amplifiers and speaker cabinets; a low-priced stage monitor, PA speaker or bass amplifier speaker cabinet will typically use lower quality, lower priced passive crossovers, whereas high-priced, high-quality cabinets typically will use better quality crossovers. Passive crossovers may use capacitors made from polypropylene, metalized polyester foil, paper and electrolytic capacitors technology. Inductors may have air cores, powdered metal cores, ferrite cores, or laminated silicon steel cores, and most are wound with enameled copper wire.
Some passive networks include devices such as fuses, PTC devices, bulbs or circuit breakers to protect the loudspeaker drivers from accidental overpowering (e.g., from sudden surges or spikes). Modern passive crossovers increasingly incorporate equalization networks (e.g., Zobel networks) that compensate for the changes in impedance with frequency inherent in virtually all loudspeakers. The issue is complex, as part of the change in impedance is due to acoustic loading changes across a driver's passband.
Two disadvantages of passive networks are that they may be bulky and cause power loss. They are not only frequency specific, but also impedance specific (i.e. their response varies with the electrical load that they are connected to). This prevents their interchangeability with speaker systems of different impedances. Ideal crossover filters, including impedance compensation and equalization networks, can be very difficult to design, as the components interact in complex ways. Crossover design expert Siegfried Linkwitz said of them that "the only excuse for passive crossovers is their low cost. Their behavior changes with the signal level-dependent dynamics of the drivers. They block the power amplifier from taking maximum control over the voice coil motion. They are a waste of time, if accuracy of reproduction is the goal." Alternatively, passive components can be utilized to construct filter circuits before the amplifier. This implementation is called a passive line-level crossover.
==== Active ====
An active crossover contains active components in its filters, such as transistors and operational amplifiers. In recent years, the most commonly used active device is an operational amplifier. In contrast to passive crossovers, which operate after the power amplifier's output at high current and in some cases high voltage, active crossovers are operated at levels that are suited to power amplifier inputs. On the other hand, all circuits with gain introduce noise, and such noise has a deleterious effect when introduced prior to the signal being amplified by the power amplifiers.
Active crossovers always require the use of power amplifiers for each output band. Thus a 2-way active crossover needs two amplifiers—one for the woofer and one for the tweeter. This means that a loudspeaker system that is based on active crossovers will often cost more than a passive-crossover-based system. Despite the cost and complication disadvantages, active crossovers provide the following advantages over passive ones:
a frequency response independent of the dynamic changes in a driver's electrical characteristics (e.g. from heating of the voice coil)
typically, the possibility of an easy way to vary or fine-tune each frequency band to the specific drivers used. Examples would be crossover slope, filter type (e.g., Bessel, Butterworth, Linkwitz-Riley, etc.), relative levels, etc.
better isolation of each driver from the signals being handled by other drivers, thus reducing intermodulation distortion and overdriving
the power amplifiers are directly connected to the speaker drivers, thereby maximizing amplifier damping control of the speaker voice coil, reducing consequences of dynamic changes in driver electrical characteristics, all of which are likely to improve the transient response of the system
reduction in power amplifier output requirement. With no energy being lost in passive components, amplifier requirements are reduced considerably (up to 1/2 in some cases), reducing costs, and potentially increasing quality.
==== Digital ====
Active crossovers can be implemented digitally using a digital signal processor or other microprocessor. They either use digital approximations to traditional analog circuits, known as IIR filters (Bessel, Butterworth, Linkwitz-Riley etc.), or they use Finite Impulse Response (FIR) filters. IIR filters have many similarities with analog filters and are relatively undemanding of CPU resources; FIR filters, on the other hand, usually have a higher order and therefore require more resources for similar characteristics. They can be designed and built so that they have a linear phase response, which is thought desirable by many involved in sound reproduction. There are drawbacks though: in order to achieve linear phase response, a longer delay time is incurred than would be necessary with IIR or minimum-phase FIR filters. IIR filters, which are by nature recursive, have the drawback that, if not carefully designed, they may enter limit cycles, resulting in non-linear distortion.
==== Mechanical ====
This crossover type is mechanical and uses the properties of the materials in a driver diaphragm to achieve the necessary filtering. Such crossovers are commonly found in full-range speakers which are designed to cover as much of the audio band as possible. One such is constructed by coupling the cone of the speaker to the voice coil bobbin through a compliant section and directly attaching a small lightweight whizzer cone to the bobbin. This compliant section serves as a compliant filter, so the main cone is not vibrated at higher frequencies. The whizzer cone responds to all frequencies, but due to its smaller size, it only gives a useful output at higher frequencies, thereby implementing a mechanical crossover function. Careful selection of materials used for the cone, whizzer and suspension elements determines the crossover frequency and the effectiveness of the crossover. Such mechanical crossovers are complex to design, especially if high fidelity is desired. Computer-aided design has largely replaced the laborious trial and error approach that was historically used. Over several years, the compliance of the materials may change, negatively affecting the frequency response of the speaker.
A more common approach is to employ the dust cap as a high-frequency radiator. The dust cap radiates low frequencies, moving as part of the main assembly, but due to low mass and reduced damping, radiates increased energy at higher frequencies. As with whizzer cones, careful selection of material, shape and position are required to provide smooth, extended output. High frequency dispersion is somewhat different for this approach than for whizzer cones. A related approach is to shape the main cone with such profile, and of such materials, that the neck area remains more rigid, radiating all frequencies, while the outer areas of the cone are selectively decoupled, radiating only at lower frequencies. Cone profiles and materials can be modeled using finite element analysis software and the results are predicted to excellent tolerances.
Speakers which use these mechanical crossovers have some advantages in sound quality despite the difficulties of designing and manufacturing them and despite the inevitable output limitations. Full-range drivers have a single acoustic center and can have relatively modest phase change across the audio spectrum. For best performance at low frequencies, these drivers require careful enclosure design. Their small size (typically 165 to 200 mm) requires considerable cone excursion to reproduce bass effectively. However, the short voice coils, which are necessary for reasonable high-frequency performance, can only move over a limited range. Nevertheless, within these constraints, cost and complications are reduced, as no crossovers are required.
=== Classification based on filter order or slope ===
Just as filters have different orders, so do crossovers, depending on the filter slope they implement. The final acoustic slope may be completely determined by the electrical filter or may be achieved by combining the electrical filter's slope with the natural characteristics of the driver. In the former case, the only requirement is that each driver has a flat response at least to the point where its signal is approximately −10dB down from the passband. In the latter case, the final acoustic slope is usually steeper than that of the electrical filters used. A third- or fourth-order acoustic crossover often has just a second-order electrical filter. This requires that speaker drivers be well behaved a considerable way from the nominal crossover frequency, and further that the high-frequency driver be able to survive a considerable input in a frequency range below its crossover point. This is difficult to achieve in actual practice. In the discussion below, the characteristics of the electrical filter order are discussed, followed by a discussion of crossovers having that acoustic slope and their advantages or disadvantages.
Most audio crossovers use first- to fourth-order electrical filters. Higher orders are not generally implemented in passive crossovers for loudspeakers but are sometimes found in electronic equipment under circumstances for which their considerable cost and complexity can be justified.
==== First order ====
First-order filters have a 20 dB/decade (or 6 dB/octave) slope. All first-order filters have a Butterworth filter characteristic. First-order filters are considered by many audiophiles to be ideal for crossovers. This is because this filter type is 'transient perfect', meaning that the sum of the low-pass and high-pass outputs passes both amplitude and phase unchanged across the range of interest. It also uses the fewest parts and has the lowest insertion loss (if passive). A first-order crossover allows more signal content consisting of unwanted frequencies to get through in the LPF and HPF sections than do higher-order configurations. While woofers can easily handle this (aside from generating distortion at frequencies above those that they can properly reproduce), smaller high-frequency drivers (especially tweeters) are more likely to be damaged, since they are not capable of handling large power inputs at frequencies below their rated crossover point.
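The 'transient perfect' summing can be seen directly from the transfer functions: a first-order low-pass 1/(1 + s/ω0) and high-pass (s/ω0)/(1 + s/ω0) sum to exactly 1. A minimal numerical sketch, assuming NumPy is available and an arbitrary 2 kHz crossover point:

import numpy as np

f = np.logspace(1, 4.3, 200)             # 10 Hz to about 20 kHz
fc = 2000.0                               # assumed crossover frequency
s = 1j * f / fc                           # normalised complex frequency j*f/fc
lp = 1 / (1 + s)                          # first-order low-pass
hp = s / (1 + s)                          # first-order high-pass
print(np.allclose(lp + hp, 1.0))          # True: flat amplitude and zero phase shift at every frequency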
In practice, speaker systems with true first-order acoustic slopes are difficult to design because they require large overlapping driver bandwidth, and the shallow slopes mean that non-coincident drivers interfere over a wide frequency range and cause large response shifts off-axis.
==== Second order ====
Second-order filters have a 40 dB/decade (or 12 dB/octave) slope. Second-order filters can have a Bessel, Linkwitz-Riley or Butterworth characteristic depending on design choices and the components that are used. This order is commonly used in passive crossovers as it offers a reasonable balance between complexity, response, and higher-frequency driver protection. When designed with time-aligned physical placement, these crossovers have a symmetrical polar response, as do all even-order crossovers.
It is commonly thought that there will always be a phase difference of 180° between the outputs of a (second-order) low-pass filter and a high-pass filter having the same crossover frequency. And so, in a 2-way system, the high-pass section's output is usually connected to the high-frequency driver 'inverted', to correct for this phase problem. For passive systems, the tweeter is wired with opposite polarity to the woofer; for active crossovers the high-pass filter's output is inverted. In 3-way systems the mid-range driver or filter is inverted. However, this is generally only true when the speakers have a wide response overlap and the acoustic centers are physically aligned.
==== Third order ====
Third-order filters have a 60 dB/decade (or 18 dB/octave) slope. These crossovers usually have Butterworth filter characteristics; phase response is very good, the level sum being flat and in phase quadrature, similar to a first-order crossover. The polar response is asymmetric. In the original D'Appolito MTM arrangement, a symmetrical arrangement of drivers is used to create a symmetrical off-axis response when using third-order crossovers. Third-order acoustic crossovers are often built from first- or second-order filter circuits.
==== Fourth order ====
Fourth-order filters have an 80 dB/decade (or 24 dB/octave) slope. These filters are relatively complex to design in passive form, because the components interact with each other, but modern computer-aided crossover optimisation design software can produce accurate designs. Steep-slope passive networks are less tolerant of parts value deviations or tolerances, and more sensitive to mis-termination with reactive driver loads (although this is also a problem with lower-order crossovers). A 4th-order crossover with −6 dB crossover point and flat summing is also known as a Linkwitz-Riley crossover (named after its inventors), and can be constructed in active form by cascading two 2nd-order Butterworth filter sections. The low-frequency and high-frequency output signals of the Linkwitz–Riley crossover type are in phase, thus avoiding partial phase inversion if the crossover band-passes are electrically summed, as they would be within the output stage of a multiband compressor. Crossovers used in loudspeaker design do not require the filter sections to be in phase; smooth output characteristics are often achieved using non-ideal, asymmetric crossover filter characteristics. Bessel, Butterworth, and Chebyshev are among the possible crossover topologies.
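As a rough illustration of the Linkwitz-Riley construction mentioned above (an LR4 section as two cascaded 2nd-order Butterworth sections), here is a sketch assuming SciPy and NumPy are available; the 1 kHz crossover frequency and 48 kHz sample rate are arbitrary choices:

import numpy as np
from scipy import signal

fs, fc = 48000, 1000.0
b_lp, a_lp = signal.butter(2, fc, btype='lowpass', fs=fs)    # one 2nd-order Butterworth section
b_hp, a_hp = signal.butter(2, fc, btype='highpass', fs=fs)

# Cascading two identical sections multiplies their transfer functions,
# i.e. convolves the coefficient polynomials.
lr4_lp = (np.convolve(b_lp, b_lp), np.convolve(a_lp, a_lp))
lr4_hp = (np.convolve(b_hp, b_hp), np.convolve(a_hp, a_hp))

w, h_lp = signal.freqz(*lr4_lp, worN=2048, fs=fs)
_, h_hp = signal.freqz(*lr4_hp, worN=2048, fs=fs)
idx = np.argmin(np.abs(w - fc))
print(20 * np.log10(np.abs(h_lp[idx])))           # about -6 dB at the crossover point
print(np.max(np.abs(np.abs(h_lp + h_hp) - 1)))    # small: the in-phase sum is close to flat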
Such steep-slope filters have greater problems with overshoot and ringing but there are several key advantages, even in their passive form, such as the potential for a lower crossover point and increased power handling for tweeters, together with less overlap between drivers, dramatically reducing the shifting of the main lobe of a multi-way loudspeaker system's radiation pattern with frequency, or other unwelcome off-axis effects. With less frequency overlap between adjacent drivers, their geometric location relative to each other becomes less critical and allows more latitude in speaker system cosmetics or (in-car audio) practical installation constraints.
==== Higher order ====
Passive crossovers giving acoustic slopes higher than fourth-order are not common because of cost and complexity. Filters with slopes of up to 96 dB per octave are available in active crossovers and loudspeaker management systems.
==== Mixed order ====
Crossovers can also be constructed with mixed-order filters. For example, a second-order low-pass filter can be combined with a third-order high-pass filter. These are generally passive and are used for several reasons, often when the component values are found by computer program optimization. A higher-order tweeter crossover can sometimes help to compensate for the time offset between the woofer and tweeter, caused by non-aligned acoustic centers.
==== Notched ====
There is a class of crossover filters that produce null responses in the high-pass and low-pass outputs at frequencies close to the crossover frequency. Within their respective stopbands, the outputs have a high initial rate of attenuation, while the sum of their outputs has a flat all-pass response. Their two outputs maintain a constant zero-phase difference across the transition, thus enhancing their lobing performance with noncoincident loudspeaker drivers.
=== Classification based on circuit topology ===
==== Parallel ====
Parallel crossovers are by far the most common. Electrically the filters are in parallel and thus the various filter sections do not interact. This makes two-way crossovers easier to design because, in terms of electrical impedance, the sections can be considered separate and because component tolerance variations will be isolated but like all crossovers, the final design relies on the output of the drivers to be complementary acoustically and this, in turn, requires careful matching in amplitude and phase of the underlying crossover. Parallel crossovers also have the advantage of allowing the speaker drivers to be bi-wired, a feature whose benefits are hotly disputed.
==== Series ====
In this topology, the individual filters are connected in series, and a driver or driver combination is connected in parallel with each filter. To understand the signal path in this type of crossover, refer to the "Series Crossover" figure, and consider a high-frequency signal that, during a certain moment, has a positive voltage on the upper Input terminal compared to the lower Input terminal. The low-pass filter presents a high impedance to the signal, and the tweeter presents a low impedance; so the signal passes through the tweeter. The signal continues to the connection point between the woofer and the high-pass filter. There, the HPF presents a low impedance to the signal, so the signal passes through the HPF, and appears at the lower Input terminal. A low-frequency signal with a similar instantaneous voltage characteristic first passes through the LPF, then the woofer, and appears at the lower Input terminal.
==== Derived ====
Derived crossovers include active crossovers in which one of the crossover responses is derived from the other through the use of a differential amplifier. For example, the difference between the input signal and the output of the high-pass section is a low-pass response. Thus, when a differential amplifier is used to extract this difference, its output constitutes the low-pass filter section. The main advantage of derived filters is that they produce no phase difference between the high-pass and low-pass sections at any frequency. The disadvantages are either:
that the high-pass and low-pass sections often have different levels of attenuation in their stopbands, i.e., their slopes are asymmetrical, or
that the response of one or both sections peaks near the crossover frequency, or both.
In the case of (1), above, the usual situation is that the derived low-pass response attenuates at a much slower rate than the fixed response. This requires the speaker to which it is directed to continue to respond to signals deep into the stopband where its physical characteristics may not be ideal. In the case of (2), above, both speakers are required to operate at higher volume levels as the signal nears the crossover points. This uses more amplifier power and may drive the speaker cones into nonlinearity.
=== Models and simulation ===
Professionals and hobbyists have access to a range of computer tools that were not available before. These computer-based measurement and simulation tools allow for the modeling and virtual design of various parts of a speaker system which greatly accelerate the design process and improve the quality of a speaker. These tools range from commercial to free offerings. Their scope also varies. Some may focus on woofer/cabinet design and issues related to cabinet volume and ports (if any), while others may focus on the crossover and frequency response. Some tools, for instance, only simulate the baffle step response.
In the period before computer modeling made it affordable and quick to simulate the combined effects of drivers, crossovers and cabinets, a number of issues could go unnoticed by the speaker designer. For instance, simplistic three-way crossovers were designed as a pair of two-way crossovers: the tweeter/mid-range and the other the mid-range/woofer sections. This could create excess gain and a 'haystack' response in the mid-range output, together with a lower than anticipated input impedance. Other issues such as improper phase matching or incomplete modeling of the driver impedance curves could also go unnoticed. These problems were not impossible to solve but required more iterations, time and effort than they do today.
== See also ==
Bass management
Electrical characteristics of a dynamic loudspeaker
Full-range speaker
Loudspeaker enclosure
Midrange speaker
Powered speakers
Subwoofer
Super tweeter
Tweeter
Woofer
== References ==
Equalization, or simply EQ, in sound recording and reproduction is the process of adjusting the volume of different frequency bands within an audio signal. The circuit or equipment used to achieve this is called an equalizer.
Most hi-fi equipment uses relatively simple filters to make bass and treble adjustments. Graphic and parametric equalizers have much more flexibility in tailoring the frequency content of an audio signal. Broadcast and recording studios use sophisticated equalizers capable of much more detailed adjustments, such as eliminating unwanted sounds or making certain instruments or voices more prominent. Because of this ability, they can be aptly described as "frequency-specific volume knobs."
Equalizers are used in recording and radio studios, production control rooms, and live sound reinforcement and in instrument amplifiers, such as guitar amplifiers, to correct or adjust the response of microphones, instrument pickups, loudspeakers, and hall acoustics. Equalization may also be used to eliminate or reduce unwanted sounds (e.g., low-frequency hum coming from a guitar amplifier), make certain instruments or voices more (or less) prominent, enhance particular aspects of an instrument's tone, or combat feedback (howling) in a public address system. Equalizers are also used in music production to adjust the timbre of individual instruments and voices by adjusting their frequency content and to fit individual instruments within the overall frequency spectrum of the mix.
== Terminology ==
The concept of equalization was first applied in correcting the frequency response of telephone lines using passive filters; this was prior to the invention of electronic amplification. Initially, equalization was used to compensate for the uneven frequency response of an electric system by applying a filter having the opposite response, thus restoring the fidelity of the transmission. A plot of the system's net frequency response would be a flat line, as its response at any frequency would be equal to its response at any other frequency. Hence the term equalization.
Later the concept was applied in audio engineering to adjust the frequency response in recording, reproduction, and live sound reinforcement systems. Sound engineers correct the frequency response of a sound system so that the frequency balance of the music as heard through speakers better matches the original performance picked up by a microphone. Audio amplifiers have long had filters or controls to modify their frequency response. These are most often in the form of variable bass and treble controls, and switches to apply low-cut or high-cut filters for elimination of low-frequency rumble and high-frequency hiss respectively.
Graphic equalizers and other equipment developed for improving fidelity have since been used by recording engineers to modify frequency responses for aesthetic reasons. Hence in the field of audio electronics the term equalization is now broadly used to describe the application of such filters regardless of intent. This broad definition, therefore, includes all linear filters at the disposal of a listener or engineer.
A British EQ or British style equalizer is one with similar properties to those on mixing consoles made in the UK by companies such as Amek, Neve and Soundcraft from the 1950s through to the 1970s. Later on, as other manufacturers started to market their products, these British companies began touting their equalizers as being a cut above the rest. Today, many non-British companies such as Behringer and Mackie advertise British EQ on their equipment. A British style EQ seeks to replicate the qualities of the expensive British mixing consoles.
== History ==
Filtering audio frequencies dates back at least to acoustic telegraphy and multiplexing in general. Audio electronic equipment evolved to incorporate filtering elements as consoles in radio stations began to be used for recording as much as broadcast. Early filters included basic bass and treble controls featuring fixed frequency centers, and fixed levels of cut or boost. These filters worked over broad frequency ranges. Variable equalization in audio reproduction was first used by John Volkman working at RCA in the 1920s. That system was used to equalize a motion picture theater sound playback system.
The Langevin EQ-251-A, designed by Art Davis, was the first equalizer to use slide controls. It featured two passive equalization sections, a bass shelving filter, and a pass band filter. Each filter had switchable frequencies and used a 15-position slide switch to adjust cut or boost. The passive design required 14 dB of make-up gain. Born in Salt Lake City, Davis worked in Southern California most of his life for a series of companies including Cinema Engineering (from 1938), Langevin, Electrodyne, Cetec and Altec. The first true graphic equalizer was the type 7080, an active tube device developed in the 1950s by Davis's Cinema Engineering company. It featured six bands, each 1.5 octaves wide, with a boost or cut range of 8 dB. It used a slide switch to adjust each band in 1 dB steps. Three summing amps smoothly restored the gain lost in the filter circuits. Davis followed this in 1961 with the Langevin EQ-252-A having seven sliders, then reworked it for Altec Lansing to create the Model 9062A EQ which sold well into the 1970s. In 1967 Davis developed the first 1/3 octave variable notch filter set, the Altec-Lansing "Acousta-Voice" system.
In 1966, Burgess Macneal and George Massenburg envisioned a tunable EQ for a new recording console. Bob Meushaw, a friend of Massenburg, built the equalizer. According to Massenburg, "Four people could possibly lay claim to the modern concept: Bob Meushaw, Burgess Macneal, Daniel Flickinger, and myself… Our (Bob’s, Burgess’ and my) sweep-tunable EQ was borne, more or less, out of an idea that Burgess and I had around 1966 or 1967 for an EQ… three controls adjusting, independently, the parameters for each of three bands for a recording console… I wrote and delivered the AES paper on Parametrics at the Los Angeles show in 1972… It’s the first mention of 'Parametric' associated with sweep-tunable EQ."
Daniel N. Flickinger introduced the first parametric equalizer in early 1971. His design leveraged a high-performance op-amp of his own design, the 535 series, to achieve filtering circuits that had previously been impossible. Flickinger's patent from early 1971 showed the circuit topology that would come to dominate audio equalization until the present day, as well as the theoretical underpinnings of the elegant circuit. Instead of slide potentiometers working on individual bands of frequency, or rotary switches, Flickinger's circuit allowed arbitrary selection of frequency and cut or boost level in three overlapping bands over the entire audio spectrum. Six knobs on his early EQs would control these sweepable filters. Up to six switches were incorporated to select shelving on the high and low bands, and bypassing for any unused band for the purest signal path.
Similar designs appeared soon thereafter from George Massenburg (in 1972) and Burgess Macneal of ITI corp. In May 1972 Massenburg used the term parametric equalization in a paper presented at the 42nd convention of the Audio Engineering Society. Most channel equalization on mixing consoles made from 1971 to the present day relies upon the designs of Flickinger, Massenburg and Macneal in either semi- or fully-parametric topology. In the late 1990s and in the 2000s, parametric equalizers became increasingly available as digital signal processing (DSP) equipment, usually in the form of plug-ins for various digital audio workstations. Standalone outboard gear versions of DSP parametric equalizers were also quickly introduced after the software versions.
== Filter types ==
Although the range of equalization functions is governed by the theory of linear filters, the adjustment of those functions and the flexibility with which they can be adjusted varies according to the topology of the circuitry and controls presented to the user.
Shelving controls are usually simple first-order filter functions that alter the relative gains between frequencies much higher and much lower than the cutoff frequencies. A low shelf, such as the bass control on most hi-fi equipment, is adjusted to affect the gain of lower frequencies while having no effect well above its cutoff frequency. A high shelf, such as a treble control, adjusts the gain of higher frequencies only. These are broad adjustments designed more to increase the listener's satisfaction than to provide actual equalization in the strict sense of the term.
A parametric equalizer has one or more sections each of which implements a second-order filter function. This involves three adjustments: selection of the center frequency (in Hz), adjustment of the Q which determines the sharpness of the bandwidth, and the level or gain control which determines how much those frequencies are boosted or cut relative to frequencies much above or below the center frequency selected. In a semi-parametric equalizer the bandwidth is preset by the designer. In a quasi-parametric equalizer, the user is given limited switchable options for bandwidth.
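As an illustration of these three adjustments, here is a minimal Python sketch that computes coefficients for one peaking (bell) section using the widely cited RBJ Audio EQ Cookbook formulas (the cookbook is linked in the external links below); the sample rate, center frequency, gain and Q values are arbitrary examples, not values from this article.

```python
import math

def peaking_eq_coeffs(f0, gain_db, q, fs):
    """Biquad coefficients for a peaking EQ section, per the RBJ
    Audio EQ Cookbook; returned as (b, a) normalized so a[0] == 1."""
    a_lin = 10.0 ** (gain_db / 40.0)   # square root of the linear gain
    w0 = 2.0 * math.pi * f0 / fs       # center frequency in radians/sample
    alpha = math.sin(w0) / (2.0 * q)   # bandwidth parameter derived from Q

    b = [1.0 + alpha * a_lin, -2.0 * math.cos(w0), 1.0 - alpha * a_lin]
    a = [1.0 + alpha / a_lin, -2.0 * math.cos(w0), 1.0 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

# Example: boost 1 kHz by +6 dB with Q = 1.4 at a 48 kHz sample rate.
b, a = peaking_eq_coeffs(1000.0, 6.0, 1.4, 48000.0)
```

A negative gain_db value yields a cut, and a larger Q narrows the affected band, matching the description above.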
A graphic equalizer also implements second-order filter functions in a more user-friendly manner but with somewhat less flexibility. This equipment is based on a bank of filters covering the audio spectrum in up to 31 frequency bands. Each second-order filter has a fixed center frequency and Q factor, but an adjustable level. The user can raise or lower each slider in order to visually approximate a graph of the intended frequency response.
Equalization in the context of audio reproduction is not used strictly to compensate for the deficiency of equipment and transmission channels. A high-pass filter modifies a signal by eliminating only lower frequencies. An example of this is a low-cut or rumble filter, which is used to remove infrasonic energy from a program that may consume undue amplifier power and cause excessive diaphragm excursions in (or even damage to) loudspeakers. A low-pass filter only modifies the audio signal by removing high frequencies. An example of this is a high-cut or hiss filter, which is used to remove annoying white noise at the expense of the crispness of the program material.
A first-order low-pass or high-pass filter has a standard response curve that reduces the unwanted frequencies well above or below the cutoff frequency with a slope of 6 dB per octave. A second-order filter will reduce those frequencies with a slope of 12 dB per octave and moreover may be designed with a higher Q or finite zeros in order to effect an even steeper response around the cutoff frequency. For instance, a second-order low-pass notch filter section only reduces (rather than eliminates) very high frequencies, but has a steep response falling to zero at a specific frequency (the so-called notch frequency). Such a filter might be ideal, for instance, in completely removing the 19 kHz FM stereo subcarrier pilot signal while helping to cut even higher frequency subcarrier components remaining from the stereo demultiplexer.
In addition to adjusting the relative amplitude of frequency bands, an audio equalizer usually alters the relative phases of those frequencies. While the human ear is less sensitive to the phase of audio frequencies than to their amplitude, music professionals may favor certain equalizers because of how they affect the timbre of the musical content by way of audible phase artifacts.
=== High-pass and low-pass filters ===
A high-pass filter is a filter, an electronic circuit or device, that passes higher frequencies well but attenuates lower-frequency components. A low-pass filter passes low-frequency components of signals while attenuating higher frequencies. In audio applications these high-pass and low-pass filters are frequently termed low cut and high cut, respectively, to emphasize their effect on the original signal. For instance, sometimes audio equipment will include a switch labeled high cut or described as a hiss filter (hiss being high-frequency noise). In the phonograph era, many stereos would include a switch to introduce a high-pass (low cut) filter, often called a rumble filter, to eliminate infrasonic frequencies. High and low-pass filters are used in audio crossovers to direct energy to the speaker drivers capable of reproducing it. For instance, a low-pass filter is used in the signal chain before a subwoofer to ensure that only deep bass frequencies reach the subwoofer.
=== Shelving filter ===
While high-pass and low-pass filters are useful for removing unwanted signal above or below a set frequency, shelving filters can be used to reduce or increase signals above or below a set frequency. Shelving filters are used as common tone controls (bass and treble) found in consumer audio equipment such as home stereos, and on guitar amplifiers and bass amplifiers. These implement a first-order response and provide an adjustable boost or cut to frequencies above or below a certain point.
A high shelf or treble control will have a frequency response |H(f)| whose square is given by:
{\displaystyle |H(f)|^{2}={\frac {1+(f/f_{z})^{2}}{1+(f/f_{p})^{2}}}}
where fp and fz are called the pole and zero frequencies, respectively. Turning down the treble control increases fz and decreases fp so that frequencies higher than fp are attenuated. Turning up the treble control increases fp and decreases fz so that frequencies higher than fz are boosted. Setting the treble control at the center sets fz = fp so that |H(f)|2 = 1 and the circuit has no effect. At most, the slope of the filter response in the transition region will be 6 dB per octave.
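The shelving behaviour described above is easy to verify numerically. The following short sketch evaluates the squared response directly; the pole and zero frequencies are arbitrary example values.

```python
import math

def high_shelf_gain_db(f, f_z, f_p):
    """Gain in dB of the first-order high-shelf response
    |H(f)|^2 = (1 + (f/f_z)^2) / (1 + (f/f_p)^2)."""
    h_sq = (1.0 + (f / f_z) ** 2) / (1.0 + (f / f_p) ** 2)
    return 10.0 * math.log10(h_sq)  # 10*log10 because h_sq is already squared

# Treble boost: f_z = 1 kHz below f_p = 4 kHz.
for f in (100.0, 1000.0, 10000.0):
    print(f, round(high_shelf_gain_db(f, 1000.0, 4000.0), 2))
# ~0 dB well below f_z, rising toward 20*log10(f_p/f_z) ~ +12 dB above f_p
```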
Similarly the response of a low shelf or bass control can be represented as:
{\displaystyle |H(f)|^{2}=(f_{z}/f_{p})^{2}\,{\frac {1+(f/f_{z})^{2}}{1+(f/f_{p})^{2}}}.}
In this case, the inclusion of the leading factor simply indicates that the response at frequencies much higher than fz or fp is unity and that only bass frequencies are affected.
A high shelving control in which fz is set to infinity, or a low shelving response in which fz is set to zero, implements a first-order low-pass or high-pass filter, respectively. However, the usual tone controls have a more limited range, since their purpose is not to eliminate any frequencies but only to achieve a better balance when, for instance, the treble is lacking and the sound is not crisp. Since the range of possible responses from shelving filters is so limited, some audio engineers consider shelving controls inadequate for equalization tasks; some bass amps and DI boxes therefore provide both low and high shelving controls alongside additional equalization controls.
=== Graphic equalizer ===
In the graphic equalizer, the input signal is sent to a bank of filters. Each filter passes the portion of the signal present in its own frequency range or band. The amplitude passed by each filter is adjusted using a slide control to boost or cut frequency components passed by that filter. The vertical position of each slider thus indicates the gain applied to that frequency band, so that the sliders resemble a graph of the equalizer's response plotted versus frequency.
The number of frequency channels may be matched to the requirements of the intended application. A car audio equalizer might have a total of five to ten frequency bands. An equalizer for professional live sound reinforcement typically has some 25 to 31 bands, for more precise control of feedback problems and equalization of room modes. Such an equalizer is called a 1/3-octave equalizer (spoken informally as "third-octave EQ") because the center frequencies of its filters are spaced one third of an octave apart, three filters to an octave. Equalizers with half as many filters per octave are common where less precise control is required—this design is called a 2/3-octave equalizer.
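The spacing of the center frequencies can be generated directly, as in the sketch below; it assumes the common convention of anchoring the series on 1 kHz, whereas commercial units label their bands with rounded nominal frequencies rather than these exact values.

```python
# Exact 1/3-octave center frequencies for a 31-band graphic EQ,
# spaced a factor of 2**(1/3) apart, with 1 kHz as band index 0.
centers = [1000.0 * 2.0 ** (k / 3.0) for k in range(-17, 14)]

print(len(centers))                              # 31 bands
print(round(centers[0], 1), round(centers[-1]))  # ~19.7 Hz ... ~20159 Hz
```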
=== Parametric equalizer ===
Parametric equalizers are multi-band variable equalizers that allow users to control the three primary filter parameters: gain, center frequency and bandwidth. Gain allows adjustment of the boost or cut produced. The center frequency controls the frequencies affected. The bandwidth (which is inversely related to Q) controls the range of frequencies affected. Parametric equalizers are capable of making much more precise adjustments to the sound than other equalizers and are commonly used in sound recording and live sound reinforcement.
A variant of the parametric equalizer is the semi-parametric equalizer, a sweepable filter. It allows users to control the gain and frequency but uses a pre-set bandwidth. In some cases, semi-parametric equalizers allow the user to select between a wide and a narrow preset bandwidth.
== Filter functions ==
The responses of linear filters are mathematically described in terms of their transfer function or, in layman's terms, frequency response. A transfer function can be decomposed as a combination of first-order responses and second-order responses (implemented as biquad sections). These can be described according to their pole and zero frequencies, which are complex numbers in the case of second-order responses.
=== First-order filters ===
A first-order filter can alter the response of frequencies above and below a point. In the transition region the filter response will have a slope of up to 6 dB per octave. The bass and treble controls in a hi-fi system are each a first-order filter in which the balance of frequencies above and below a point are varied using a single knob. A special case of first-order filters is a first-order high-pass or low-pass filter in which the 6 dB per octave cut of low or high frequencies extends indefinitely. These are the simplest of all filters to implement individually, requiring only a capacitor and resistor.
=== Second-order filters ===
Second-order filters are capable of resonance (or anti-resonance) around a particular frequency. The response of a second-order filter is specified not only by its frequency but also its Q; a higher Q corresponds to a sharper response (smaller bandwidth) around a particular center frequency. For instance, the red response in the accompanying image cuts frequencies around 100 Hz with a higher Q than the blue response, which boosts frequencies around 1000 Hz. Higher Q's correspond to resonant behaviour in which the half-power or −3 dB bandwidth, BW, is given by:
{\displaystyle BW=F_{0}/Q}
where F0 is the resonant frequency of the second-order filter and BW is the bandwidth expressed in the same frequency units as F0. Low-Q filter responses (where Q < 1⁄2) are not said to be resonant and the above formula for bandwidth does not apply.
It is also possible to define the Q of a band-pass function as:
{\displaystyle Q={\frac {\sqrt {2^{N}}}{2^{N}-1}}={\frac {1}{2\sinh \left({\frac {\ln(2)}{2}}N\right)}},}
where N is the bandwidth in octaves. The reverse mapping is:
{\displaystyle N=2\log _{2}\left({\frac {1}{2Q}}+{\sqrt {{\frac {1}{4Q^{2}}}+1}}\right)={\frac {2}{\ln(2)}}\operatorname {arsinh} \left({\frac {1}{2Q}}\right).}
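Both mappings are straightforward to evaluate numerically; a minimal sketch (the one-octave test value is an arbitrary example):

```python
import math

def q_from_octaves(n):
    """Q of a resonant second-order section from its bandwidth N in octaves."""
    return 1.0 / (2.0 * math.sinh(0.5 * math.log(2.0) * n))

def octaves_from_q(q):
    """Reverse mapping: bandwidth in octaves from Q."""
    return (2.0 / math.log(2.0)) * math.asinh(1.0 / (2.0 * q))

q = q_from_octaves(1.0)      # a one-octave-wide band gives Q ~ 1.414
print(q, octaves_from_q(q))  # round-trips back to N = 1.0
```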
A second-order filter response with Q of less than 1/2 can be decomposed into two first-order filter functions, a low-cut and a high-cut (or boost). Of more interest are resonant filter functions which can boost (or cut) a narrow range of frequencies. In addition to specifying the center frequency F0 and the Q, the specification of the filter's zeros determines how much that frequency band will be boosted (or cut). Thus a parametric equalizer section will have three controls for its center frequency F0, bandwidth or Q, and the amount of boost or cut usually expressed in dB.
The range of second-order filter functions is important because any analog filter function can be decomposed into a (usually small) number of these (plus, perhaps, simpler first-order responses). These are implemented directly by each section of a parametric equalizer, where they are explicitly adjusted, while a graphic equalizer based on a filter bank includes one such section per band, whose Q and F0 are not adjustable by the user.
== Uses ==
In sound recording, equalization may be used to adjust frequency responses for practical or aesthetic reasons, where the end result typically is unequal volume levels for the different frequencies. For example, equalization is used to modify an instrument's sound or make certain instruments and sounds more prominent. A recording engineer may use an equalizer to make some high-pitches in a vocal part louder while making low-pitches in a drum part quieter.
Equalization is commonly used to increase the depth of a mix, creating the impression that some sounds in a mono or stereo mix are farther away or closer than others. Equalization is also commonly used to give tracks with similar frequency components complementary spectral contours, known as mirrored equalization. Selected components of parts that would otherwise compete, such as bass guitar and kick drum, are boosted in one part and cut in the other, and vice versa, so that they both stand out.
Equalizers can correct problems posed by a room's acoustics, as an auditorium will generally have an uneven frequency response especially due to standing waves and acoustic dampening. For instance, the frequency response of a room may be analyzed using a spectrum analyzer and a pink noise generator. Then a graphic equalizer can be easily adjusted to compensate for the room's acoustics. Such compensation can also be applied to tweak the sound quality of a recording studio in addition to its use in live sound reinforcement systems and even home hi-fi systems.
During live events where signals from microphones are amplified and sent to speaker systems, equalization is not only used to "flatten" the frequency response but may also be useful in eliminating feedback. When the sound produced by the speakers is picked up by a microphone, it is further reamplified; this recirculation of sound can lead to "howling", requiring the sound technician to reduce the gain for that microphone, perhaps sacrificing the contribution of a singer's voice, for instance. Even at a slightly reduced gain, the feedback will still cause an unpleasant resonant sound around the frequency at which it would howl. But because the feedback is troublesome at a particular frequency, it is possible to cut the gain only around that frequency while preserving the gain at most other frequencies. This can best be done using a parametric equalizer tuned to that very frequency with its amplitude control sharply reduced. By adjusting the equalizer for a narrow bandwidth (high Q), most other frequency components will not be affected. The extreme case when the signal at the filter's center frequency is completely eliminated is known as a notch filter.
An equalizer can be used to correct or modify the frequency response of a loudspeaker system rather than designing the speaker itself to have the desired response. For instance, the Bose 901 speaker system does not use separate larger and smaller drivers to cover the bass and treble frequencies. Instead it uses nine drivers all of the same four-inch diameter, more akin to what one would find in a table radio. However, this speaker system is sold with an active equalizer. That equalizer must be inserted into the amplifier system so that the amplified signal that is finally sent to the speakers has its response increased at the frequencies where the response of these drivers falls off, and vice versa, producing the response intended by the manufacturer.
Tone controls (usually designated "bass" and "treble") are simple shelving filters included in most hi-fi equipment for gross adjustment of the frequency balance. The bass control may be used, for instance, to increase the drum and bass parts at a dance party, or to reduce annoying bass sounds when listening to a person speaking. The treble control might be used to give the percussion a sharper or more "brilliant" sound, or can be used to cut such high frequencies when they have been overemphasized in the program material or simply to accommodate a listener's preference.
A "rumble filter" is a high-pass (low cut) filter with a cutoff typically in the 20 to 40 Hz range; this is the low frequency end of human hearing. "Rumble" is a type of low-frequency noise produced in record players and turntables, particularly older or low quality models. The rumble filter prevents this noise from being amplified and sent to the loudspeakers. Some cassette decks have a switchable "subsonic filter" feature that does the same thing for recordings.
A crossover network is a system of filters designed to direct electrical energy separately to the woofer and tweeter of a 2-way speaker system (and also to the mid-range speaker of a 3-way system). This is most often built into the speaker enclosure and hidden from the user. However, in bi-amplification, these filters operate on the low level audio signals, sending the low-frequency and high-frequency signal components to separate amplifiers, which connect to the woofers and tweeters, respectively.
Equalization is used in a reciprocal manner in certain communication channels and recording technologies. The original music is passed through a particular filter to alter its frequency balance, followed by the channel or recording process. At the end of the channel or when the recording is played, a complementary filter is inserted which precisely compensates for the original filter and recovers the original waveform. For instance, FM broadcasting uses a pre-emphasis filter to boost the high frequencies before transmission, and every receiver includes a matching de-emphasis filter to restore it. The white noise that is introduced by the radio is then also de-emphasized at the higher frequencies (where it is most noticeable) along with the pre-emphasized program, making the noise less audible. Tape recorders used the same approach to reduce "tape hiss" while maintaining fidelity. On the other hand, in the production of vinyl records, a filter is used to reduce the amplitude of low frequencies which otherwise produce large amplitudes on the tracks of a record. Then the groove can take up less physical space, fitting more music on the record. The preamplifier attached to the phono cartridge has a complementary filter boosting those low frequencies, following the standard RIAA equalization curve.
== See also ==
Electronic filter
Loudness compensation
Weighting filter
== Notes ==
== Citations ==
== General sources ==
== External links ==
Discriminating EQ frequencies by ear
Calculator: bandwidth per octave N to quality factor Q and back
EQ Condensed Overview
Audio EQ Cookbook
PreSonus Equalizer Terms and Tips
WikiRecording's Guide to Equalization | Wikipedia/Graphic_equalizer |
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids – liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of water and other liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space, understanding large scale geophysical flows involving oceans/atmosphere and modelling fission weapon detonation.
Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time.
Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases.
== Equations ==
The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the first law of thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem.
In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored.
For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations—which is a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form.
In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state:
{\displaystyle p={\frac {\rho R_{u}T}{M}}}
where p is pressure, ρ is density, and T is the absolute temperature, while Ru is the gas constant and M is molar mass for a particular gas. A constitutive relation may also be useful.
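As a quick numerical illustration of this equation of state, the sketch below computes the density of dry air at standard sea-level conditions; the molar mass and reference conditions are common textbook values, not taken from this article.

```python
R_U = 8.314462618  # universal gas constant, J/(mol*K)

def ideal_gas_density(p, t, molar_mass):
    """Solve the perfect gas equation of state p = rho*R_u*T/M for rho."""
    return p * molar_mass / (R_U * t)

# Dry air (M ~ 0.0289647 kg/mol) at 101325 Pa and 288.15 K:
print(round(ideal_gas_density(101325.0, 288.15, 0.0289647), 3))  # ~1.225 kg/m^3
```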
=== Conservation laws ===
Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow.
Mass continuity (conservation of mass)
The rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Physically, this statement requires that mass is neither created nor destroyed in the control volume, and can be translated into the integral form of the continuity equation:
{\displaystyle {\frac {\partial }{\partial t}}\iiint _{V}\rho \,dV=-\oiint _{S}\rho \mathbf {u} \cdot d\mathbf {S} }
Above, ρ is the fluid density, u is the flow velocity vector, and t is time. The left-hand side of the above expression is the rate of increase of mass within the volume and contains a triple integral over the control volume, whereas the right-hand side contains an integration over the surface of the control volume of mass convected into the system. Mass flow into the system is accounted as positive, and since the normal vector to the surface is opposite to the sense of flow into the system the term is negated. The differential form of the continuity equation is, by the divergence theorem:
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}
Conservation of momentum
Newton's second law of motion applied to a control volume, is a statement that any change in momentum of the fluid within that control volume will be due to the net flow of momentum into the volume and the action of external forces acting on the fluid within the volume.
{\displaystyle {\frac {\partial }{\partial t}}\iiint _{V}\rho \mathbf {u} \,dV=-\oiint _{S}(\rho \mathbf {u} \cdot d\mathbf {S} )\mathbf {u} -\oiint _{S}p\,d\mathbf {S} +\iiint _{V}\rho \mathbf {f} _{\text{body}}\,dV+\mathbf {F} _{\text{surf}}}
In the above integral formulation of this equation, the term on the left is the net change of momentum within the volume. The first term on the right is the net rate at which momentum is convected into the volume. The second term on the right is the force due to pressure on the volume's surfaces. The first two terms on the right are negated since momentum entering the system is accounted as positive, and the normal is opposite the direction of the velocity u and pressure forces. The third term on the right is the net acceleration of the mass within the volume due to any body forces (here represented by fbody). Surface forces, such as viscous forces, are represented by Fsurf, the net force due to shear forces acting on the volume surface. The momentum balance can also be written for a moving control volume.
The following is the differential form of the momentum conservation equation. Here, the volume is reduced to an infinitesimally small point, and both surface and body forces are accounted for in one total force, F. For example, F may be expanded into an expression for the frictional and gravitational forces acting at a point in a flow.
{\displaystyle {\frac {D\mathbf {u} }{Dt}}=\mathbf {F} -{\frac {\nabla p}{\rho }}}
In aerodynamics, air is assumed to be a Newtonian fluid, which posits a linear relationship between the shear stress (due to internal friction forces) and the rate of strain of the fluid. The equation above is a vector equation in a three-dimensional flow, but it can be expressed as three scalar equations in three coordinate directions. The conservation of momentum equations for the compressible, viscous flow case are called the Navier–Stokes equations.
Conservation of energy
Although energy can be converted from one form to another, the total energy in a closed system remains constant.
{\displaystyle \rho {\frac {Dh}{Dt}}={\frac {Dp}{Dt}}+\nabla \cdot \left(k\nabla T\right)+\Phi }
Above, h is the specific enthalpy, k is the thermal conductivity of the fluid, T is temperature, and Φ is the viscous dissipation function. The viscous dissipation function governs the rate at which the mechanical energy of the flow is converted to heat. The second law of thermodynamics requires that the dissipation term is always positive: viscosity cannot create energy within the control volume. The expression on the left side is a material derivative.
== Classifications ==
=== Compressible versus incompressible flow ===
All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used.
Mathematically, incompressibility is expressed by saying that the density ρ of a fluid parcel does not change as it moves in the flow field, that is,
{\displaystyle {\frac {\mathrm {D} \rho }{\mathrm {D} t}}=0\,,}
where D/Dt is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density.
For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate.
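A minimal sketch of this rule of thumb for gases, assuming a calorically perfect gas so that the speed of sound is sqrt(γRT); the specific gas constant and the example speeds are assumed values for air.

```python
import math

def mach_number(u, t, gamma=1.4, r_specific=287.05):
    """Mach number of a gas flow; speed of sound a = sqrt(gamma*R*T)."""
    return u / math.sqrt(gamma * r_specific * t)

def roughly_incompressible(u, t, threshold=0.3):
    """Rough guide from the text: below M ~ 0.3 the density changes
    accompanying the flow are usually negligible."""
    return mach_number(u, t) < threshold

print(roughly_incompressible(50.0, 288.15))   # True  (M ~ 0.15)
print(roughly_incompressible(150.0, 288.15))  # False (M ~ 0.44)
```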
=== Newtonian versus non-Newtonian fluids ===
All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions T−1. Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate.
Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and sticky liquids such as latex, honey and lubricants.
=== Inviscid versus viscous versus Stokes flow ===
The dynamics of fluid parcels is described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects.
The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. A low Reynolds number (Re ≪ 1) indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow.
In contrast, high Reynolds numbers (Re ≫ 1) indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression.
This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as d'Alembert's paradox.
A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body. The two solutions can then be matched with each other, using the method of matched asymptotic expansions.
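A small sketch of the Reynolds number classification described above; the numeric cutoff separating "high" Reynolds numbers is a loose assumption for illustration only, since in practice the relevant threshold depends on the geometry.

```python
def reynolds_number(rho, u, length, mu):
    """Re = rho*U*L/mu, comparing inertial to viscous effects."""
    return rho * u * length / mu

def regime(re):
    if re < 1.0:
        return "Stokes (creeping) flow: viscous forces dominate"
    if re > 1.0e4:  # assumed cutoff, for illustration only
        return "high Re: often treated as inviscid away from boundaries"
    return "intermediate: viscous terms must be retained"

# Water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s) past a 10 cm body at 1 m/s:
re = reynolds_number(1000.0, 1.0, 0.1, 1.0e-3)
print(re, "->", regime(re))  # 100000.0 -> high Re ...
```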
=== Steady versus unsteady flow ===
A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time-dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady.
Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field U(x, t) is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow.
Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field.
=== Laminar versus turbulent flow ===
Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component.
It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations. Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows.
Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human (L > 3 m), moving faster than 20 m/s (72 km/h; 45 mph), is well beyond the limit of DNS simulation (Re = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES), a combination of LES and RANS turbulence modelling.
=== Other approximations ===
There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below.
The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small.
Lubrication theory and Hele–Shaw flow exploits the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected.
Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid.
The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small.
Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths.
In rotating systems, the quasi-geostrophic equations assume an almost perfect balance between pressure gradients and the Coriolis force. It is useful in the study of atmospheric dynamics.
== Multidisciplinary types ==
=== Flows according to Mach regimes ===
While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of M = 1 (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately.
=== Reactive versus non-reactive flows ===
Reactive flows are flows that are chemically reactive, which find applications in many areas, including combustion (IC engines), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics. In addition to conservation of mass, momentum and energy, conservation equations for individual species (for example, the mass fraction of methane in methane combustion) need to be derived, where the production/depletion rate of any species is obtained by simultaneously solving the equations of chemical kinetics.
=== Magnetohydrodynamics ===
Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism.
=== Relativistic fluid dynamics ===
Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light. This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime.
=== Fluctuating hydrodynamics ===
This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux.
== Terminology ==
The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods.
Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics.
=== Characteristic numbers ===
=== Terminology in incompressible fluid dynamics ===
The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field.
A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field.
=== Terminology in compressible fluid dynamics ===
In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion.
To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference.
Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy".
== Applications ==
== See also ==
List of publications in fluid dynamics
List of fluid dynamicists
== References ==
== Further reading ==
Acheson, D. J. (1990). Elementary Fluid Dynamics. Clarendon Press. ISBN 0-19-859679-0.
Batchelor, G. K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-66396-2.
Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 478 pages. ISBN 978-0-415-49271-3.
Clancy, L. J. (1975). Aerodynamics. London: Pitman Publishing Limited. ISBN 0-273-01120-0.
Lamb, Horace (1994). Hydrodynamics (6th ed.). Cambridge University Press. ISBN 0-521-45868-4. Originally published in 1879, the 6th extended edition appeared first in 1932.
Milne-Thompson, L. M. (1968). Theoretical Hydrodynamics (5th ed.). Macmillan. Originally published in 1938.
Shinbrot, M. (1973). Lectures on Fluid Mechanics. Gordon and Breach. ISBN 0-677-01710-3.
Nazarenko, Sergey (2014), Fluid Dynamics via Examples and Solutions, CRC Press (Taylor & Francis group), ISBN 978-1-43-988882-7
Encyclopedia: Fluid dynamics Scholarpedia
== External links ==
National Committee for Fluid Mechanics Films (NCFMF), containing films on several subjects in fluid dynamics (in RealMedia format)
Gallery of fluid motion, "a visual record of the aesthetic and science of contemporary fluid mechanics," from the American Physical Society
List of Fluid Dynamics books | Wikipedia/fluid_dynamics |
The capstan equation or belt friction equation, also known as Euler–Eytelwein formula (after Leonhard Euler and Johann Albert Eytelwein), relates the hold-force to the load-force if a flexible line is wound around a cylinder (a bollard, a winch or a capstan).
It also applies for fractions of one turn as occur with rope drives or band brakes.
Because of the interaction of frictional forces and tension, the tension on a line wrapped around a capstan may be different on either side of the capstan. A small holding force exerted on one side can carry a much larger loading force on the other side; this is the principle by which a capstan-type device operates.
A holding capstan is a ratchet device that can turn only in one direction; once a load is pulled into place in that direction, it can be held with a much smaller force. A powered capstan, also called a winch, rotates so that the applied tension is multiplied by the friction between rope and capstan. On a tall ship a holding capstan and a powered capstan are used in tandem so that a small force can be used to raise a heavy sail and then the rope can be easily removed from the powered capstan and tied off.
In rock climbing this effect allows a lighter person to hold (belay) a heavier person when top-roping, and also produces rope drag during lead climbing.
The formula is
{\displaystyle T_{\text{load}}=T_{\text{hold}}\,e^{\mu \varphi },}
where Tload is the applied tension on the line, Thold is the resulting force exerted at the other side of the capstan, μ is the coefficient of friction between the rope and capstan materials, and φ is the total angle swept by all turns of the rope, measured in radians (i.e., with one full turn the angle φ = 2π).
For dynamic applications such as belt drives or brakes the quantity of interest is the force difference between Tload and Thold. The formula for this is
{\displaystyle F=T_{\text{load}}-T_{\text{hold}}=\left(e^{\mu \varphi }-1\right)T_{\text{hold}}=\left(1-e^{-\mu \varphi }\right)T_{\text{load}}}
Several assumptions must be true for the equations to be valid:
The rope is on the verge of full sliding, i.e. Tload is the maximum load that one can hold. Smaller loads can be held as well, resulting in a smaller effective contact angle φ.
It is important that the line is not rigid; otherwise significant force would be lost in bending the line tightly around the cylinder, and the equation must be modified for this case. For instance, a Bowden cable is to some extent rigid and does not obey the principles of the capstan equation.
The line is non-elastic.
It can be observed that the force gain increases exponentially with the coefficient of friction, the number of turns around the cylinder, and the angle of contact. Note that the radius of the cylinder has no influence on the force gain.
The table below lists values of the factor e^μφ based on the number of turns and coefficient of friction μ.
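Such factors are straightforward to tabulate, as in the sketch below; the sample friction coefficients and turn counts are chosen for illustration.

```python
import math

def capstan_factor(mu, turns):
    """Force amplification e**(mu*phi) for a rope wrapped a whole
    number of turns around a capstan (phi = 2*pi*turns)."""
    return math.exp(mu * 2.0 * math.pi * turns)

for mu in (0.2, 0.4, 0.6):
    print(mu, [f"{capstan_factor(mu, n):.4g}" for n in (1, 2, 3, 5)])
# mu = 0.6 with 5 turns gives ~1.536e8, the "153,552,935" factor
# discussed below.
```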
From the table it is evident why one seldom sees a sheet (a rope to the loose side of a sail) wound more than three turns around a winch. The force gain would be extreme and counter-productive, since there is a risk of a riding turn, with the result that the sheet will foul, forming a knot and not running out when eased (by slacking grip on the tail (free end)).
It is both ancient and modern practice for anchor capstans and jib winches to be slightly flared out at the base, rather than cylindrical, to prevent the rope (anchor warp or sail sheet) from sliding down. The rope wound several times around the winch can slip upwards gradually, with little risk of a riding turn, provided it is tailed (loose end is pulled clear), by hand or a self-tailer.
For instance, the factor "153,552,935" (5 turns around a capstan with a coefficient of friction of 0.6) means, in theory, that a newborn baby would be capable of holding (not moving) the weight of two USS Nimitz supercarriers (97,000 tons each, but for the baby it would be only a little more than 1 kg). The large number of turns around the capstan combined with such a high friction coefficient mean that very little additional force is necessary to hold such heavy weight in place. The cables necessary to support this weight, as well as the capstan's ability to withstand the crushing force of those cables, are separate considerations.
== Derivation ==
The applied tension Tload(φ) is a function of the total angle φ subtended by the rope on the capstan. On the verge of slipping, the change in tension along the rope is the frictional force, which is by definition μ times the normal force R(φ). By simple geometry, the additional normal force δR(φ) = R(φ + δφ) − R(φ) produced when increasing the angle by a small angle δφ is well approximated by δR(φ) ≈ Tload(φ) sin(δφ) ≈ Tload(φ) δφ. Combining these and considering infinitesimally small δφ yields the differential equation
{\displaystyle {\frac {dT_{\text{load}}(\varphi )}{d\varphi }}=\mu T_{\text{load}}(\varphi ),\qquad T_{\text{load}}(0)=T_{\text{hold}},}
whose solution is
{\displaystyle T_{\text{load}}(\varphi )=T_{\text{hold}}e^{\mu \varphi }}
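A quick numerical cross-check of this solution, integrating the differential equation above with a crude forward Euler scheme; the step count and sample values are arbitrary.

```python
import math

def tension_numeric(t_hold, mu, phi_total, steps=100_000):
    """Integrate dT/dphi = mu*T from T(0) = t_hold by forward Euler."""
    t, dphi = t_hold, phi_total / steps
    for _ in range(steps):
        t += mu * t * dphi
    return t

mu, phi = 0.3, 2.0 * math.pi            # one full turn
print(tension_numeric(100.0, mu, phi))  # ~658.6, approaching the exact value
print(100.0 * math.exp(mu * phi))       # closed form: ~658.6
```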
== Generalizations ==
=== Generalization of the capstan equation for a V-belt ===
The belt friction equation for a v-belt is:
{\displaystyle T_{\text{load}}=T_{\text{hold}}e^{\mu \varphi /\sin(\alpha /2)}}
where α is the angle (in radians) between the two flat sides of the pulley that the V-belt presses against. A flat belt has an effective angle of α = π.
The material of a V-belt or multi-V serpentine belt tends to wedge into the mating groove in a pulley as the load increases, improving torque transmission.
For the same power transmission, a V-belt requires less tension than a flat belt, increasing bearing life.
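The effect of the groove can be quantified by comparing the two formulas; in the sketch below, the wrap angle and the 38-degree groove angle are typical illustrative values, not figures from this article.

```python
import math

def tension_ratio(mu, phi, groove_angle=None):
    """T_load/T_hold for a flat belt, or for a V-belt when the full
    groove angle (radians) is given: mu is divided by sin(alpha/2)."""
    mu_eff = mu if groove_angle is None else mu / math.sin(groove_angle / 2.0)
    return math.exp(mu_eff * phi)

phi = math.pi                                     # 180 degrees of wrap
print(tension_ratio(0.3, phi))                    # flat belt: ~2.57
print(tension_ratio(0.3, phi, math.radians(38)))  # V-belt:    ~18.1
```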
=== Generalization of the capstan equation for a rope lying on an arbitrary orthotropic surface ===
If a rope is lying in equilibrium under tangential forces on a rough orthotropic surface then all three following conditions are satisfied:
No separation – the normal reaction N is positive for all points of the rope curve:
{\displaystyle N=-k_{n}T>0}
where kn is the normal curvature of the rope curve.
The dragging coefficient of friction μg and angle α satisfy the following criterion for all points of the curve:
{\displaystyle -\mu _{g}<\tan \alpha <+\mu _{g}}
Limit values of the tangential forces: the forces at both ends of the rope, T and T0, satisfy the inequality
{\displaystyle T_{0}e^{-\int _{s}\omega \,ds}\leq T\leq T_{0}e^{\int _{s}\omega \,ds}}
with
{\displaystyle \omega =\mu _{\tau }{\sqrt {k_{n}^{2}-{\frac {k_{g}^{2}}{\mu _{g}^{2}}}}}=\mu _{\tau }k{\sqrt {\cos ^{2}\alpha -{\frac {\sin ^{2}\alpha }{\mu _{g}^{2}}}}},}
where kg is the geodesic curvature of the rope curve, k is the curvature of the rope curve, and μτ is the coefficient of friction in the tangential direction.
If ω = constant, then
{\displaystyle T_{0}e^{-\mu _{\tau }ks\,{\sqrt {\cos ^{2}\alpha -\sin ^{2}\alpha /\mu _{g}^{2}}}}\leq T\leq T_{0}e^{\mu _{\tau }ks\,{\sqrt {\cos ^{2}\alpha -\sin ^{2}\alpha /\mu _{g}^{2}}}}.}
This generalization has been obtained by Konyukhov.
== See also ==
Belt friction
Frictional contact mechanics
Torque amplifier, a device that exploits the capstan effect
== References ==
== Further reading ==
Arne Kihlberg, Kompendium i Mekanik för E1, del II, Göteborg 1980, 60–62.
== External links ==
Capstan equation calculator | Wikipedia/Capstan_equation |
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. A central distinction in contact mechanics is between stresses acting perpendicular to the contacting bodies' surfaces (known as normal stress) and frictional stresses acting tangentially between the surfaces (shear stress). Normal contact mechanics or frictionless contact mechanics focuses on normal stresses caused by applied normal forces and by the adhesion present on surfaces in close contact, even if they are clean and dry.
Frictional contact mechanics emphasizes the effect of friction forces.
Contact mechanics is part of mechanical engineering. The physical and mathematical formulation of the subject is built upon the mechanics of materials and continuum mechanics and focuses on computations involving elastic, viscoelastic, and plastic bodies in static or dynamic contact. Contact mechanics provides necessary information for the safe and energy efficient design of technical systems and for the study of tribology, contact stiffness, electrical contact resistance and indentation hardness. Principles of contact mechanics are implemented towards applications such as locomotive wheel-rail contact, coupling devices, braking systems, tires, bearings, combustion engines, mechanical linkages, gasket seals, metalworking, metal forming, ultrasonic welding, electrical contacts, and many others. Current challenges faced in the field may include stress analysis of contact and coupling members and the influence of lubrication and material design on friction and wear. Applications of contact mechanics further extend into the micro- and nanotechnological realm.
The original work in contact mechanics dates back to 1881 with the publication of the paper "On the contact of elastic solids" ("Über die Berührung fester elastischer Körper") by Heinrich Hertz. Hertz attempted to understand how the optical properties of multiple, stacked lenses might change with the force holding them together. Hertzian contact stress refers to the localized stresses that develop as two curved surfaces come in contact and deform slightly under the imposed loads. This amount of deformation is dependent on the modulus of elasticity of the material in contact. It gives the contact stress as a function of the normal contact force, the radii of curvature of both bodies and the modulus of elasticity of both bodies. Hertzian contact stress forms the foundation for the equations for load bearing capabilities and fatigue life in bearings, gears, and any other bodies where two surfaces are in contact.
== History ==
Classical contact mechanics is most notably associated with Heinrich Hertz. In 1882, Hertz solved the contact problem of two elastic bodies with curved surfaces. This still-relevant classical solution provides a foundation for modern problems in contact mechanics. For example, in mechanical engineering and tribology, Hertzian contact stress is a description of the stress within mating parts. The Hertzian contact stress usually refers to the stress close to the area of contact between two spheres of different radii.
It was not until nearly one hundred years later that Kenneth L. Johnson, Kevin Kendall, and Alan D. Roberts found a similar solution for the case of adhesive contact. This theory was rejected by Boris Derjaguin and co-workers who proposed a different theory of adhesion in the 1970s. The Derjaguin model came to be known as the Derjaguin–Muller–Toporov (DMT) model (after Derjaguin, M. V. Muller and Yu. P. Toporov), and the Johnson et al. model came to be known as the Johnson–Kendall–Roberts (JKR) model for adhesive elastic contact. This rejection proved to be instrumental in the development of the David Tabor and later Daniel Maugis parameters that quantify which contact model (of the JKR and DMT models) represents adhesive contact better for specific materials.
Further advancement in the field of contact mechanics in the mid-twentieth century may be attributed to names such as Frank Philip Bowden and Tabor. Bowden and Tabor were the first to emphasize the importance of surface roughness for bodies in contact. Through investigation of the surface roughness, the true contact area between friction partners is found to be less than the apparent contact area. Such understanding also drastically changed the direction of undertakings in tribology. The works of Bowden and Tabor yielded several theories in contact mechanics of rough surfaces.
The contributions of J. F. Archard (1957) must also be mentioned in discussion of pioneering works in this field. Archard concluded that, even for rough elastic surfaces, the contact area is approximately proportional to the normal force. Further important insights along these lines were provided by James A. Greenwood and J. B. P. Williamson (1966), A. W. Bush (1975), and Bo N. J. Persson (2002). The main findings of these works were that the true contact surface in rough materials is generally proportional to the normal force, while the parameters of individual micro-contacts (pressure and size of the micro-contact) are only weakly dependent upon the load.
== Classical solutions for non-adhesive elastic contact ==
The theory of contact between elastic bodies can be used to find contact areas and indentation depths for simple geometries. Some commonly used solutions are listed below. The theory used to compute these solutions is discussed later in the article. Solutions for a multitude of other technically relevant shapes, e.g. the truncated cone, the worn sphere, rough profiles, hollow cylinders, etc., can be found in the specialized literature.
=== Contact between a sphere and a half-space ===
An elastic sphere of radius R indents an elastic half-space to a total deformation d, causing a contact area of radius
{\displaystyle a={\sqrt {Rd}}}
The applied force F is related to the displacement d by
{\displaystyle F={\frac {4}{3}}E^{*}R^{\frac {1}{2}}d^{\frac {3}{2}}}
where
{\displaystyle {\frac {1}{E^{*}}}={\frac {1-\nu _{1}^{2}}{E_{1}}}+{\frac {1-\nu _{2}^{2}}{E_{2}}}}
and E_1, E_2 are the elastic moduli and ν_1, ν_2 the Poisson's ratios associated with each body.
The distribution of normal pressure in the contact area as a function of distance r from the center of the circle is
{\displaystyle p(r)=p_{0}\left(1-{\frac {r^{2}}{a^{2}}}\right)^{\frac {1}{2}}}
where p_0 is the maximum contact pressure, given by
{\displaystyle p_{0}={\frac {3F}{2\pi a^{2}}}={\frac {1}{\pi }}\left({\frac {6F{E^{*}}^{2}}{R^{2}}}\right)^{\frac {1}{3}}}
The radius of the contact circle is related to the applied load F by the equation
{\displaystyle a^{3}={\cfrac {3FR}{4E^{*}}}}
The total deformation d is related to the maximum contact pressure by
{\displaystyle d={\frac {a^{2}}{R}}=\left({\frac {9F^{2}}{16{E^{*}}^{2}R}}\right)^{\frac {1}{3}}}
The maximum shear stress occurs in the interior at z ≈ 0.49a for ν = 0.33.
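These closed-form relations are straightforward to evaluate numerically. The following Python sketch (with illustrative material values that are assumptions, not taken from the source) computes the contact radius, indentation and peak pressure for a sphere pressed onto a flat:
<syntaxhighlight lang="python">
import numpy as np

def hertz_sphere_on_flat(F, R, E1, nu1, E2, nu2):
    """Hertzian contact of an elastic sphere (radius R) on an elastic half-space under normal load F."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # effective modulus
    a = (3 * F * R / (4 * E_star)) ** (1 / 3)                # contact radius
    d = a**2 / R                                             # total indentation
    p0 = 3 * F / (2 * np.pi * a**2)                          # maximum contact pressure
    return a, d, p0

# Assumed example: a 10 mm diameter steel ball pressed on a steel plate with 100 N
a, d, p0 = hertz_sphere_on_flat(F=100.0, R=5e-3, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)
print(f"a = {a*1e6:.1f} um, d = {d*1e6:.2f} um, p0 = {p0/1e9:.2f} GPa")
</syntaxhighlight>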
=== Contact between two spheres ===
For contact between two spheres of radii R_1 and R_2, the area of contact is a circle of radius a. The equations are the same as for a sphere in contact with a half-space, except that the effective radius R is defined as
{\displaystyle {\frac {1}{R}}={\frac {1}{R_{1}}}+{\frac {1}{R_{2}}}}
=== Contact between two crossed cylinders of equal radius ===
This is equivalent to contact between a sphere of radius R and a plane.
=== Contact between a rigid cylinder with flat end and an elastic half-space ===
If a rigid cylinder with a flat end is pressed into an elastic half-space, it creates a pressure distribution described by
{\displaystyle p(r)=p_{0}\left(1-{\frac {r^{2}}{R^{2}}}\right)^{-{\frac {1}{2}}}}
where R is the radius of the cylinder and
{\displaystyle p_{0}={\frac {1}{\pi }}E^{*}{\frac {d}{R}}}
The relationship between the indentation depth and the normal force is given by
{\displaystyle F=2RE^{*}d}
=== Contact between a rigid conical indenter and an elastic half-space ===
In the case of indentation of an elastic half-space of Young's modulus E using a rigid conical indenter, the depth of the contact region ε and the contact radius a are related by
{\displaystyle \epsilon =a\tan(\theta )}
with θ defined as the angle between the plane and the side surface of the cone. The total indentation depth d is given by:
{\displaystyle d={\frac {\pi }{2}}\epsilon }
The total force is
{\displaystyle F={\frac {\pi E}{2\left(1-\nu ^{2}\right)}}a^{2}\tan(\theta )={\frac {2E}{\pi \left(1-\nu ^{2}\right)}}{\frac {d^{2}}{\tan(\theta )}}}
The pressure distribution is given by
{\displaystyle p\left(r\right)={\frac {Ed}{\pi a\left(1-\nu ^{2}\right)}}\ln \left({\frac {a}{r}}+{\sqrt {\left({\frac {a}{r}}\right)^{2}-1}}\right)={\frac {Ed}{\pi a\left(1-\nu ^{2}\right)}}\cosh ^{-1}\left({\frac {a}{r}}\right)}
The stress has a logarithmic singularity at the tip of the cone.
=== Contact between two cylinders with parallel axes ===
In contact between two cylinders with parallel axes, the force is linearly proportional to the length of cylinders L and to the indentation depth d:
{\displaystyle F\approx {\frac {\pi }{4}}E^{*}Ld}
The radii of curvature are entirely absent from this relationship. The contact radius is described through the usual relationship
{\displaystyle a={\sqrt {Rd}}}
with
{\displaystyle {\frac {1}{R}}={\frac {1}{R_{1}}}+{\frac {1}{R_{2}}}}
as in contact between two spheres. The maximum pressure is equal to
{\displaystyle p_{0}=\left({\frac {E^{*}F}{\pi LR}}\right)^{\frac {1}{2}}}
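A minimal Python sketch of these line-contact relations, assuming illustrative roller dimensions and an effective modulus typical of steel (the numerical values are assumptions, not from the source):
<syntaxhighlight lang="python">
import numpy as np

def hertz_parallel_cylinders(F, L, R1, R2, E_star):
    """Line contact of two parallel cylinders: indentation, contact half-width and peak pressure."""
    R = 1.0 / (1.0 / R1 + 1.0 / R2)              # effective radius
    d = 4.0 * F / (np.pi * E_star * L)           # from F ≈ (π/4) E* L d
    a = np.sqrt(R * d)                           # half-width of the contact strip
    p0 = np.sqrt(E_star * F / (np.pi * L * R))   # maximum pressure
    return d, a, p0

# Assumed example: two 20 mm diameter steel rollers, 50 mm long, pressed together with 1 kN
d, a, p0 = hertz_parallel_cylinders(F=1e3, L=50e-3, R1=10e-3, R2=10e-3, E_star=115e9)
print(f"d = {d*1e6:.2f} um, a = {a*1e6:.1f} um, p0 = {p0/1e6:.0f} MPa")
</syntaxhighlight>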
=== Bearing contact ===
The contact in the case of bearings is often a contact between a convex surface (male cylinder or sphere) and a concave surface (female cylinder or sphere: bore or hemispherical cup).
=== Method of dimensionality reduction ===
Some contact problems can be solved with the method of dimensionality reduction (MDR). In this method, the initial three-dimensional system is replaced with the contact of a body with a linear elastic or viscoelastic foundation (see fig.). The properties of one-dimensional systems coincide exactly with those of the original three-dimensional system if the form of the bodies is modified and the elements of the foundation are defined according to the rules of the MDR. MDR is based on the solution to axisymmetric contact problems first obtained by Ludwig Föppl (1941) and Gerhard Schubert (1942).
However, for exact analytical results, it is required that the contact problem is axisymmetric and the contacts are compact.
== Hertzian theory of non-adhesive elastic contact ==
The classical theory of contact focused primarily on non-adhesive contact where no tension force is allowed to occur within the contact area, i.e., contacting bodies can be separated without adhesion forces. Several analytical and numerical approaches have been used to solve contact problems that satisfy the no-adhesion condition. Complex forces and moments are transmitted between the bodies where they touch, so problems in contact mechanics can become quite sophisticated. In addition, the contact stresses are usually a nonlinear function of the deformation. To simplify the solution procedure, a frame of reference is usually defined in which the objects (possibly in motion relative to one another) are static. They interact through surface tractions (or pressures/stresses) at their interface.
As an example, consider two objects which meet at some surface S in the (x, y)-plane, with the z-axis assumed normal to the surface. One of the bodies will experience a normally-directed pressure distribution
{\displaystyle p_{z}=p(x,y)=q_{z}(x,y)}
and in-plane surface traction distributions
{\displaystyle q_{x}=q_{x}(x,y)}
and
{\displaystyle q_{y}=q_{y}(x,y)}
over the region S. In terms of a Newtonian force balance, the forces:
{\displaystyle P_{z}=\int _{S}p(x,y)~\mathrm {d} A~;~~Q_{x}=\int _{S}q_{x}(x,y)~\mathrm {d} A~;~~Q_{y}=\int _{S}q_{y}(x,y)~\mathrm {d} A}
must be equal and opposite to the forces established in the other body. The moments corresponding to these forces:
{\displaystyle M_{x}=\int _{S}y~q_{z}(x,y)~\mathrm {d} A~;~~M_{y}=\int _{S}-x~q_{z}(x,y)~\mathrm {d} A~;~~M_{z}=\int _{S}[x~q_{y}(x,y)-y~q_{x}(x,y)]~\mathrm {d} A}
are also required to cancel between bodies so that they are kinematically immobile.
=== Assumptions in Hertzian theory ===
The following assumptions are made in determining the solutions of Hertzian contact problems:
The strains are small and within the elastic limit.
The surfaces are continuous and non-conforming (implying that the area of contact is much smaller than the characteristic dimensions of the contacting bodies).
Each body can be considered an elastic half-space.
The surfaces are frictionless.
Additional complications arise when some or all of these assumptions are violated; such contact problems are usually called non-Hertzian.
=== Analytical solution techniques ===
Analytical solution methods for non-adhesive contact problem can be classified into two types based on the geometry of the area of contact. A conforming contact is one in which the two bodies touch at multiple points before any deformation takes place (i.e., they just "fit together"). A non-conforming contact is one in which the shapes of the bodies are dissimilar enough that, under zero load, they only touch at a point (or possibly along a line). In the non-conforming case, the contact area is small compared to the sizes of the objects and the stresses are highly concentrated in this area. Such a contact is called concentrated, otherwise it is called diversified.
A common approach in linear elasticity is to superpose a number of solutions each of which corresponds to a point load acting over the area of contact. For example, in the case of loading of a half-plane, the Flamant solution is often used as a starting point and then generalized to various shapes of the area of contact. The force and moment balances between the two bodies in contact act as additional constraints to the solution.
==== Point contact on a (2D) half-plane ====
A starting point for solving contact problems is to understand the effect of a "point-load" applied to an isotropic, homogeneous, and linear elastic half-plane, shown in the figure to the right. The problem may be either plane stress or plane strain. This is a boundary value problem of linear elasticity subject to the traction boundary conditions:
{\displaystyle \sigma _{xz}(x,0)=0~;~~\sigma _{z}(x,z)=-P\delta (x,z)}
where
{\displaystyle \delta (x,z)}
is the Dirac delta function. The boundary conditions state that there are no shear stresses on the surface and a singular normal force P is applied at (0, 0). Applying these conditions to the governing equations of elasticity produces the result
{\displaystyle {\begin{aligned}\sigma _{xx}&=-{\frac {2P}{\pi }}{\frac {x^{2}z}{\left(x^{2}+z^{2}\right)^{2}}}\\\sigma _{zz}&=-{\frac {2P}{\pi }}{\frac {z^{3}}{\left(x^{2}+z^{2}\right)^{2}}}\\\sigma _{xz}&=-{\frac {2P}{\pi }}{\frac {xz^{2}}{\left(x^{2}+z^{2}\right)^{2}}}\end{aligned}}}
for some point (x, z) in the half-plane. The circle shown in the figure indicates a surface on which the maximum shear stress is constant. From this stress field, the strain components and thus the displacements of all material points may be determined.
==== Line contact on a (2D) half-plane ====
===== Normal loading over a region =====
Suppose, rather than a point load P, a distributed load p(x) is applied to the surface over the range a < x < b. The principle of linear superposition can be applied to determine the resulting stress field as the solution to the integral equations:
{\displaystyle {\begin{aligned}\sigma _{xx}&=-{\frac {2z}{\pi }}\int _{a}^{b}{\frac {p\left(x'\right)\left(x-x'\right)^{2}\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}~;~~\sigma _{zz}=-{\frac {2z^{3}}{\pi }}\int _{a}^{b}{\frac {p\left(x'\right)\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}\\[3pt]\sigma _{xz}&=-{\frac {2z^{2}}{\pi }}\int _{a}^{b}{\frac {p\left(x'\right)\left(x-x'\right)\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}\end{aligned}}}
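These superposition integrals are easy to evaluate numerically for an arbitrary pressure profile. The sketch below is a rough illustration using numerical quadrature, with a hypothetical uniform pressure over -1 < x' < 1 as the assumed load (all quantities dimensionless):
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

def sigma_zz(x, z, p, a, b):
    """sigma_zz at field point (x, z) due to a distributed normal load p(x') acting on a < x' < b."""
    integrand = lambda xp: p(xp) / ((x - xp) ** 2 + z**2) ** 2
    val, _ = quad(integrand, a, b)
    return -2.0 * z**3 / np.pi * val

# Assumed example: uniform unit pressure acting over -1 < x' < 1
p_uniform = lambda xp: 1.0
print(sigma_zz(x=0.0, z=0.5, p=p_uniform, a=-1.0, b=1.0))
</syntaxhighlight>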
===== Shear loading over a region =====
The same principle applies for loading on the surface in the plane of the surface. These kinds of tractions would tend to arise as a result of friction. The solution is similar to the above (for both singular loads Q and distributed loads q(x)) but altered slightly:
{\displaystyle {\begin{aligned}\sigma _{xx}&=-{\frac {2}{\pi }}\int _{a}^{b}{\frac {q\left(x'\right)\left(x-x'\right)^{3}\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}~;~~\sigma _{zz}=-{\frac {2z^{2}}{\pi }}\int _{a}^{b}{\frac {q\left(x'\right)\left(x-x'\right)\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}\\[3pt]\sigma _{xz}&=-{\frac {2z}{\pi }}\int _{a}^{b}{\frac {q\left(x'\right)\left(x-x'\right)^{2}\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}\end{aligned}}}
These results may themselves be superposed onto those given above for normal loading to deal with more complex loads.
==== Point contact on a (3D) half-space ====
Analogously to the Flamant solution for the 2D half-plane, fundamental solutions are known for the linearly elastic 3D half-space as well. These were found by Boussinesq for a concentrated normal load and by Cerruti for a tangential load. See the section on this in Linear elasticity.
=== Numerical solution techniques ===
Distinctions between conforming and non-conforming contact do not have to be made when numerical solution schemes are employed to solve contact problems. These methods do not rely on further assumptions within the solution process, since they are based solely on the general formulation of the underlying equations. Besides the standard equations describing the deformation and motion of bodies, two additional inequalities can be formulated. The first simply restricts the motion and deformation of the bodies by the assumption that no penetration can occur. Hence the gap h between two bodies can only be positive or zero,
{\displaystyle h\geq 0}
where
{\displaystyle h=0}
denotes contact. The second assumption in contact mechanics is related to the fact that no tension force is allowed to occur within the contact area (contacting bodies can be lifted up without adhesion forces). This leads to an inequality which the stresses have to obey at the contact interface. It is formulated for the normal stress
{\displaystyle \sigma _{n}=\mathbf {t} \cdot \mathbf {n} }.
At locations where there is contact between the surfaces the gap is zero, i.e.
{\displaystyle h=0},
and there the normal stress is different from zero; indeed,
{\displaystyle \sigma _{n}<0}.
At locations where the surfaces are not in contact the normal stress is identical to zero,
{\displaystyle \sigma _{n}=0},
while the gap is positive, i.e.,
{\displaystyle h>0}.
This type of complementarity formulation can be expressed in the so-called Kuhn–Tucker form, viz.
{\displaystyle h\geq 0\,,\quad \sigma _{n}\leq 0\,,\quad \sigma _{n}\,h=0\,.}
These conditions are valid in a general way. The mathematical formulation of the gap depends upon the kinematics of the underlying theory of the solid (e.g., linear or nonlinear solid in two or three dimensions, beam or shell model). By restating the normal stress in terms of the contact pressure p, i.e.,
{\displaystyle p=-\sigma _{n}}
the Kuhn–Tucker problem can be restated in standard complementarity form, i.e.
{\displaystyle h\geq 0\,,\quad p\geq 0\,,\quad p\,h=0\,.}
In the linear elastic case the gap can be formulated as
{\displaystyle {h}=h_{0}+{g}+u,}
where h_0 is the rigid body separation, g is the geometry/topography of the contact (cylinder and roughness) and u is the elastic deformation/deflection. If the contacting bodies are approximated as linear elastic half-spaces, the Boussinesq–Cerruti integral equation solution can be applied to express the deformation u as a function of the contact pressure p; i.e.,
{\displaystyle u=\int _{-\infty }^{\infty }K(x-s)p(s)ds,}
where
{\displaystyle K(x-s)={\frac {2}{\pi E^{*}}}\ln |x-s|}
for line loading of an elastic half-space and
{\displaystyle K(x-s)={\frac {1}{\pi E^{*}}}{\frac {1}{\sqrt {\left(x_{1}-s_{1}\right)^{2}+\left(x_{2}-s_{2}\right)^{2}}}}}
for point loading of an elastic half-space.
After discretization the linear elastic contact mechanics problem can be stated in standard linear complementarity problem (LCP) form:
{\displaystyle {\begin{aligned}\mathbf {h} &=\mathbf {h} _{0}+\mathbf {g} +\mathbf {Cp} ,\\\mathbf {h} \cdot \mathbf {p} &=0,\,\,\,\mathbf {p} \geq 0,\,\,\,\mathbf {h} \geq 0,\\\end{aligned}}}
where C is a matrix whose elements are so-called influence coefficients relating the contact pressure and the deformation. The strict LCP formulation of the contact mechanics problem presented above allows for the direct application of well-established numerical solution techniques such as Lemke's pivoting algorithm. The Lemke algorithm has the advantage that it finds the numerically exact solution within a finite number of iterations. The MATLAB implementation presented by Almqvist et al. is one example that can be employed to solve the problem numerically. In addition, an example code for an LCP solution of a 2D linear elastic contact mechanics problem has also been made public at MATLAB file exchange by Almqvist et al.
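The discretized complementarity problem above can also be attacked with simpler methods. Lemke's algorithm is not part of the common Python scientific libraries, so the sketch below uses a projected Gauss–Seidel iteration as a stand-in; the grid, compliance matrix and rigid-body terms are illustrative assumptions, not the reference implementation by Almqvist et al.:
<syntaxhighlight lang="python">
import numpy as np

def solve_contact_lcp(C, q, iters=2000, tol=1e-10):
    """Projected Gauss-Seidel for the LCP: h = q + C p, with p >= 0, h >= 0, p.h = 0."""
    n = len(q)
    p = np.zeros(n)
    for _ in range(iters):
        p_old = p.copy()
        for i in range(n):
            h_i = q[i] + C[i] @ p               # current gap at node i
            p[i] = max(0.0, p[i] - h_i / C[i, i])
        if np.max(np.abs(p - p_old)) < tol:
            break
    return p, q + C @ p                          # nodal pressures and gaps

# Illustrative setup: a rigid parabolic indenter over a 1D grid of surface nodes
n = 64
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
g = 0.5 * x**2                                   # parabolic gap geometry
h0 = -0.05                                       # imposed rigid-body approach (assumed)
# A made-up symmetric positive-definite compliance matrix standing in for C:
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1) * dx
p, h = solve_contact_lcp(C, h0 + g)
print("contact nodes:", np.sum(p > 0), " max pressure:", p.max())
</syntaxhighlight>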
== Contact between rough surfaces ==
When two bodies with rough surfaces are pressed against each other, the true contact area A formed between the two bodies is much smaller than the apparent or nominal contact area A_0. The mechanics of contacting rough surfaces are discussed in terms of normal contact mechanics and static frictional interactions. Natural and engineering surfaces typically exhibit roughness features, known as asperities, across a broad range of length scales down to the molecular level, with surface structures exhibiting self-affinity, also known as surface fractality. It is recognized that the self-affine structure of surfaces is the origin of the linear scaling of true contact area with applied pressure. Assuming a model of shearing welded contacts in tribological interactions, this ubiquitously observed linearity between contact area and pressure can also be considered the origin of the linearity of the relationship between static friction and applied normal force.
In contact between a "random rough" surface and an elastic half-space, the true contact area is related to the normal force
F
{\displaystyle F}
by
A
=
κ
E
∗
h
′
F
{\displaystyle A={\frac {\kappa }{E^{*}h'}}F}
with
h
′
{\displaystyle h'}
equal to the root mean square (also known as the quadratic mean) of the surface slope and
κ
≈
2
{\displaystyle \kappa \approx 2}
. The median pressure in the true contact surface
p
a
v
=
F
A
≈
1
2
E
∗
h
′
{\displaystyle p_{\mathrm {av} }={\frac {F}{A}}\approx {\frac {1}{2}}E^{*}h'}
can be reasonably estimated as half of the effective elastic modulus
E
∗
{\displaystyle E^{*}}
multiplied with the root mean square of the surface slope
h
′
{\displaystyle h'}
.
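A minimal sketch of these two estimates, assuming an illustrative load, effective modulus and rms slope (the numbers are assumptions for demonstration only):
<syntaxhighlight lang="python">
def rough_contact_area(F, E_star, h_rms_slope, kappa=2.0):
    """True contact area A = kappa*F/(E* h') and the resulting mean contact pressure."""
    A = kappa * F / (E_star * h_rms_slope)
    p_av = F / A                      # equals E*h'/kappa, i.e. about 0.5*E*h'
    return A, p_av

# Assumed example: 100 N on a steel-like contact (E* = 115 GPa) with rms slope 0.05
A, p_av = rough_contact_area(F=100.0, E_star=115e9, h_rms_slope=0.05)
print(f"true contact area = {A*1e12:.0f} um^2, mean pressure = {p_av/1e9:.2f} GPa")
</syntaxhighlight>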
=== An overview of the GW model ===
Greenwood and Williamson in 1966 (GW) proposed a theory of elastic contact mechanics of rough surfaces which is today the foundation of many theories in tribology (friction, adhesion, thermal and electrical conductance, wear, etc.). They considered the contact between a smooth rigid plane and a nominally flat deformable rough surface covered with round-tipped asperities of the same radius R. Their theory assumes that the deformation of each asperity is independent of that of its neighbours and is described by the Hertz model. The heights of asperities have a random distribution. The probability that an asperity height lies between z and z + dz is φ(z)dz. The authors calculated the number of contact spots n, the total contact area A_r and the total load P in the general case. They gave those formulas in two forms: the basic form and one using standardized variables. If one assumes that N asperities cover a rough surface, then the expected number of contacts is
{\displaystyle n=N\int _{d}^{\infty }\phi (z)dz}
The expected total area of contact can be calculated from the formula
{\displaystyle A_{a}=N\pi R\int _{d}^{\infty }(z-d)\phi (z)dz}
and the expected total force is given by
{\displaystyle P={\frac {4}{3}}NE_{r}{\sqrt {R}}\int _{d}^{\infty }(z-d)^{\frac {3}{2}}\phi (z)dz}
where:
R, radius of curvature of the microasperity,
z, height of the microasperity measured from the profile line,
d, separation between the smooth plane and the profile line of the rough surface (only asperities with z > d are in contact),
{\displaystyle E_{r}=\left({\frac {1-\nu _{1}^{2}}{E_{1}}}+{\frac {1-\nu _{2}^{2}}{E_{2}}}\right)^{-1}}, composite Young's modulus of elasticity,
E_i, modulus of elasticity of surface i,
ν_i, Poisson's ratio of surface i.
Greenwood and Williamson introduced the standardized separation
{\displaystyle h=d/\sigma }
and the standardized height distribution
{\displaystyle \phi ^{*}(s)}
whose standard deviation is equal to one. The formulas in standardized form are:
{\displaystyle {\begin{aligned}F_{n}(h)&=\int _{h}^{\infty }(s-h)^{n}\phi ^{*}(s)ds\\n&=\eta A_{n}F_{0}(h)\\A_{a}&=\pi \eta AR\sigma F_{1}(h)\\P&={\frac {4}{3}}\eta AE_{r}{\sqrt {R}}\sigma ^{\frac {3}{2}}F_{\frac {3}{2}}(h)\end{aligned}}}
where:
d is the separation,
A is the nominal contact area,
η is the surface density of asperities,
E* is the effective Young's modulus.
A_a and P can be determined when the F_n(h) terms are calculated for the given surfaces using the convolution of the surface roughness φ*(s). Several studies have followed the suggested curve fits for F_n(h), assuming a Gaussian surface height distribution, with curve fits presented by Arcoumanis et al. and Jedynak among others. It has been repeatedly observed that engineering surfaces do not demonstrate Gaussian surface height distributions, e.g. Peklenik. Leighton et al. presented fits for crosshatched IC engine cylinder liner surfaces, together with a process for determining the F_n(h) terms for any measured surface. Leighton et al. demonstrated that Gaussian fit data is not accurate for modelling any engineered surfaces and went on to demonstrate that early running of the surfaces results in a gradual transition which significantly changes the surface topography, load carrying capacity and friction.
Recently, highly accurate approximants to A_r and P were published by Jedynak. They are given by the following rational formulas, which approximate the integrals F_n(h). They are calculated for a Gaussian distribution of asperity heights, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis.
{\displaystyle F_{n}(h)={\frac {a_{0}+a_{1}h+a_{2}h^{2}+a_{3}h^{3}}{1+b_{1}h+b_{2}h^{2}+b_{3}h^{3}+b_{4}h^{4}+b_{5}h^{5}+b_{6}h^{6}}}\exp \left(-{\frac {h^{2}}{2}}\right)}
For F_1(h) the coefficients are
{\displaystyle {\begin{aligned}[][a_{0},a_{1},a_{2},a_{3}]&=[0.398942280401,0.159773702775,0.0389687688311,0.00364356495452]\\[][b_{1},b_{2},b_{3},b_{4},b_{5},b_{6}]&=\left[1.653807476138,1.170419428529,0.448892964428,0.0951971709160,0.00931642803836,-6.383774657279\times 10^{-6}\right]\end{aligned}}}
The maximum relative error is
{\displaystyle 9.93\times 10^{-8}\%}.
For F_{3/2}(h) the coefficients are
{\displaystyle {\begin{aligned}[][a_{0},a_{1},a_{2},a_{3}]&=[0.430019993662,0.101979509447,0.0229040629580,0.000688602924]\\[][b_{1},b_{2},b_{3},b_{4},b_{5},b_{6}]&=[1.671117125984,1.199586555505,0.46936532151,0.102632881122,0.010686348714,0.0000517200271]\end{aligned}}}
The maximum relative error is
{\displaystyle 1.91\times 10^{-7}\%}.
The paper also contains the exact expressions for F_n(h):
{\displaystyle {\begin{aligned}F_{1}(h)&={\frac {1}{\sqrt {2\pi }}}\exp \left(-{\frac {1}{2}}h^{2}\right)-{\frac {1}{2}}h\,\operatorname {erfc} \left({\frac {h}{\sqrt {2}}}\right)\\F_{\frac {3}{2}}(h)&={\frac {1}{4{\sqrt {\pi }}}}\exp \left(-{\frac {h^{2}}{4}}\right){\sqrt {h}}\left(\left(h^{2}+1\right)K_{\frac {1}{4}}\left({\frac {h^{2}}{4}}\right)-h^{2}K_{\frac {3}{4}}\left({\frac {h^{2}}{4}}\right)\right)\end{aligned}}}
where erfc(z) denotes the complementary error function and K_ν(z) is the modified Bessel function of the second kind.
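As a quick sanity check on these expressions, the short Python sketch below evaluates F_1(h) and F_{3/2}(h) for a standard Gaussian height distribution both by direct quadrature of the defining integral and from the closed forms quoted above (this is only an illustrative cross-check, not code from the cited paper; h > 0 is assumed):
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc, kv

def F_quad(n, h):
    """F_n(h) = integral_h^inf (s-h)^n phi*(s) ds for a standard Gaussian phi*."""
    phi = lambda s: np.exp(-s**2 / 2) / np.sqrt(2 * np.pi)
    val, _ = quad(lambda s: (s - h) ** n * phi(s), h, np.inf)
    return val

def F1_exact(h):
    # Closed form quoted in the text, in terms of the complementary error function
    return np.exp(-h**2 / 2) / np.sqrt(2 * np.pi) - 0.5 * h * erfc(h / np.sqrt(2))

def F32_exact(h):
    # Closed form quoted in the text, in terms of modified Bessel functions K_{1/4}, K_{3/4}
    x = h**2 / 4
    return (np.exp(-x) * np.sqrt(h) / (4 * np.sqrt(np.pi))
            * ((h**2 + 1) * kv(0.25, x) - h**2 * kv(0.75, x)))

h = 1.0
print(F_quad(1, h), F1_exact(h))       # the two values should agree
print(F_quad(1.5, h), F32_exact(h))    # the two values should agree
</syntaxhighlight>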
For the situation where the asperities on the two surfaces have a Gaussian height distribution and the peaks can be assumed to be spherical, the average contact pressure is sufficient to cause yield when
{\displaystyle p_{\text{av}}=1.1\sigma _{y}\approx 0.39\sigma _{0}}
where σ_y is the uniaxial yield stress and σ_0 is the indentation hardness. Greenwood and Williamson defined a dimensionless parameter Ψ, called the plasticity index, that could be used to determine whether contact would be elastic or plastic.
The Greenwood-Williamson model requires knowledge of two statistically dependent quantities: the standard deviation of the surface roughness and the curvature of the asperity peaks. An alternative definition of the plasticity index has been given by Mikic. Yield occurs when the pressure is greater than the uniaxial yield stress. Since the yield stress is proportional to the indentation hardness σ_0, Mikic defined the plasticity index for elastic-plastic contact to be
{\displaystyle \Psi ={\frac {E^{*}h'}{\sigma _{0}}}>{\frac {2}{3}}~.}
In this definition Ψ represents the micro-roughness in a state of complete plasticity, and only one statistical quantity, the rms slope, is needed, which can be calculated from surface measurements. For
{\displaystyle \Psi <{\frac {2}{3}}}
the surface behaves elastically during contact.
In both the Greenwood-Williamson and Mikic models the load is assumed to be proportional to the deformed area. Hence, whether the system behaves plastically or elastically is independent of the applied normal force.
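A minimal sketch of the Mikic criterion, assuming illustrative material values (effective modulus, rms slope and hardness are assumptions, not from the source):
<syntaxhighlight lang="python">
def mikic_plasticity_index(E_star, rms_slope, hardness):
    """Mikic plasticity index Psi = E* h' / sigma_0; Psi > 2/3 implies predominantly plastic contact."""
    return E_star * rms_slope / hardness

psi = mikic_plasticity_index(E_star=115e9, rms_slope=0.05, hardness=2.0e9)
print(f"Psi = {psi:.2f} ->", "plastic" if psi > 2 / 3 else "elastic", "contact regime")
</syntaxhighlight>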
=== An overview of the GT model ===
The model proposed by John A. Greenwood and John H. Tripp (GT) extended the GW model to contact between two rough surfaces. The GT model is widely used in the field of elastohydrodynamic analysis.
The most frequently cited equations given by the GT model are for the asperity contact area
{\displaystyle A_{a}=\pi ^{2}(\eta \beta \sigma )^{2}AF_{2}(\lambda ),}
and the load carried by asperities
{\displaystyle P={\frac {8{\sqrt {2}}}{15}}\pi (\eta \beta \sigma )^{2}{\sqrt {\frac {\sigma }{\beta }}}E'AF_{\frac {5}{2}}(\lambda ),}
where:
ηβσ, roughness parameter,
A, nominal contact area,
λ, Stribeck oil film parameter, first defined by Stribeck as λ = h/σ,
E', effective elastic modulus,
F_2(λ), F_{5/2}(λ), statistical functions introduced to match the assumed Gaussian distribution of asperities.
Matthew Leighton et al. presented fits for crosshatched IC engine cylinder liner surfaces, together with a process for determining the F_n(h) terms for any measured surface. Leighton et al. demonstrated that Gaussian fit data is not accurate for modelling any engineered surfaces and went on to demonstrate that early running of the surfaces results in a gradual transition which significantly changes the surface topography, load carrying capacity and friction.
The exact solutions for A_a and P were first presented by Jedynak. They are expressed in terms of F_n as follows. They are calculated for a Gaussian distribution of asperity heights, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis.
{\displaystyle {\begin{aligned}F_{2}&={\frac {1}{2}}\left(h^{2}+1\right)\operatorname {erfc} \left({\frac {h}{\sqrt {2}}}\right)-{\frac {h}{\sqrt {2\pi }}}\exp \left(-{\frac {h^{2}}{2}}\right)\\F_{\frac {5}{2}}&={\frac {1}{8{\sqrt {\pi }}}}\exp \left(-{\frac {h^{2}}{4}}\right)h^{\frac {3}{2}}\left(\left(2h^{2}+3\right)K_{\frac {3}{4}}\left({\frac {h^{2}}{4}}\right)-\left(2h^{2}+5\right)K_{\frac {1}{4}}\left({\frac {h^{2}}{4}}\right)\right)\end{aligned}}}
where erfc(z) denotes the complementary error function and K_ν(z) is the modified Bessel function of the second kind.
The same paper also contains a comprehensive review of existing approximants to F_{5/2}. New proposals give the most accurate approximants to F_{5/2} and F_2 reported in the literature. They are given by the following rational formulas, which are very accurate approximants to the integrals F_n(h), calculated for the Gaussian distribution of asperities:
{\displaystyle F_{n}(h)={\frac {a_{0}+a_{1}h+a_{2}h^{2}+a_{3}h^{3}}{1+b_{1}h+b_{2}h^{2}+b_{3}h^{3}+b_{4}h^{4}+b_{5}h^{5}+b_{6}h^{6}}}\exp \left(-{\frac {h^{2}}{2}}\right)}
For F_2(h) the coefficients are
{\displaystyle {\begin{aligned}[][a_{0},a_{1},a_{2},a_{3}]&=[0.5,0.182536384941,0.039812283118,0.003684879001]\\[][b_{1},b_{2},b_{3},b_{4},b_{5},b_{6}]&=[1.960841785003,1.708677456715,0.856592986083,0.264996791567,0.049257843893,0.004640740133]\end{aligned}}}
The maximum relative error is
{\displaystyle 1.68\times 10^{-7}\%}.
For F_{5/2}(h) the coefficients are
{\displaystyle {\begin{aligned}[][a_{0},a_{1},a_{2},a_{3}]&=[0.616634218997,0.108855827811,0.023453835635,0.000449332509]\\[][b_{1},b_{2},b_{3},b_{4},b_{5},b_{6}]&=[1.919948267476,1.635304362591,0.799392556572,0.240278859212,0.043178653945,0.003863334276]\end{aligned}}}
The maximum relative error is
{\displaystyle 4.98\times 10^{-8}\%}.
== Adhesive contact between elastic bodies ==
When two solid surfaces are brought into close proximity, they experience attractive van der Waals forces. R. S. Bradley's van der Waals model provides a means of calculating the tensile force between two rigid spheres with perfectly smooth surfaces. The Hertzian model of contact does not consider adhesion possible. However, in the late 1960s, several contradictions were observed when the Hertz theory was compared with experiments involving contact between rubber and glass spheres.
It was observed that, though Hertz theory applied at large loads, at low loads
the area of contact was larger than that predicted by Hertz theory,
the area of contact had a non-zero value even when the load was removed, and
there was even strong adhesion if the contacting surfaces were clean and dry.
This indicated that adhesive forces were at work. The Johnson-Kendall-Roberts (JKR) model and the Derjaguin-Muller-Toporov (DMT) models were the first to incorporate adhesion into Hertzian contact.
=== Bradley model of rigid contact ===
It is commonly assumed that the surface force between two atomic planes at a distance z from each other can be derived from the Lennard-Jones potential. With this assumption
{\displaystyle F(z)={\cfrac {16\gamma }{3z_{0}}}\left[\left({\cfrac {z}{z_{0}}}\right)^{-9}-\left({\cfrac {z}{z_{0}}}\right)^{-3}\right]}
where F is the force (positive in compression), 2γ is the total surface energy of both surfaces per unit area, and z_0 is the equilibrium separation of the two atomic planes.
The Bradley model applied the Lennard-Jones potential to find the force of adhesion between two rigid spheres. The total force between the spheres is found to be
{\displaystyle F_{a}(z)={\cfrac {16\gamma \pi R}{3}}\left[{\cfrac {1}{4}}\left({\cfrac {z}{z_{0}}}\right)^{-8}-\left({\cfrac {z}{z_{0}}}\right)^{-2}\right]~;~~{\frac {1}{R}}={\frac {1}{R_{1}}}+{\frac {1}{R_{2}}}}
where R_1, R_2 are the radii of the two spheres.
The two spheres separate completely when the pull-off force is achieved at z = z_0, at which point
{\displaystyle F_{a}=F_{c}=-4\gamma \pi R.}
=== JKR model of elastic contact ===
To incorporate the effect of adhesion in Hertzian contact, Johnson, Kendall, and Roberts formulated the JKR theory of adhesive contact using a balance between the stored elastic energy and the loss in surface energy. The JKR model considers the effect of contact pressure and adhesion only inside the area of contact. The general solution for the pressure distribution in the contact area in the JKR model is
{\displaystyle p(r)=p_{0}\left(1-{\frac {r^{2}}{a^{2}}}\right)^{\frac {1}{2}}+p_{0}'\left(1-{\frac {r^{2}}{a^{2}}}\right)^{-{\frac {1}{2}}}}
Note that in the original Hertz theory, the term containing p_0' was neglected on the ground that tension could not be sustained in the contact zone. For contact between two spheres
{\displaystyle p_{0}={\frac {2aE^{*}}{\pi R}};\quad p_{0}'=-\left({\frac {4\gamma E^{*}}{\pi a}}\right)^{\frac {1}{2}}}
where a is the radius of the area of contact, F is the applied force, 2γ is the total surface energy of both surfaces per unit contact area, R_i, E_i, ν_i (i = 1, 2) are the radii, Young's moduli, and Poisson's ratios of the two spheres, and
{\displaystyle {\frac {1}{R}}={\frac {1}{R_{1}}}+{\frac {1}{R_{2}}};\quad {\frac {1}{E^{*}}}={\frac {1-\nu _{1}^{2}}{E_{1}}}+{\frac {1-\nu _{2}^{2}}{E_{2}}}}
The approach distance between the two spheres is given by
{\displaystyle d={\frac {\pi a}{2E^{*}}}\left(p_{0}+2p_{0}'\right)={\frac {a^{2}}{R}}}
The Hertz equation for the area of contact between two spheres, modified to take into account the surface energy, has the form
{\displaystyle a^{3}={\frac {3R}{4E^{*}}}\left(F+6\gamma \pi R+{\sqrt {12\gamma \pi RF+(6\gamma \pi R)^{2}}}\right)}
When the surface energy is zero, γ = 0, the Hertz equation for contact between two spheres is recovered. When the applied load is zero, the contact radius is
{\displaystyle a^{3}={\frac {9R^{2}\gamma \pi }{E^{*}}}}
The tensile load at which the spheres are separated (i.e., a = 0) is predicted to be
{\displaystyle F_{\text{c}}=-3\gamma \pi R\,}
This force is also called the pull-off force. Note that this force is independent of the moduli of the two spheres. However, there is another possible solution for the value of a at this load. This is the critical contact area a_c, given by
{\displaystyle a_{\text{c}}^{3}={\frac {9R^{2}\gamma \pi }{4E^{*}}}}
If we define the work of adhesion as
{\displaystyle \Delta \gamma =\gamma _{1}+\gamma _{2}-\gamma _{12}}
where γ_1, γ_2 are the adhesive energies of the two surfaces and γ_12 is an interaction term, we can write the JKR contact radius as
{\displaystyle a^{3}={\frac {3R}{4E^{*}}}\left(F+3\Delta \gamma \pi R+{\sqrt {6\Delta \gamma \pi RF+(3\Delta \gamma \pi R)^{2}}}\right)}
The tensile load at separation is
{\displaystyle F=-{\frac {3}{2}}\Delta \gamma \pi R\,}
and the critical contact radius is given by
{\displaystyle a_{\text{c}}^{3}={\frac {9R^{2}\Delta \gamma \pi }{8E^{*}}}}
The critical depth of penetration is
{\displaystyle d_{\text{c}}={\frac {a_{c}^{2}}{R}}=\left(R^{\frac {1}{2}}{\frac {9\Delta \gamma \pi }{4E^{*}}}\right)^{\frac {2}{3}}}
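The JKR relations above are easy to evaluate directly. The following Python sketch computes the JKR contact radius and pull-off force from those expressions; the sphere radius, effective modulus and work of adhesion are illustrative assumptions, not values from the source:
<syntaxhighlight lang="python">
import numpy as np

def jkr_contact_radius(F, R, E_star, dgamma):
    """JKR contact radius for applied load F, effective radius R, and work of adhesion dgamma."""
    term = 3 * dgamma * np.pi * R
    return (3 * R / (4 * E_star) * (F + term + np.sqrt(2 * term * F + term**2))) ** (1 / 3)

def jkr_pull_off_force(R, dgamma):
    """Tensile load at which the spheres separate in the JKR model: -(3/2) pi R dgamma."""
    return -1.5 * np.pi * R * dgamma

# Assumed example: a 10 um radius sphere on a compliant substrate
R, E_star, dgamma = 10e-6, 1e9, 0.05      # m, Pa, J/m^2 (illustrative)
print("contact radius at zero load:", jkr_contact_radius(F=0.0, R=R, E_star=E_star, dgamma=dgamma))
print("pull-off force:", jkr_pull_off_force(R, dgamma))
</syntaxhighlight>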
=== DMT model of elastic contact ===
The Derjaguin–Muller–Toporov (DMT) model is an alternative model for adhesive contact which assumes that the contact profile remains the same as in Hertzian contact but with additional attractive interactions outside the area of contact.
The radius of contact between two spheres from DMT theory is
{\displaystyle a^{3}={\cfrac {3R}{4E^{*}}}\left(F+4\gamma \pi R\right)}
and the pull-off force is
{\displaystyle F_{c}=-4\gamma \pi R\,}
When the pull-off force is achieved, the contact area becomes zero and there is no singularity in the contact stresses at the edge of the contact area.
In terms of the work of adhesion Δγ,
{\displaystyle a^{3}={\cfrac {3R}{4E^{*}}}\left(F+2\Delta \gamma \pi R\right)}
and
{\displaystyle F_{c}=-2\Delta \gamma \pi R\,}
=== Tabor parameter ===
In 1977, Tabor showed that the apparent contradiction between the JKR and DMT theories could be resolved by noting that the two theories are the extreme limits of a single theory parametrized by the Tabor parameter μ, defined as
{\displaystyle \mu :={\frac {d_{c}}{z_{0}}}\approx \left[{\frac {R(\Delta \gamma )^{2}}{{E^{*}}^{2}z_{0}^{3}}}\right]^{\frac {1}{3}}}
where z_0 is the equilibrium separation between the two surfaces in contact. The JKR theory applies to large, compliant spheres for which μ is large. The DMT theory applies for small, stiff spheres with small values of μ.
Subsequently, Derjaguin and his collaborators, by applying Bradley's surface force law to an elastic half-space, confirmed that as the Tabor parameter increases, the pull-off force falls from the Bradley value 2πRΔγ to the JKR value (3/2)πRΔγ. More detailed calculations were later done by Greenwood, revealing the S-shaped load/approach curve which explains the jumping-on effect. A more efficient method of doing the calculations and additional results were given by Feng.
=== Maugis–Dugdale model of elastic contact ===
Further improvement to the Tabor idea was provided by Maugis who represented the surface force in terms of a Dugdale cohesive zone approximation such that the work of adhesion is given by
{\displaystyle \Delta \gamma =\sigma _{0}~h_{0}}
where σ_0 is the maximum force predicted by the Lennard-Jones potential and h_0 is the maximum separation obtained by matching the areas under the Dugdale and Lennard-Jones curves (see adjacent figure). This means that the attractive force is constant for z_0 ≤ z ≤ z_0 + h_0. There is no further penetration in compression. Perfect contact occurs in an area of radius a, and adhesive forces of magnitude σ_0 extend to an area of radius c > a. In the region a < r < c, the two surfaces are separated by a distance h(r), with h(a) = 0 and h(c) = h_0. The ratio m is defined as
{\displaystyle m:={\frac {c}{a}}}.
In the Maugis–Dugdale theory, the surface traction distribution is divided into two parts: one due to the Hertz contact pressure and the other from the Dugdale adhesive stress. Hertz contact is assumed in the region −a < r < a. The contribution to the surface traction from the Hertz pressure is given by
{\displaystyle p^{H}(r)=\left({\frac {3F^{H}}{2\pi a^{2}}}\right)\left(1-{\frac {r^{2}}{a^{2}}}\right)^{\frac {1}{2}}}
where the Hertz contact force F^H is given by
{\displaystyle F^{H}={\frac {4E^{*}a^{3}}{3R}}}
The penetration due to elastic compression is
{\displaystyle d^{H}={\frac {a^{2}}{R}}}
The vertical displacement at r = c is
{\displaystyle u^{H}(c)={\cfrac {1}{\pi R}}\left[a^{2}\left(2-m^{2}\right)\sin ^{-1}\left({\frac {1}{m}}\right)+a^{2}{\sqrt {m^{2}-1}}\right]}
and the separation between the two surfaces at r = c is
{\displaystyle h^{H}(c)={\frac {c^{2}}{2R}}-d^{H}+u^{H}(c)}
The surface traction distribution due to the adhesive Dugdale stress is
{\displaystyle p^{D}(r)={\begin{cases}-{\frac {\sigma _{0}}{\pi }}\cos ^{-1}\left[{\frac {2-m^{2}-{\frac {r^{2}}{a^{2}}}}{m^{2}\left(1-{\frac {r^{2}}{m^{2}a^{2}}}\right)}}\right]&\quad {\text{for}}\quad r\leq a\\-\sigma _{0}&\quad {\text{for}}\quad a\leq r\leq c\end{cases}}}
The total adhesive force is then given by
{\displaystyle F^{D}=-2\sigma _{0}m^{2}a^{2}\left[\cos ^{-1}\left({\frac {1}{m}}\right)+{\frac {1}{m^{2}}}{\sqrt {m^{2}-1}}\right]}
The compression due to Dugdale adhesion is
{\displaystyle d^{D}=-\left({\frac {2\sigma _{0}a}{E^{*}}}\right){\sqrt {m^{2}-1}}}
and the gap at r = c is
{\displaystyle h^{D}(c)=\left({\frac {4\sigma _{0}a}{\pi E^{*}}}\right)\left[{\sqrt {m^{2}-1}}\cos ^{-1}\left({\frac {1}{m}}\right)+1-m\right]}
The net traction on the contact area is then given by
{\displaystyle p(r)=p^{H}(r)+p^{D}(r)}
and the net contact force is
{\displaystyle F=F^{H}+F^{D}}.
When
{\displaystyle h(c)=h^{H}(c)+h^{D}(c)=h_{0}}
the adhesive traction drops to zero.
Non-dimensionalized values of a, c, F, and d are introduced at this stage, defined as
{\displaystyle {\bar {a}}=\alpha a~;~~{\bar {c}}:=\alpha c~;~~{\bar {d}}:=\alpha ^{2}Rd~;~~\alpha :=\left({\frac {4E^{*}}{3\pi \Delta \gamma R^{2}}}\right)^{\frac {1}{3}}~;~~{\bar {A}}:=\pi c^{2}~;~~{\bar {F}}={\frac {F}{\pi \Delta \gamma R}}}
In addition, Maugis proposed a parameter λ which is equivalent to the Tabor parameter μ. This parameter is defined as
{\displaystyle \lambda :=\sigma _{0}\left({\frac {9R}{2\pi \Delta \gamma {E^{*}}^{2}}}\right)^{\frac {1}{3}}\approx 1.16\mu }
where the step cohesive stress σ_0 equals the theoretical stress of the Lennard-Jones potential
{\displaystyle \sigma _{\text{th}}={\frac {16\Delta \gamma }{9{\sqrt {3}}z_{0}}}}
Zheng and Yu suggested another value for the step cohesive stress,
{\displaystyle \sigma _{0}=\exp \left(-{\frac {223}{420}}\right)\cdot {\frac {\Delta \gamma }{z_{0}}}\approx 0.588{\frac {\Delta \gamma }{z_{0}}}}
to match the Lennard-Jones potential, which leads to
{\displaystyle \lambda \approx 0.663\mu }
Then the net contact force may be expressed as
{\displaystyle {\bar {F}}={\bar {a}}^{3}-\lambda {\bar {a}}^{2}\left[{\sqrt {m^{2}-1}}+m^{2}\sec ^{-1}m\right]}
and the elastic compression as
{\displaystyle {\bar {d}}={\bar {a}}^{2}-{\frac {4}{3}}~\lambda {\bar {a}}{\sqrt {m^{2}-1}}}
The equation for the cohesive gap between the two bodies takes the form
{\displaystyle {\frac {\lambda {\bar {a}}^{2}}{2}}\left[\left(m^{2}-2\right)\sec ^{-1}m+{\sqrt {m^{2}-1}}\right]+{\frac {4\lambda {\bar {a}}}{3}}\left[{\sqrt {m^{2}-1}}\sec ^{-1}m-m+1\right]=1}
This equation can be solved to obtain values of c for various values of a and λ. For large values of λ, m → 1 and the JKR model is obtained. For small values of λ the DMT model is retrieved.
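The cohesive-gap equation is transcendental in m and is typically solved numerically. A minimal Python sketch, assuming illustrative values of λ and the dimensionless contact radius and using a bracketing root finder:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq

def gap_residual(m, a_bar, lam):
    """Residual of the Maugis-Dugdale cohesive gap equation; its root gives m = c/a."""
    s = np.sqrt(m**2 - 1.0)
    asec = np.arccos(1.0 / m)            # sec^{-1}(m) for m >= 1
    return (lam * a_bar**2 / 2 * ((m**2 - 2) * asec + s)
            + 4 * lam * a_bar / 3 * (s * asec - m + 1) - 1.0)

# Assumed example values for the Maugis parameter and dimensionless contact radius
lam, a_bar = 1.0, 1.2
m = brentq(gap_residual, 1.0 + 1e-9, 50.0, args=(a_bar, lam))
F_bar = a_bar**3 - lam * a_bar**2 * (np.sqrt(m**2 - 1) + m**2 * np.arccos(1 / m))
print(f"m = c/a = {m:.4f}, dimensionless load F_bar = {F_bar:.4f}")
</syntaxhighlight>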
=== Carpick–Ogletree-Salmeron (COS) model ===
The Maugis–Dugdale model can only be solved iteratively if the value of λ is not known a priori. The Carpick–Ogletree–Salmeron (COS) approximate solution (after Robert Carpick, D. Frank Ogletree and Miquel Salmeron) simplifies the process by using the following relation to determine the contact radius a:
{\displaystyle a=a_{0}(\beta )\left({\frac {\beta +{\sqrt {1-F/F_{c}(\beta )}}}{1+\beta }}\right)^{\frac {2}{3}}}
where a_0 is the contact radius at zero load, and β is a transition parameter that is related to λ by
{\displaystyle \lambda \approx -0.924\ln(1-1.02\beta )}
The case β = 1 corresponds exactly to JKR theory, while β = 0 corresponds to DMT theory. For intermediate cases 0 < β < 1 the COS model corresponds closely to the Maugis–Dugdale solution for 0.1 < λ < 5.
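A small sketch of the COS relation, with assumed values for the transition parameter, the zero-load contact radius and the pull-off force (all illustrative, not from the source):
<syntaxhighlight lang="python">
import numpy as np

def cos_contact_radius(F, F_c, a0, beta):
    """Carpick-Ogletree-Salmeron contact radius as a function of applied load F."""
    return a0 * ((beta + np.sqrt(1 - F / F_c)) / (1 + beta)) ** (2 / 3)

def cos_lambda(beta):
    """Maugis parameter corresponding to a given COS transition parameter beta."""
    return -0.924 * np.log(1 - 1.02 * beta)

beta = 0.5                         # assumed transition parameter (between DMT and JKR limits)
a0, F_c = 5e-9, -20e-9             # assumed zero-load contact radius (m) and pull-off force (N)
print(cos_lambda(beta), cos_contact_radius(F=0.0, F_c=F_c, a0=a0, beta=beta))
</syntaxhighlight>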
=== Influence of contact shape ===
Even in the presence of perfectly smooth surfaces, geometry can come into play in the form of the macroscopic shape of the contacting region. When a rigid punch with a flat but oddly shaped face is carefully pulled off its soft counterpart, detachment occurs not instantaneously but via detachment fronts that start at pointed corners and travel inwards, until the final configuration is reached, which for macroscopically isotropic shapes is almost circular. The main parameter determining the adhesive strength of flat contacts turns out to be the maximum linear size of the contact. The process of detachment, as observed experimentally, can be seen in the film.
== See also ==
== References ==
== External links ==
[1]: A MATLAB routine to solve the linear elastic contact mechanics problem entitled; "An LCP solution of the linear elastic contact mechanics problem" is provided at the file exchange at MATLAB Central.
[2]: Contact mechanics calculator.
[3]: detailed calculations and formulae of JKR theory for two spheres.
[5]: A Matlab code for Hertz contact analysis (includes line, point and elliptical cases).
[6]: JKR, MD, and DMT models of adhesion (Matlab routines). | Wikipedia/Frictionless_contact_mechanics |
ScienceDaily is an American website, launched in 1995, that aggregates science press releases and publishes them in lightly edited form (a practice called churnalism), similar to Phys.org and EurekAlert!.
== History ==
The site was founded by married couple Dan and Michele Hogan in 1995; Dan Hogan formerly worked in the public affairs department of Jackson Laboratory writing press releases. The site makes money from selling advertisements. As of 2010, the site said that it had grown "from a two-person operation to a full-fledged news business with worldwide contributors". At the time, it was run out of the Hogans' home, had no reporters, and only reprinted press releases. In 2012, Quantcast ranked it at 614 with 2.6 million U.S. visitors.
== Sections ==
As of August 2023, ScienceDaily mainly has five sections, Health, Tech, Enviro, Society, and Quirky, the last of which includes the top news.
== References ==
== External links ==
Official website
Alexa - ScienceDaily—Archived February 26, 2020, at the Wayback Machine | Wikipedia/Science_Daily |
Modons, or dipole eddy pairs, are eddies that can carry water over distances of more than 1000 km in the ocean, in different directions than usual sea currents such as Rossby waves, and much faster than other eddies.
== History ==
The name modon was coined by M. E. Stern as a pun on the joint USA-USSR oceanographic research program POLYMODE. The modon is a dipole-vortex solution to the potential-vorticity equation that was theorized in order to explain anomalous atmospheric blocking events and eddy structures in rotating fluids, and the first solution was obtained by Stern in 1975. However, this solution was imperfect because it was not continuous at the modon boundary, so other scientists, such as Larichev and Reznik (1976), proposed other solutions that corrected that problem.
Although modons were predicted theoretically in the 1970s, a pair of modons spinning in opposite directions was first identified traveling in 2017 over the Tasman Sea. The study of satellite images has allowed the identification of other modons, at least dating back to 1993, that hadn't been identified as such until then. The scientists that first discovered modons in the wild think that they can absorb small sea creatures and carry them at high speed over long ocean distances. They are also capable of affecting the transport of heat, carbon and nutrients over that area of the ocean. They move about ten times faster than a typical eddy, and can last for six months before being disengaged.
== Equatorial modon ==
In 2019, Rostami and Zeitlin reported the discovery of steady, long-living, slowly eastward-moving large-scale coherent twin cyclones, a so-called "equatorial modon," by means of a moist-convective rotating shallow water model. The crudest barotropic features of the MJO, such as eastward propagation along the equator, slow phase speed, a hydrodynamically coherent structure, and a convergent zone of moist convection, are captured by Rostami and Zeitlin's modon. Having an exact solution of streamlines for the internal and external regions of the asymptotic equatorial modon is another feature of this structure. It is shown that such eastward-moving coherent dipolar structures can be produced during geostrophic adjustment of localized large-scale pressure anomalies in a diabatic moist-convective environment on the equator.
== References == | Wikipedia/Modon_(fluid_dynamics) |
The Geophysical Fluid Dynamics Laboratory (GFDL) is a laboratory in the National Oceanic and Atmospheric Administration (NOAA) Office of Oceanic and Atmospheric Research (OAR). The current director is Venkatachalam Ramaswamy. It is one of seven Research Laboratories within NOAA's OAR.
GFDL is engaged in comprehensive long-lead-time research to expand our scientific understanding of the physical and chemical processes that govern the behavior of the atmosphere and the oceans as complex fluid systems. These systems can be modeled mathematically and their phenomenology can be studied by computer simulation methods.
GFDL's accomplishments include the development of the first climate models to study global warming, the first comprehensive ocean prediction codes, and the first dynamical models with significant skill in hurricane track and intensity predictions. Much current research within the laboratory is focused around the development of Earth System Models for assessment of natural and human-induced climate change.
== Accomplishments ==
The first global numerical simulations of the atmosphere — defining the basic structure of the numerical weather prediction and climate models that are still in use today throughout the world.
The first numerical simulation of the world ocean.
The initial definition and further elaborations of many of the central issues in global warming research, including water vapor feedback, polar amplification of temperature change, summer mid-continental dryness and cloud feedback.
The first coupled atmosphere-ocean climate models and the first simulations of global warming using these models (including the above feedback processes and the potential weakening of the Atlantic overturning circulation).
The development of a state-of-the-art hurricane model and its transfer to operations in the NOAA National Weather Service and the Navy.
== Scientific divisions ==
The GFDL has a diverse community of about 300 researchers, collaborators and staff, with many from Britain, India, China, Japan, France, and other countries around the world. The laboratory is currently organized into several scientific divisions (listed alphabetically below). There is also a large group of scientific programmers known as the Modeling Systems Division, as well as a large computer support group.
=== Atmospheric Physics ===
Current head: Venkatachalam Ramaswamy
This division's goal is to employ numerical models and observations of the Earth System to characterize and quantify atmospheric physical processes, particularly those involving greenhouse gases, aerosols, water vapor, and clouds, and their roles in atmospheric general circulation, weather and climate.
=== Biogeochemistry, Atmospheric Chemistry, and Ecosystems ===
Current head: John P. Dunne
This division's goal is to develop and use the GFDL’s earth system models to create a more comprehensive understanding of the interactions between physical, chemical, and ecological drivers and feedbacks on the earth system.
=== Ocean and Cryosphere ===
Current head: Rong Zhang
This division's goal is to conduct leading research to understand ocean and cryosphere changes and variability; their interactions with weather, climate, sea level, and ecosystems; and advance prediction and projection of future changes. To support this goal, we are developing state-of-the-science numerical models for the ocean, sea ice, land ice, and fully coupled models.
=== Seasonal-to-Decadal Variability and Predictability ===
Current head: Thomas L. Delworth
This division's goal is to improve our understanding of climate variability, predictability and change on time scales ranging from seasonal to multidecadal. This includes internal variability of the coupled climate system, and the response to changing radiative forcing. We are actively working to develop a next-generation experimental seasonal-to-decadal prediction system.
=== Weather and Climate Dynamics ===
Current head: Thomas Knutson
This division's goal is to develop innovative physical and dynamical components for the next generation of earth system models, with special emphasis on high resolution (1–25 km) atmospheric model development. We aim to explore the frontiers of weather and climate modeling and analysis, and to improve the predictions of high-impact events such as hurricanes, floods, severe storms, and droughts, from weather to seasonal and interannual (2 year) time-scales.
== Facilities ==
The GFDL is located at Princeton University's Forrestal Campus in Princeton, NJ.
Since March 2011, the GFDL no longer possesses an on-site supercomputer. They instead utilize a massively parallel Cray supercomputer with over 140,000 processor cores, which is currently located at Oak Ridge National Laboratory in Oak Ridge, Tennessee. This contrasts with their previous system architecture, which consisted of eight Silicon Graphics Altix computers, each housing 1024 processor cores.
Hardware updates occur, on average, every 18 months.
The GFDL has been using high-performance computing systems to perform numerical modeling since the 1950s.
== Alumni ==
Joseph Smagorinsky: GFDL's first director
Jerry D. Mahlman: GFDL's second director
Ants Leetmaa: GFDL's third director
Isaac Held
Kirk Bryan (oceanographer)
Syukuro Manabe
Yoshio Kurihara
Kikuro Miyakoda
Isidoro Orlanski
Gareth Williams
Frank Lipps
Abraham Oort
== See also ==
Modular Ocean Model
GFDL CM2.X
== References ==
== External links ==
National Oceanic & Atmospheric Administration
NOAA's Office of Oceanic and Atmospheric Research
Geophysical Fluid Dynamics Laboratory
NOAA GFDL ranking among the Top 500 Supercomputer Sites
National Climate-Computing Research Center | Wikipedia/Geophysical_Fluid_Dynamics_Laboratory |
Schlieren photography is a process for photographing fluid flow. Invented by the German physicist August Toepler in 1864 to study supersonic motion, it is widely used in aeronautical engineering to photograph the flow of air around objects.
The process works by imaging the deflections of light rays that are refracted by a moving fluid, allowing normally unobservable changes in a fluid's refractive index to be seen. Because changes to flow rate directly affect the refractive index of a fluid, one can therefore photograph a fluid's flow rate (as well as other changes to density, temperature, and pressure) by viewing changes to its refractive index.
Using the schlieren photography process, other unobservable fluid changes can also be seen, such as convection currents, and the standing waves used in acoustic levitation.
== Classical optical system ==
The classical implementation of an optical schlieren system uses light from a single collimated source shining on, or from behind, a target object. Variations in refractive index caused by density gradients in the fluid distort the collimated light beam. This distortion creates a spatial variation in the intensity of the light, which can be visualised directly with a shadowgraph system.
Classical schlieren imaging systems appear in two configurations, using either one or two mirrors. In each case, a transparent object is illuminated with collimated or nearly-collimated light. Rays that are not deflected by the object proceed to their focal point, where they are blocked by a knife edge. Rays that are deflected by the object, have a chance of passing the knife edge without being blocked. As a result, one can place a camera after the knife edge such that the image of the object will exhibit intensity variations due to the deflections of the rays. The result is a set of lighter and darker patches corresponding to positive and negative fluid density gradients in the direction normal to the knife edge. When a knife edge is used, the system is generally referred to as a schlieren system, which measures the first derivative of density in the direction of the knife edge. If a knife edge is not used, the system is generally referred to as a shadowgraph system, which measures the second derivative of density.
In the two-mirror schlieren system (sometimes called the Z-configuration), the source is collimated by the first mirror, the collimated light traverses the object and then is focused by the second mirror. This generally allows higher resolution imaging (seeing finer details in the object) than is possible using the single-mirror configuration.
Schlieren system designs
If the fluid flow is uniform, the image will be steady, but any turbulence will cause scintillation, the shimmering effect that can be seen over heated surfaces on a hot day. To visualise instantaneous density profiles, a short-duration flash (rather than continuous illumination) may be used.
== Focusing schlieren optical system ==
In the mid 20th century, R. A. Burton developed an alternative form of schlieren photography, which is now usually called focusing schlieren or lens-and-grid schlieren, based on a suggestion by Hubert Schardin. Focusing schlieren systems generally retain the characteristic knife edge to produce contrast, but instead of using collimated light and a single knife edge, they use an illumination pattern of repeated edges with a focusing imaging system.
The basic idea is that the illumination pattern is imaged onto a geometrically congruent cutoff pattern (essentially a multiplicity of knife edges) with focusing optics, while density gradients lying between the illumination pattern and the cutoff pattern are imaged, typically by a camera system. Like in classical schlieren, the distortions produce regions of brightening or darkening corresponding to the position and direction of the distortion, because they redirect rays either away from or onto the opaque part of the cutoff pattern. While in classical schlieren, distortions over the whole beam path are visualized equally, in focusing schlieren, only distortions in the object field of the camera are clearly imaged. Distortions away from the object field become blurred, so this technique allows some degree of depth selection. It also has the advantage that a wide variety of illuminated backgrounds can be used, since collimation is not required. This allows construction of projection-based focusing schlieren systems, which are much easier to build and align than classical schlieren systems. The requirement of collimated light in classical schlieren is often a substantial practical barrier for constructing large systems due to the need for the collimating optic to be the same size as the field of view. Focusing schlieren systems can use compact optics with a large background illumination pattern, which is particularly easy to produce with a projection system. For systems with large demagnification, the illumination pattern needs to be around twice larger than the field of view to allow defocusing of the background pattern.
== Background-oriented techniques ==
Background-oriented schlieren technique (BOS) relies on measuring or visualizing shifts in focused images. In these techniques, the background and the schlieren object (the distortion to be visualized) are both in focus and the distortion is detected because it moves part of the background image relative to its original position. Because of this focus requirement, they tend to be used for large-scale applications where both the schlieren object and the background are distant (typically beyond the hyperfocal distance of the optical system). Since these systems require no additional optics aside from a camera, they are often the simplest to construct but they are usually not as sensitive as other types of schlieren systems, with the sensitivity being limited by the camera resolution. The technique also requires a suitable background image. In some cases, the background may be provided by the experimenter, such as a random speckle pattern or sharp line, but naturally occurring features such as landscapes or bright light sources such as the sun and moon can also be used. Background-oriented schlieren is most often performed using software techniques such as digital image correlation and optical flow analysis to perform synthetic schlieren, but it is possible to achieve the same effect in streak imaging with an analog optical system.
== Variations and applications ==
Variations on the optical schlieren method include the replacement of the knife-edge by a coloured target, resulting in rainbow schlieren which can assist in visualising the flow. Different edge configurations such as concentric rings can also give sensitivity to variable gradient directions, and programmable digital edge generation has been demonstrated as well using digital displays and modulators. The adaptive optics pyramid wavefront sensor is a modified form of schlieren (having two perpendicular knife edges formed by the vertices of a refracting square pyramid).
Complete schlieren optical systems can be built from components, or bought as commercially available instruments. Details of theory and operation are given in Settles' 2001 book. The USSR once produced a number of sophisticated schlieren systems based on the Maksutov telescope principle, many of which still survive in the former Soviet Union and China.
Schlieren photography is used to visualise the flows of the media, which are themselves transparent (hence, their movement cannot be seen directly), but form refractive index gradients, which become visible in schlieren images either as shades of grey or even in colour. Refractive index gradients can be caused either by changes of temperature/pressure of the same fluid or by the variations of the concentration of components in mixtures and solutions. A typical application in gas dynamics is the study of shock waves in ballistics and supersonic or hypersonic vehicles. Flows caused by heating, physical absorption or chemical reactions can be visualised. Thus schlieren photography can be used in many engineering problems such as heat transfer, leak detection, study of boundary layer detachment, and characterization of optics.
== See also ==
Laser schlieren deflectometry
Mach–Zehnder interferometer
Moire deflectometry
Schlieren
Schlieren imaging
Shadowgraph
== References ==
== External links ==
Grady, Denise (2008-10-28). "The Mysterious Cough, Caught on Film". The New York Times. p. D3. Retrieved 2008-10-28.
"Schlieren Photography – How Does It Work?". ian.org. 2010-01-22. Retrieved 2020-04-04.
"High-speed Ballistics Imaging: A Guest Blog from Nathan Boor of Aimed Research". mousegunaddict.blogspot.com. 2013-06-10. Retrieved 2020-04-04.
Buckner, Benjamin D.; L'Esperance, Drew (2013). "Digital synchroballistic schlieren camera for high-speed photography of bullets and rocket sleds". Optical Engineering. 52 (8): 083105. Bibcode:2013OptEn..52h3105B. doi:10.1117/1.OE.52.8.083105. ISSN 0091-3286.
Archived at Ghostarchive and the Wayback Machine: "Schlieren Optical System & Aerodynamic Tests 1958 Shell Oil Co. Educational Film XD13174". PeriscopeFilm. 2020-04-03 – via YouTube.
"Schlieren Optics". Harvard Natural Sciences Lecture Demonstrations. Retrieved 2024-02-03.
RolfeFollow, Bryan (2015-11-25). "Schlieren Imaging: How to See Air Flow!". Instructables. Retrieved 2024-02-03. | Wikipedia/Schlieren_photograph |
Blade element theory (BET) is a mathematical process originally designed by William Froude (1878), David W. Taylor (1893) and Stefan Drzewiecki (1885) to determine the behavior of propellers. It involves breaking a blade down into several small parts then determining the forces on each of these small blade elements. These forces are then integrated along the entire blade and over one rotor revolution in order to obtain the forces and moments produced by the entire propeller or rotor. One of the key difficulties lies in modelling the induced velocity on the rotor disk. Because of this the blade element theory is often combined with momentum theory to provide additional relationships necessary to describe the induced velocity on the rotor disk, producing blade element momentum theory. At the most basic level of approximation a uniform induced velocity on the disk is assumed:
{\displaystyle v_{i}={\sqrt {{\frac {T}{A}}\cdot {\frac {1}{2\rho }}}}.}
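As a rough illustration of this uniform-inflow estimate, the following Python sketch simply evaluates the formula; the thrust, air density and rotor radius below are arbitrary illustrative inputs, not data for any particular rotor.

import math

T = 10000.0           # rotor thrust, N (illustrative value)
rho = 1.225           # air density, kg/m^3
R = 5.0               # rotor radius, m (illustrative value)
A = math.pi * R**2    # disk area

# uniform induced velocity assumed over the whole disk
v_i = math.sqrt(T / A * 1.0 / (2.0 * rho))
print(round(v_i, 2), "m/s")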
Alternatively the variation of the induced velocity along the radius can be modeled by breaking the blade down into small annuli and applying the conservation of mass, momentum and energy to every annulus. This approach is sometimes called the Froude–Finsterwalder equation.
If the blade element method is applied to helicopter rotors in forward flight, it is necessary to consider the flapping motion of the blades as well as the longitudinal and lateral distribution of the induced velocity on the rotor disk. The simplest forward-flight inflow models are first-harmonic models.
== Simple blade element theory ==
While the momentum theory is useful for determining ideal efficiency, it gives a very incomplete account of the action of screw propellers, neglecting among other things the torque. In order to investigate propeller action in greater detail, the blades are considered as made up of a number of small elements, and the air forces on each element are calculated. Thus, while the momentum theory deals with the flow of the air, the blade-element theory deals primarily with the forces on the propeller blades. The idea of analyzing the forces on elementary strips of propeller blades was first published by William Froude in 1878. It was also worked out independently by Drzewiecki and given in a book on mechanical flight published in Russia seven years later, in 1885. Again, in 1907, Lanchester published a somewhat more advanced form of the blade-element theory without knowledge of previous work on the subject. The simple blade-element theory is usually referred to, however, as the Drzewiecki theory, for it was Drzewiecki who put it into practical form and brought it into general use. Also, he was the first to sum up the forces on the blade elements to obtain the thrust and torque for a whole propeller and the first to introduce the idea of using airfoil data to find the forces on the blade elements.
In the Drzewiecki blade-element theory the propeller is considered a warped or twisted airfoil, each segment of which follows a helical path and is treated as a segment of an ordinary wing. It is usually assumed in the simple theory that airfoil coefficients obtained from wind tunnel tests of model wings (ordinarily tested with an aspect ratio of 6) apply directly to propeller blade elements of the same cross-sectional shape.
The air flow around each element is considered two-dimensional and therefore unaffected by the adjacent parts of the blade. The independence of the blade elements at any given radius with respect to the neighbouring elements has been established theoretically and has also been shown to be substantially true for the working sections of the blade by special experiments made for the purpose. It is also assumed that the air passes through the propeller with no radial flow (i.e., there is no contraction of the slipstream in passing through the propeller disc) and that there is no blade interference.
=== Aerodynamic forces on a blade element ===
Consider the element at radius r, shown in Fig. 1, which has the infinitesimal length dr and the width b. The motion of the element in an aircraft propeller in flight is along a helical path determined by the forward velocity V of the aircraft and the tangential velocity 2πrn of the element in the plane of the propeller disc, where n represents the revolutions per unit time. The velocity of the element with respect to the air Vr is then the resultant of the forward and tangential velocities, as shown in Fig. 2. Call the angle between the direction of motion of the element and the plane of rotation Φ, and the blade angle β. The angle of attack α of the element relative to the air is then
{\displaystyle \alpha =\beta -\phi }.
Applying ordinary airfoil coefficients, the lift force on the element is:
{\displaystyle dL={\frac {1}{2}}\rho V_{r}^{2}C_{L}b\,dr.}
Let γ be the angle between the lift component and the resultant force, or γ = arctan(D/L). Then the total resultant air force on the element is:
{\displaystyle dR={\frac {{\frac {1}{2}}\rho V_{r}^{2}C_{L}b\,dr}{\cos \gamma }}.}
The thrust of the element is the component of the resultant force in the direction of the propeller axis (Fig. 2), or
{\displaystyle {\begin{aligned}dT&=dR\cos(\phi +\gamma )\\&={\frac {{\frac {1}{2}}\rho V_{r}^{2}C_{L}b\cos(\phi +\gamma )}{\cos \gamma }}dr,\end{aligned}}}
and since Vr = V/sin Φ,
{\displaystyle dT={\frac {{\frac {1}{2}}\rho V^{2}C_{L}b\cos(\phi +\gamma )}{\sin ^{2}\phi \cos \gamma }}dr.}
For convenience let
{\displaystyle K={\frac {C_{L}b}{\sin ^{2}\phi \cos \gamma }}}
and
{\displaystyle T_{c}=K\cos(\phi +\gamma ).}
Then
{\displaystyle dT={\frac {1}{2}}\rho V^{2}T_{c}\,dr,}
and the total thrust for the propeller (of B blades) is:
{\displaystyle T={\frac {1}{2}}\rho V^{2}B\int _{0}^{R}T_{c}\,dr.}
Referring again to Fig. 2, the tangential or torque force is
{\displaystyle dF=dR\sin(\phi +\gamma ),}
and the torque on the element is
{\displaystyle dQ=r\,dR\,\sin(\phi +\gamma ),}
which, if Qc = Kr sin(Φ + γ), can be written
{\displaystyle dQ={\frac {1}{2}}\rho V^{2}Q_{c}dr.}
The expression for the torque of the whole propeller is therefore
{\displaystyle Q={\frac {1}{2}}\rho V^{2}B\int _{0}^{R}Q_{c}dr.}
The horsepower absorbed by the propeller, or the torque horsepower, is
{\displaystyle QHP={\frac {2\pi nQ}{550}}}
and the efficiency is
{\displaystyle \eta ={\frac {THP}{QHP}}={\frac {TV}{2\pi nQ}}.}
=== Efficiency ===
Because of the variation of the blade width, angle, and airfoil section along the blade, it is not possible to obtain a simple expression for the thrust, torque, and efficiency of propellers in general. A single element at about two-thirds or three-fourths of the tip radius is, however, fairly representative of the whole propeller, and it is therefore interesting to examine the expression for the efficiency of a single element. The efficiency of an element is the ratio of the useful power to the power absorbed, or
{\displaystyle {\begin{aligned}\eta &={\frac {dTV}{dQ2\pi n}}\\&={\frac {dR\cos(\phi +\gamma )V}{dR\sin(\phi +\gamma )2\pi nr}}\\&={\frac {\tan \phi }{\tan(\phi +\gamma )}}.\end{aligned}}}
Now tan Φ is the ratio of the forward to the tangential velocity, and tan γ = D/L. According to the simple blade-element theory, therefore, the efficiency of an element of a propeller depends only on the ratio of the forward to the tangential velocity and on the D/L of the airfoil section.
The value of Φ which gives the maximum efficiency for an element, as found by differentiating the efficiency with respect to Φ and equating the result to zero, is
{\displaystyle \phi =45^{\circ }-{\frac {\gamma }{2}}}
The variation of efficiency with Φ is shown in Fig. 3 for two extreme values of γ. The efficiency rises to a maximum at 45° − γ/2 and then falls to zero again at 90° − γ. With an L/D of 28.6 the maximum possible efficiency of an element according to the simple theory is 0.932, while with an L/D of 9.5 it is only 0.812. At the values of Φ at which the most important elements of the majority of propellers work (10° to 15°) the effect of L/D on efficiency is still greater. Within the range of 10° to 15°, the curves in Fig. 3 indicate that it is advantageous to have both the L/D of the airfoil sections and the angle Φ (or the advance per revolution, and consequently the pitch) as high as possible.
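The relation η = tan Φ / tan(Φ + γ) and the optimum angle Φ = 45° − γ/2 are easy to check numerically. The short Python sketch below does so for the two L/D values quoted above; it is only a verification of the formulas, not part of any standard propeller-design code.

import math

def max_element_efficiency(L_over_D):
    gamma = math.degrees(math.atan(1.0 / L_over_D))   # drag-to-lift angle
    phi = 45.0 - gamma / 2.0                          # optimum path angle
    return math.tan(math.radians(phi)) / math.tan(math.radians(phi + gamma))

print(round(max_element_efficiency(28.6), 3))  # about 0.932
print(round(max_element_efficiency(9.5), 3))   # about 0.81, close to the 0.812 quoted above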
=== Limitations ===
According to momentum theory, a velocity is imparted to the air passing through the propeller, and half of this velocity is given to the air by the time it reaches the propeller plane. This increase of velocity of the air as it passes into the propeller disc is called the inflow velocity. It is always found where there is pressure discontinuity in a fluid. In the case of a wing moving horizontally, the air is given a downward velocity, as shown in Fig. 4., and theoretically half of this velocity is imparted in front of and above the wing, and the other half below and behind.
This induced downflow is present in the model wing tests from which the airfoil coefficients used in the blade-element theory are obtained; the inflow indicated by the momentum theory is therefore automatically taken into account in the simple blade-element theory. However, the induced downflow is widely different for different aspect ratios, being zero for infinite aspect ratio. Most model airfoil tests are made with rectangular wings having an arbitrarily chosen aspect ratio of 6, and there is no reason to suppose that the downflow in such a test corresponds to the inflow for each element of a propeller blade. In fact, the general conclusion drawn from an exhaustive series of tests, in which the pressure distribution was measured over 12 sections of a model propeller running in a wind tunnel, is that the lift coefficient of the propeller blade element differs considerably from that measured at the same angle of attack on an airfoil of aspect ratio 6. This is one of the greatest weaknesses of the simple blade-element theory.
Another weakness is that the interference between the propeller blades is not considered. The elements of the blades at any particular radius form a cascade similar to a multiplane with negative stagger, as shown in Fig. 5. Near the tips where the gap is large the interference is very small, but in toward the blade roots it is quite large.
In actual propellers, there is a tip loss which the blade-element theory does not take into consideration. The thrust and torque forces as computed by means of the theory are therefore greater for the elements near the tip than those found by experiment.
In order to eliminate scale effect, the wind tunnel tests on model wings should be run at the same value of Reynolds number (scale) as the corresponding elements in the propeller blades. Airfoil characteristics measured at such a low scale as, for example, an air velocity of 30 m.p.h. with a 3-in. chord airfoil, show peculiarities not found when the tests are run at a scale comparable with that of propeller elements. The standard propeller section characteristics given in Figs. 11, 12, 13, and 14 were obtained from high Reynolds-number tests in the Variable Density Tunnel of the NACA, and, fortunately, for all excepting the thickest of these sections there is very little difference in characteristics at high and low Reynolds numbers. These values may be used with reasonable accuracy as to scale for propellers operating at tip speeds well below the speed of sound in air, and therefore relatively free from any effects of compressibility.
The poor accuracy of the simple blade-element theory is very well shown in a report by Durand and Lesley, in which they have computed the performance of a large number of model propellers (80) and compared the computed values with the actual performances obtained from tests on the model propellers themselves. In the words of the authors: The divergencies between the two sets of results, while showing certain elements of consistency, are on the whole too large and too capriciously distributed to justify the use of the theory in this simplest form for other than approximate estimates or for comparative purposes. The airfoils were tested in two different wind tunnels and in one of the tunnels at two different air velocities, and the propeller characteristics computed from the three sets of airfoil data differ by as much as 28%, illustrating quite forcibly the necessity for having the airfoil tests made at the correct scale.
In spite of all its inaccuracies the simple blade-element theory has been a useful tool in the hands of experienced propeller designers. With it a skilful designer having a knowledge of suitable empirical factors can design propellers which usually fit the main conditions imposed upon them fairly well in that they absorb the engine power at very nearly the proper revolution speed. They are not, however, necessarily the most efficient propellers for their purpose, for the simple theory is not sufficiently accurate to show slight differences in efficiency due to changes in pitch distribution, plan forms, etc.
=== Example ===
In choosing a propeller to analyze, it is desirable that its aerodynamic characteristics be known so that the accuracy of the calculated results can be checked. It is also desirable that the analysis be made of a propeller operating at a relatively low tip speed in order to be free from any effects of compressibility and that it be running free from body interference. The only propeller tests which satisfy all of these conditions are tests of model propellers in a wind tunnel. We shall therefore take for our example the central or master propeller of a series of model wood propellers of standard Navy form, tested by Dr. W. F. Durand at Stanford University. This is a two-bladed propeller 3 ft. in diameter, with a uniform geometrical pitch of 2.1 ft. (or a pitch-diameter ratio of 0.7). The blades have standard propeller sections based on the R.A.F-6 airfoil (Fig. 6), and the blade widths, thicknesses, and angles are as given in the first part of Table I. In our analysis we shall consider the propeller as advancing with a velocity of 40 m.p.h. and turning at the rate of 1,800 r.p.m.
For the section at 75% of the tip radius, the radius is 1.125 ft., the blade width is 0.198 ft., the thickness ratio is 0.107, the lower camber is zero, and the blade angle β is 16.6°.
The forward velocity
{\displaystyle {\begin{aligned}V&=40\ \mathrm {m.p.h.} \\&={\frac {40\times 88}{60}}\\&=58.65\ {\text{ft./sec.}},\end{aligned}}}
and
{\displaystyle {\begin{aligned}n&={\frac {1800}{60}}\\&=30\ {\text{r.p.s.}}\end{aligned}}}
The path angle
{\displaystyle {\begin{aligned}\phi &=\arctan {\frac {V}{2\pi rn}}\\&=\arctan {\frac {58.65}{2\pi \times 1.125\times 30}}\\&=15.5^{\circ }\end{aligned}}}
The angle of attack is therefore
{\displaystyle {\begin{aligned}\alpha &=\beta -\phi \\&=16.6^{\circ }-15.5^{\circ }\\&=1.1^{\circ }\end{aligned}}}
From Fig. 7, for a flat-faced section of thickness ratio 0.107 at an angle of attack of 1.1°, γ = 3.0°, and, from Fig. 9, CL = 0.425. (For sections having lower camber, CL should be corrected in accordance with the relation given in Fig. 8, and γ is given the same value as that for a flat-faced section having the upper camber only.)
Then
{\displaystyle {\begin{aligned}K&={\frac {C_{L}b}{\sin ^{2}\phi \cos \gamma }}\\&={\frac {0.425\times 0.198}{0.2672^{2}\times 0.999}}\\&=1.180,\end{aligned}}}
and,
{\displaystyle {\begin{aligned}T_{C}&=K\cos(\phi +\gamma )\\&=1.180\times \cos 18.5^{\circ }\\&=1.119.\end{aligned}}}
Also,
{\displaystyle {\begin{aligned}Q_{C}&=Kr\sin(\phi +\gamma )\\&=1.180\times 1.125\times \sin 18.5^{\circ }\\&=0.421.\end{aligned}}}
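The single-element calculation above is straightforward to reproduce by machine. The Python sketch below repeats it for the 75% radius station; the airfoil data (CL = 0.425, γ = 3.0°) are the values read from the charts and are treated here as given inputs, and small differences from the tabulated numbers come only from rounding of the path angle.

import math

V = 40 * 88 / 60      # forward speed, ft/s
n = 1800 / 60         # revolutions per second
r, b, beta = 1.125, 0.198, 16.6   # radius (ft), blade width (ft), blade angle (deg)
C_L, gamma = 0.425, 3.0           # airfoil data read from the charts (assumed inputs)

phi = math.degrees(math.atan(V / (2 * math.pi * r * n)))   # path angle, deg
alpha = beta - phi                                          # angle of attack, deg
K = C_L * b / (math.sin(math.radians(phi)) ** 2 * math.cos(math.radians(gamma)))
T_c = K * math.cos(math.radians(phi + gamma))
Q_c = K * r * math.sin(math.radians(phi + gamma))

print(round(phi, 1), round(alpha, 1), round(K, 2), round(T_c, 2), round(Q_c, 2))
# approximately 15.5, 1.1, 1.18, 1.12, 0.42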
The computations of Tc and Qc for six representative elements of the propeller are given in convenient tabular form in Table I, and the values of Tc and Qc are plotted against radius in Fig. 9. The curves drawn through these points are sometimes referred to as the torque grading curves. The areas under the curve represent
{\displaystyle \int _{0}^{R}T_{c}dr}
and
{\displaystyle \int _{0}^{R}Q_{c}dr,}
these being the expressions for the total thrust and torque per blade per unit of dynamic pressure due to the velocity of advance. The areas may be found by means of a planimeter, proper consideration, of course, being given to the scales of values, or the integration may be performed approximately (but with satisfactory accuracy) by means of Simpson's rule.
In using Simpson's rule the radius is divided into an even number of equal parts, such as ten. The ordinate at each division can then be found from the grading curve. If the original blade elements divide the blade into an even number of equal parts it is not necessary to plot the grading curves, but the curves are advantageous in that they show graphically the distribution of thrust and torque along the blade. They also provide a check upon the computations, for incorrect points will not usually form a fair curve.
If the abscissas are denoted by r and the ordinates at the various divisions by y1, y2, ..., y11, according to Simpson’s rule the area with ten equal divisions will be
{\displaystyle \int _{0}^{R}F(r)\,dr={\frac {\Delta r}{3}}[y_{1}+2(y_{3}+y_{5}+y_{7}+y_{9})+4(y_{2}+y_{4}+y_{6}+y_{8}+y_{10})+y_{11}].}
The area under the thrust-grading curve of our example is therefore
{\displaystyle {\begin{aligned}\int _{0}^{R}T_{c}dr&={\frac {0.15}{3}}[0+2(0.038+0.600+1.050+1.091)+4(0+0.253+0.863+1.120+0.912)+0]\\&=0.9075,\end{aligned}}}
and in like manner
{\displaystyle \int _{0}^{R}Q_{c}\,dr=0.340.}
The above integrations have also been made by means of a planimeter, and the average results from five trials agree with those obtained by means of Simpson’s rule within one-fourth of one per cent.
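For readers who prefer to check the quadrature by machine, the following Python sketch applies Simpson's rule to the eleven thrust-grading ordinates listed above (the equally spaced stations are 0.15 ft apart). It is only a verification of the hand computation, not part of the original method.

def simpson(y, dr):
    # Simpson's rule for an odd number of equally spaced ordinates y[0]..y[10]
    return dr / 3 * (y[0] + y[-1] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]))

Tc = [0, 0, 0.038, 0.253, 0.600, 0.863, 1.050, 1.120, 1.091, 0.912, 0]
print(simpson(Tc, 0.15))   # about 0.9075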
The thrust of the propeller in standard air is
{\displaystyle {\begin{aligned}T&={\frac {1}{2}}\rho V^{2}B\int _{0}^{R}T_{c}dr\\&={\frac {1}{2}}\times 0.002378\times 58.65^{2}\times 2\times 0.9075\\&=7.42\ \mathrm {lb.} ,\end{aligned}}}
and the torque is
{\displaystyle {\begin{aligned}Q&={\frac {1}{2}}\rho V^{2}B\int _{0}^{R}Q_{c}dr\\&={\frac {1}{2}}\times 0.002378\times 58.65^{2}\times 2\times 0.340\\&=2.78\ \mathrm {lb.ft.} \end{aligned}}}
The power absorbed by the propeller is
{\displaystyle {\begin{aligned}P&=2\pi nQ\\&=2\times \pi \times 30\times 2.78\\&=524\ \mathrm {ft.lb./sec} .\end{aligned}}}
or
{\displaystyle {\begin{aligned}HP&={\frac {524}{550}}\\&=0.953,\end{aligned}}}
and the efficiency is
{\displaystyle {\begin{aligned}\eta &={\frac {TV}{2\pi nQ}}\\&={\frac {7.42\times 58.65}{524}}\\&=0.830.\end{aligned}}}
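Carrying the two grading-curve integrals through to the final figures can likewise be scripted. The sketch below simply re-evaluates the expressions above in Python; all numbers are taken from the worked example.

import math

rho = 0.002378            # slug/ft^3, standard air
V, B, n = 58.65, 2, 30    # forward speed (ft/s), number of blades, rev/s
int_Tc, int_Qc = 0.9075, 0.340   # areas under the grading curves

T = 0.5 * rho * V**2 * B * int_Tc      # thrust, lb
Q = 0.5 * rho * V**2 * B * int_Qc      # torque, lb.ft
P = 2 * math.pi * n * Q                # power absorbed, ft.lb/s
print(round(T, 2), round(Q, 2), round(P), round(P / 550, 3), round(T * V / P, 3))
# about 7.42, 2.78, 524, 0.953, 0.830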
The above-calculated performance compares with that measured in the wind tunnel as follows:
The power as calculated by the simple blade-element theory is in this case over 11% too low, the thrust is about 5 % low, and the efficiency is about 8% high. Of course, a differently calculated performance would have been obtained if propeller-section characteristics from tests on the same series of airfoils in a different wind tunnel had been used, but the variable-density tunnel tests are probably the most reliable of all.
Some light may be thrown upon the discrepancy between the calculated and observed performance by referring again to the pressure distribution tests on a model propeller. In these tests the pressure distribution over several sections of a propeller blade was measured while the propeller was running in a wind tunnel, and the three following sets of tests were made on corresponding airfoils:
The results of these three sets of airfoil tests are shown for the section at three-fourths of the tip radius in Fig. 10, which has been taken from the report. It will be noticed that the coefficients of resultant force CR agree quite well for the median section of the airfoil of aspect ratio 6 and the corresponding section of the special propeller-blade airfoil but that the resultant force coefficient for the entire airfoil of aspect ratio 6 is considerably lower. It is natural, then, that the calculated thrust and power of a propeller should be too low when based on airfoil characteristics for aspect ratio 6.
== Modifications ==
Many modifications to the simple blade-element theory have been suggested in order to make it more complete and to improve its accuracy. Most of these modified theories attempt to take into account the blade interference, and, in some of them, attempts are also made to eliminate the inaccuracy due to the use of airfoil data from tests on wings having a finite aspect ratio, such as 6. The first modification to be made was in the nature of a combination of the simple Drzewiecki theory with the Froude momentum theory.
== Diagrams ==
Standard propeller sections based on R.A.F.-6, infinite aspect ratio.
Attribution
This article incorporates text from this source, which is in the public domain: Weick, Fred Ernest (1899). Aircraft propeller design. New York, McGraw-Hill Book Company, inc.
== See also ==
Circulation (fluid dynamics)
Computational fluid dynamics
== External links ==
Blade Element Analysis for Propellers
Helicopter Theory - Blade Element Theory in Forward Flight from Aerospaceweb.org
Blade element theory
Stefan Drzewiecki 1903
QBlade: Open Source Blade Element Method Software from H.F.I. TU Berlin
NASA-TM-102219: A survey of nonuniform inflow models for rotorcraft flight dynamics and control applications, by Robert Chen, NASA
== References == | Wikipedia/Blade_element_theory |
In computational fluid dynamics, the volume of fluid (VOF) method is a family of free-surface modelling techniques, i.e. numerical techniques for tracking and locating the free surface (or fluid–fluid interface). They belong to the class of Eulerian methods which are characterized by a mesh that is either stationary or is moving in a certain prescribed manner to accommodate the evolving shape of the interface. As such, VOF methods are advection schemes capturing the shape and position of the interface, but are not standalone flow solving algorithms. The Navier–Stokes equations describing the motion of the flow have to be solved separately.
== History ==
The volume of fluid method is based on earlier Marker-and-cell (MAC) methods developed at Los Alamos National Laboratory. MAC used Lagrangian marker particles to track the distribution of fluid in a fixed Eulerian grid. The use of marker particles was computationally expensive because it required many marker particles per grid cell, to reduce numerical noise when discrete marker particles move across grid cells. The original idea of the VOF method was to replace marker particles with a single scalar variable per grid cell representing the volume fraction of fluid in it. Thereby, the volume of fluid is governed by an advection equation. This idea arose from studies of two-phase mixture (water and steam) problems where it was customary to use a volume of steam variable. The VOF approach was first demonstrated in a 1975 publication “Methods for Calculating Multi-Dimensional, Transient Free Surface Flows Past Bodies” by Nichols and Hirt. This publication described how to advect the fluid fraction with a Donor-Acceptor scheme, how to estimate the orientation and position of the free surface inside surface cells, and how to prescribe appropriate boundary conditions (continuity and zero shear stress) at the free surface. This approach was much simpler than other techniques tracking the surface of fluid, yet more versatile as it could model the coalescence and breakup of fluid regions. In 1976, Noh & Woodward presented the Simple Line Interface Calculation (SLIC), a technique to approximate fluid interfaces based on volume fractions, designed for directional-split advection schemes of volume fractions. SLIC could also handle an arbitrary number of immiscible fluid phases per grid cell. Thereby, SLIC was well suited to the VOF approach, although the two methods were initially independent and remained separate until the 1990s. The term “Volume of Fluid method” and its acronym “VOF” were coined in the 1980 Los Alamos Scientific Laboratory report, “SOLA-VOF: A Solution Algorithm for Transient Fluid Flow with Multiple Free Boundaries,” by Nichols, Hirt and Hotchkiss and in the journal publication “Volume of Fluid (VOF) Method for the Dynamics of Free Boundaries” by Hirt and Nichols in 1981. These two publications provided more details about the specific procedures used to approximate the position of the free surface (locally represented by an inclined line in surface cells) and apply the free surface boundary conditions on it. Since the VOF method surpassed MAC by lowering computer storage requirements, it quickly became popular. Early applications of the SOLA-VOF program developed at Los Alamos include light-water-reactor safety studies. A variant of the SOLA-VOF code was also adopted by NASA. In 1982, Youngs developed the Piecewise-Linear Interface Calculation (PLIC) scheme, which improved the accuracy of interface reconstruction over the SLIC and early VOF methods.
== Overview ==
The method is based on the idea of a so-called fraction function C. It is a scalar function, defined as the integral of a fluid's characteristic function in the control volume, namely the volume of a computational grid cell. The volume fraction of each fluid is tracked through every cell in the computational grid, while all fluids share a single set of momentum equations, i.e. one for each spatial direction. From a cell-volume averaged perspective, when a cell is empty of the tracked phase, the value of C is zero; when the cell is full of tracked phase, C = 1; and when the cell contains an interface between the tracked and non-tracked volumes, 0 < C < 1. From a perspective of a local point that contains no volume, C is a discontinuous function insofar as its value jumps from 0 to 1 when the local point moves from the non-tracked to the tracked phase. The normal direction of the fluid interface is found where the value of C changes most rapidly. With this method, the free-surface is not defined sharply; instead it is distributed over the height of a cell. Thus, in order to attain accurate results, local grid refinements have to be done. The refinement criterion is simple: cells with 0 < C < 1 have to be refined. A method for this, known as the marker and micro-cell method, has been developed by Raad and his colleagues in 1997.
The evolution of the m-th fluid in a system of n fluids is governed by the transport equation (actually the same equation that has to be fulfilled by the level-set method distance function ϕ):
{\displaystyle {\frac {\partial C_{m}}{\partial t}}+\mathbf {v} \cdot \nabla C_{m}=0,}
with the following constraint
{\displaystyle \sum _{m=1}^{n}C_{m}=1},
i.e., the volume of the fluids is constant. For each cell, properties such as density ρ are calculated by a volume fraction average of all fluids in the cell
{\displaystyle \rho =\sum _{m=1}^{n}\rho _{m}C_{m}.}
These properties are then used to solve a single momentum equation through the domain, and the attained velocity field is shared among the fluids.
The VOF method is computationally friendly, as it introduces only one additional equation and thus requires minimal storage. The method is also characterized by its capability of dealing with highly non-linear problems in which the free-surface experiences sharp topological changes. By using the VOF method, one also evades the use of complicated mesh deformation algorithms used by surface-tracking methods. The major difficulty associated with the method is the smearing of the free-surface. This problem originates from excessive diffusion of the transport equation.
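A minimal one-dimensional illustration of this smearing, assuming a constant advection velocity and a plain first-order upwind (donor-cell) update rather than any of the dedicated VOF schemes discussed below, is sketched here:

import numpy as np

# Advect a sharp volume-fraction profile C with first-order upwind at constant u > 0.
# The progressive spreading of the initially sharp jump is the numerical diffusion
# that dedicated VOF advection schemes are designed to avoid.
nx, u, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / u
C = np.where(np.arange(nx) * dx < 0.3, 1.0, 0.0)   # sharp interface at x = 0.3

for _ in range(100):
    flux = u * C                                    # donor-cell face flux for u > 0
    C[1:] -= dt / dx * (flux[1:] - flux[:-1])       # cell 0 acts as an inflow boundary

# After 100 steps the jump from 1 to 0 is spread over many cells.
print("cells with 0.01 < C < 0.99:", int(np.sum((C > 0.01) & (C < 0.99))))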
== Discretization ==
To avoid smearing of the free-surface, the transport equation has to be solved without excessive diffusion. Thus, the success of a VOF method depends heavily on the scheme used for the advection of the C field. Any chosen scheme needs to cope with the fact that C is discontinuous, unlike e.g. the distance function ϕ used in the Level-Set method.
Whereas a first order upwind scheme smears the interface, a downwind scheme of the same order will cause a false distribution problem which will cause erratic behavior in case the flow is not oriented along a grid line. As these lower-order schemes are inaccurate, and higher-order schemes are unstable and induce oscillations, it has been necessary to develop schemes which keep the free-surface sharp while also producing monotonic profiles for C. Over the years, a multitude of different methods for treating the advection have been developed. In the original VOF-article by Hirt, a donor-acceptor scheme was employed. This scheme formed a basis for the compressive differencing schemes.
The different methods for treating VOF can be roughly divided into three categories, namely the donor-acceptor formulation, higher order differencing schemes and line techniques.
=== The Donor-Acceptor Schemes ===
The donor-acceptor scheme is based on two fundamental criteria, namely the boundedness criterion and the availability criterion. The first one states that the value of C has to be bounded between zero and one. The latter criterion ensures that the amount of fluid convected over a face during a time step is less than or equal to the amount available in the donor cell, i.e., the cell from which the fluid is flowing to the acceptor cell. In his original work, Hirt treated this with a blended scheme consisting of controlled downwinding and upwind differencing.
=== Higher Order Differencing Schemes ===
In the higher order differencing schemes, as the name suggests, the convective transport equation is discretized with higher order or blended differencing schemes. Such methods include the Compressive Interface Capturing Scheme for Arbitrary Meshes (CICSAM) and High Resolution Interface Capturing (HRIC) scheme, which are both based on the Normalized Variable Diagram (NVD) by Leonard.
=== Geometrical Reconstruction Techniques ===
Line techniques circumvent the problems associated with the discretization of the transport equation by not tracking the interface in a cell explicitly. Instead, the fluid distribution in a cell containing an interface is obtained by using the volume fraction distribution of neighbouring cells. The Simple Line Interface Calculation (SLIC) by Noh and Woodward from 1976 uses a simple geometry to reconstruct the interface. In each cell the interface is approximated as a line parallel to one of the coordinate axes and assumes different fluid configurations for the horizontal and vertical movements respectively. A widely used technique today is the Piecewise Linear Interface Calculation by Youngs. PLIC is based on the idea that the interface can be represented as a line in R² or a plane in R³; in the latter case we may describe the interface by:
{\displaystyle n_{x}x+n_{y}y+n_{z}z=\alpha ,}
where n is a vector normal to the interface. Components of the normal are found e.g. by using the finite difference method or its combination with least squares optimization. The free term α is then found (analytically or by approximation) by enforcing mass conservation within the computational cell. Once the description of the interface is established, the advection equation of C is solved using geometrical techniques such as finding the flux of C between grid cells, or advecting the endpoints of the interface using discrete values of fluid velocity.
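As an illustrative sketch of the first step only (not Youngs' complete algorithm), the interface normal in each cell can be estimated from finite differences of the volume-fraction field; choosing the constant α so that the cut cell encloses exactly the right volume is the second, geometric step and is omitted here. Periodic boundaries are assumed purely for brevity.

import numpy as np

def interface_normals(C, dx, dy):
    # Central-difference gradient of the volume fraction on a uniform 2D grid
    Cx = (np.roll(C, -1, axis=0) - np.roll(C, 1, axis=0)) / (2 * dx)
    Cy = (np.roll(C, -1, axis=1) - np.roll(C, 1, axis=1)) / (2 * dy)
    mag = np.sqrt(Cx**2 + Cy**2)
    mag[mag == 0] = 1.0              # avoid division by zero in cells far from the interface
    # normal points from the fluid (C = 1) towards the empty region (C = 0)
    return -Cx / mag, -Cy / mag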
== Interface capture issues ==
In two-phase flows in which the properties of the two phases are vastly different, errors in the computation of the surface tension force at the interface cause Front-Capturing methods such as Volume of Fluid (VOF) and Level-Set method (LS) to develop interfacial spurious currents. To better solve such flows, special treatment is required to reduce such spurious currents. A few studies have looked at improving interface tracking by combining Level-set method and Volume of fluid methods while a few others have looked at improving the numerical solving algorithm by adding smoothening loops or improving property averaging techniques.
== See also ==
Immersed boundary method
Stochastic Eulerian Lagrangian methods
Level-set method
Sloshing
== References == | Wikipedia/Volume_of_fluid_method |
A finite difference is a mathematical expression of the form f(x + b) − f(x + a). Finite differences (or the associated difference quotients) are often used as approximations of derivatives, such as in numerical differentiation.
The difference operator, commonly denoted Δ, is the operator that maps a function f to the function Δ[f] defined by
{\displaystyle \Delta [f](x)=f(x+1)-f(x).}
A difference equation is a functional equation that involves the finite difference operator in the same way as a differential equation involves derivatives. There are many similarities between difference equations and differential equations. Certain recurrence relations can be written as difference equations by replacing iteration notation with finite differences.
In numerical analysis, finite differences are widely used for approximating derivatives, and the term "finite difference" is often used as an abbreviation of "finite difference approximation of derivatives".
Finite differences were introduced by Brook Taylor in 1715 and have also been studied as abstract self-standing mathematical objects in works by George Boole (1860), L. M. Milne-Thomson (1933), and Károly Jordan (1939). Finite differences trace their origins back to one of Jost Bürgi's algorithms (c. 1592) and work by others including Isaac Newton. The formal calculus of finite differences can be viewed as an alternative to the calculus of infinitesimals.
== Basic types ==
Three basic types are commonly considered: forward, backward, and central finite differences.
A forward difference, denoted {\displaystyle \Delta _{h}[f]}, of a function f is a function defined as
{\displaystyle \Delta _{h}[f](x)=f(x+h)-f(x).}
Depending on the application, the spacing h may be variable or constant. When omitted, h is taken to be 1; that is,
{\displaystyle \Delta [f](x)=\Delta _{1}[f](x)=f(x+1)-f(x).}
A backward difference uses the function values at x and x − h, instead of the values at x + h and x:
{\displaystyle \nabla _{h}[f](x)=f(x)-f(x-h)=\Delta _{h}[f](x-h).}
Finally, the central difference is given by
{\displaystyle \delta _{h}[f](x)=f(x+{\tfrac {h}{2}})-f(x-{\tfrac {h}{2}})=\Delta _{h/2}[f](x)+\nabla _{h/2}[f](x).}
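Expressed as code, the three basic differences are one-liners. The following Python sketch mirrors the definitions above, with h defaulting to 1 as in the un-subscripted operator; it is only an illustration of the definitions, not a numerical-differentiation library.

def forward_difference(f, x, h=1.0):
    return f(x + h) - f(x)

def backward_difference(f, x, h=1.0):
    return f(x) - f(x - h)

def central_difference(f, x, h=1.0):
    # uses the half-step offsets x + h/2 and x - h/2, as in the definition above
    return f(x + h / 2) - f(x - h / 2)

# e.g. forward_difference(lambda t: t**2, 3.0, 0.5) returns 3.25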
== Relation with derivatives ==
The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.
The derivative of a function f at a point x is defined by the limit
{\displaystyle f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}.}
If h has a fixed (non-zero) value instead of approaching zero, then the right-hand side of the above equation would be written
{\displaystyle {\frac {f(x+h)-f(x)}{h}}={\frac {\Delta _{h}[f](x)}{h}}.}
Hence, the forward difference divided by h approximates the derivative when h is small. The error in this approximation can be derived from Taylor's theorem. Assuming that f is twice differentiable, we have
{\displaystyle {\frac {\Delta _{h}[f](x)}{h}}-f'(x)=O(h)\to 0\quad {\text{as }}h\to 0.}
The same formula holds for the backward difference:
{\displaystyle {\frac {\nabla _{h}[f](x)}{h}}-f'(x)=O(h)\to 0\quad {\text{as }}h\to 0.}
However, the central (also called centered) difference yields a more accurate approximation. If f is three times differentiable,
{\displaystyle {\frac {\delta _{h}[f](x)}{h}}-f'(x)=O\left(h^{2}\right).}
The main problem with the central difference method, however, is that oscillating functions can yield zero derivative. If f(nh) = 1 for n odd, and f(nh) = 2 for n even, then f′(nh) = 0 if it is calculated with the central difference scheme. This is particularly troublesome if the domain of f is discrete. See also Symmetric derivative.
Authors for whom finite differences mean finite difference approximations define the forward/backward/central differences as the quotients given in this section (instead of employing the definitions given in the previous section).
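The difference in accuracy between the one-sided and central quotients is easy to see numerically. The following sketch compares both against the exact derivative of sin at x = 1 for a few step sizes; reducing h by a factor of ten reduces the forward-difference error by roughly ten and the central-difference error by roughly a hundred.

import math

f, x, exact = math.sin, 1.0, math.cos(1.0)
for h in (0.1, 0.01, 0.001):
    fwd = (f(x + h) - f(x)) / h                  # first-order accurate
    cen = (f(x + h / 2) - f(x - h / 2)) / h      # second-order accurate
    print(h, abs(fwd - exact), abs(cen - exact))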
== Higher-order differences ==
In an analogous way, one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula for f′(x + h/2) and f′(x − h/2) and applying a central difference formula for the derivative of f′ at x, we obtain the central difference approximation of the second derivative of f:
Second-order central
{\displaystyle f''(x)\approx {\frac {\delta _{h}^{2}[f](x)}{h^{2}}}={\frac {{\frac {f(x+h)-f(x)}{h}}-{\frac {f(x)-f(x-h)}{h}}}{h}}={\frac {f(x+h)-2f(x)+f(x-h)}{h^{2}}}.}
Similarly we can apply other differencing formulas in a recursive manner.
Second-order forward
{\displaystyle f''(x)\approx {\frac {\Delta _{h}^{2}[f](x)}{h^{2}}}={\frac {{\frac {f(x+2h)-f(x+h)}{h}}-{\frac {f(x+h)-f(x)}{h}}}{h}}={\frac {f(x+2h)-2f(x+h)+f(x)}{h^{2}}}.}
Second-order backward
{\displaystyle f''(x)\approx {\frac {\nabla _{h}^{2}[f](x)}{h^{2}}}={\frac {{\frac {f(x)-f(x-h)}{h}}-{\frac {f(x-h)-f(x-2h)}{h}}}{h}}={\frac {f(x)-2f(x-h)+f(x-2h)}{h^{2}}}.}
More generally, the n-th order forward, backward, and central differences are given by, respectively,
Forward
{\displaystyle \Delta _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{n-i}{\binom {n}{i}}f{\bigl (}x+ih{\bigr )},}
Backward
{\displaystyle \nabla _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{i}{\binom {n}{i}}f(x-ih),}
Central
{\displaystyle \delta _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{i}{\binom {n}{i}}f\left(x+\left({\frac {n}{2}}-i\right)h\right).}
These equations use binomial coefficients after the summation sign, shown as {\textstyle {\binom {n}{i}}} ("n choose i"). Each row of Pascal's triangle provides the coefficient for each value of i.
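The forward formula translates directly into code. The sketch below is a straightforward implementation of the sum above, using Python's math.comb for the binomial coefficients; the backward and central variants differ only in the sign pattern and the sample points.

from math import comb

def nth_forward_difference(f, x, n, h=1.0):
    # Delta_h^n [f](x) = sum_{i=0}^{n} (-1)^(n-i) * C(n, i) * f(x + i*h)
    return sum((-1) ** (n - i) * comb(n, i) * f(x + i * h) for i in range(n + 1))

# e.g. the second forward difference of t^2 with h = 1 is the constant 2
print(nth_forward_difference(lambda t: t**2, 0.0, 2, 1.0))   # 2.0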
Note that the central difference will, for odd n, have h multiplied by non-integers. This is often a problem because it amounts to changing the interval of discretization. The problem may be remedied by substituting the average of {\textstyle \delta ^{n}[f](x-{\tfrac {h}{2}})} and {\textstyle \delta ^{n}[f](x+{\tfrac {h}{2}})}.
Forward differences applied to a sequence are sometimes called the binomial transform of the sequence, and have a number of interesting combinatorial properties. Forward differences may be evaluated using the Nörlund–Rice integral. The integral representation for these types of series is interesting, because the integral can often be evaluated using asymptotic expansion or saddle-point techniques; by contrast, the forward difference series can be extremely hard to evaluate numerically, because the binomial coefficients grow rapidly for large n.
The relationship of these higher-order differences with the respective derivatives is straightforward,
{\displaystyle {\frac {d^{n}f}{dx^{n}}}(x)={\frac {\Delta _{h}^{n}[f](x)}{h^{n}}}+O(h)={\frac {\nabla _{h}^{n}[f](x)}{h^{n}}}+O(h)={\frac {\delta _{h}^{n}[f](x)}{h^{n}}}+O\left(h^{2}\right).}
Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of order h. However, the combination
{\displaystyle {\frac {\Delta _{h}[f](x)-{\frac {1}{2}}\Delta _{h}^{2}[f](x)}{h}}=-{\frac {f(x+2h)-4f(x+h)+3f(x)}{2h}}}
approximates f′(x) up to a term of order h2. This can be proven by expanding the above expression in Taylor series, or by using the calculus of finite differences, explained below.
If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences.
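A minimal sketch (not part of the original article) comparing the plain forward quotient with the one-sided second-order combination above; the test function sin(x) and the step sizes are illustrative:

```python
# Sketch: the combination Δ_h f - (1/2) Δ_h² f gives an O(h²) one-sided
# approximation to f'(x), versus O(h) for the plain forward quotient.
import math

def fwd1(f, x, h):          # first-order forward quotient
    return (f(x + h) - f(x)) / h

def fwd2(f, x, h):          # -(f(x+2h) - 4 f(x+h) + 3 f(x)) / (2h), second order
    return -(f(x + 2*h) - 4*f(x + h) + 3*f(x)) / (2*h)

x = 1.0
for h in (0.1, 0.05, 0.025):
    e1 = abs(fwd1(math.sin, x, h) - math.cos(x))
    e2 = abs(fwd2(math.sin, x, h) - math.cos(x))
    print(f"h={h:<6} first-order error={e1:.2e}  second-order error={e2:.2e}")
# Halving h roughly halves the first error and quarters the second one.
```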
== Polynomials ==
For a given polynomial P(x) of degree n ≥ 1, with real numbers a ≠ 0 and b, and lower-order terms (if any) marked as l.o.t.:
{\displaystyle P(x)=ax^{n}+bx^{n-1}+l.o.t.}
After n pairwise differences, the following result can be achieved, where h ≠ 0 is a real number marking the arithmetic difference:
{\displaystyle \Delta _{h}^{n}[P](x)=ah^{n}n!}
Only the coefficient of the highest-order term remains. As this result is constant with respect to x, any further pairwise differences will have the value 0.
=== Inductive proof ===
==== Base case ====
Let Q(x) be a polynomial of degree 1:
{\displaystyle \Delta _{h}[Q](x)=Q(x+h)-Q(x)=[a(x+h)+b]-[ax+b]=ah=ah^{1}1!}
This proves it for the base case.
==== Inductive step ====
Let R(x) be a polynomial of degree m − 1 where m ≥ 2 and the coefficient of the highest-order term be a ≠ 0. Assuming the following holds true for all polynomials of degree m − 1:
{\displaystyle \Delta _{h}^{m-1}[R](x)=ah^{m-1}(m-1)!}
Let S(x) be a polynomial of degree m. With one pairwise difference:
{\displaystyle \Delta _{h}[S](x)=[a(x+h)^{m}+b(x+h)^{m-1}+{\text{l.o.t.}}]-[ax^{m}+bx^{m-1}+{\text{l.o.t.}}]=ahmx^{m-1}+{\text{l.o.t.}}=T(x)}
As ahm ≠ 0, this results in a polynomial T(x) of degree m − 1, with ahm as the coefficient of the highest-order term. Given the assumption above and m − 1 pairwise differences (resulting in a total of m pairwise differences for S(x)), it can be found that:
{\displaystyle \Delta _{h}^{m-1}[T](x)=ahm\cdot h^{m-1}(m-1)!=ah^{m}m!}
This completes the proof.
=== Application ===
This identity can be used to find the lowest-degree polynomial that intercepts a number of points (x, y) where the difference on the x-axis from one point to the next is a constant h ≠ 0. For example, given the following points:
We can use a differences table, where for all cells to the right of the first y, the following relation to the cells in the column immediately to the left exists for a cell (a + 1, b + 1), with the top-leftmost cell being at coordinate (0, 0):
{\displaystyle (a+1,b+1)=(a,b+1)-(a,b)}
To find the first term, the following table can be used:
This arrives at a constant 648. The arithmetic difference is h = 3, as established above. Given the number of pairwise differences needed to reach the constant, it can be surmised this is a polynomial of degree 3. Thus, using the identity above:
{\displaystyle 648=a\cdot 3^{3}\cdot 3!=a\cdot 27\cdot 6=a\cdot 162}
Solving for a, it can be found to have the value 4. Thus, the first term of the polynomial is 4x3.
Then, subtracting out the first term, which lowers the polynomial's degree, and finding the finite difference again:
Here, the constant is achieved after only two pairwise differences, thus the following result:
{\displaystyle -306=a\cdot 3^{2}\cdot 2!=a\cdot 18}
Solving for a, which is −17, the polynomial's second term is −17x2.
Moving on to the next term, by subtracting out the second term:
Thus the constant is achieved after only one pairwise difference:
{\displaystyle 108=a\cdot 3^{1}\cdot 1!=a\cdot 3}
It can be found that a = 36 and thus the third term of the polynomial is 36x. Subtracting out the third term:
Without any pairwise differences, it is found that the 4th and final term of the polynomial is the constant −19. Thus, the lowest-degree polynomial intercepting all the points in the first table is found:
{\displaystyle 4x^{3}-17x^{2}+36x-19}
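A minimal sketch (not part of the original article) reproducing the worked example numerically. The original data table is not shown in the text, so the sample x-values below are an assumption; the y-values are generated from the final polynomial with step h = 3:

```python
# Sketch: building a difference table and reading off the leading constant.
def diff_column(ys):
    return [b - a for a, b in zip(ys, ys[1:])]

P = lambda x: 4*x**3 - 17*x**2 + 36*x - 19
h, xs = 3, [1, 4, 7, 10]                 # assumed sample points, step h = 3
ys = [P(x) for x in xs]

col = ys
while len(col) > 1:
    col = diff_column(col)
    print(col)
# The last (constant) column is [648] = a*h^3*3! with a = 4; repeating the
# procedure on the reduced polynomials gives -306 and 108, as in the text.
```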
== Arbitrarily sized kernels ==
Using linear algebra one can construct finite difference approximations which utilize an arbitrary number of points to the left and a (possibly different) number of points to the right of the evaluation point, for any order derivative. This involves solving a linear system such that the Taylor expansion of the sum of those points around the evaluation point best approximates the Taylor expansion of the desired derivative. Such formulas can be represented graphically on a hexagonal or diamond-shaped grid.
This is useful for differentiating a function on a grid, where, as one approaches the edge of the grid, one must sample fewer and fewer points on one side.
Given an arbitrary stencil and a desired derivative order, finite difference approximations for non-standard (and even non-integer) stencils may be constructed.
=== Properties ===
For all positive k and n
{\displaystyle \Delta _{kh}^{n}(f,x)=\sum \limits _{i_{1}=0}^{k-1}\sum \limits _{i_{2}=0}^{k-1}\cdots \sum \limits _{i_{n}=0}^{k-1}\Delta _{h}^{n}\left(f,x+i_{1}h+i_{2}h+\cdots +i_{n}h\right).}
Leibniz rule:
{\displaystyle \Delta _{h}^{n}(fg,x)=\sum \limits _{k=0}^{n}{\binom {n}{k}}\Delta _{h}^{k}(f,x)\Delta _{h}^{n-k}(g,x+kh).}
== In differential equations ==
An important application of finite differences is in numerical analysis, especially in numerical differential equations, which aim at the numerical solution of ordinary and partial differential equations. The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them. The resulting methods are called finite difference methods.
Common applications of the finite difference method are in computational science and engineering disciplines, such as thermal engineering, fluid mechanics, etc.
== Newton's series ==
The Newton series consists of the terms of the Newton forward difference equation, named after Isaac Newton; in essence, it is the Gregory–Newton interpolation formula (named after Isaac Newton and James Gregory), first published in his Principia Mathematica in 1687, namely the discrete analog of the continuous Taylor expansion,
{\displaystyle f(x)=\sum _{k=0}^{\infty }{\frac {\Delta ^{k}[f](a)}{k!}}\,(x-a)_{k}=\sum _{k=0}^{\infty }{\binom {x-a}{k}}\,\Delta ^{k}[f](a),}
which holds for any polynomial function f and for many (but not all) analytic functions. (It does not hold when f is of exponential type {\displaystyle \pi }. This is easily seen, as the sine function vanishes at integer multiples of {\displaystyle \pi }; the corresponding Newton series is identically zero, as all finite differences are zero in this case. Yet clearly, the sine function is not zero.) Here, the expression
{\displaystyle {\binom {x}{k}}={\frac {(x)_{k}}{k!}}}
is the binomial coefficient, and
{\displaystyle (x)_{k}=x(x-1)(x-2)\cdots (x-k+1)}
is the "falling factorial" or "lower factorial", while the empty product (x)0 is defined to be 1. In this particular case, there is an assumption of unit steps for the changes in the values of x, h = 1 of the generalization below.
Note the formal correspondence of this result to Taylor's theorem. Historically, this, as well as the Chu–Vandermonde identity,
{\displaystyle (x+y)_{n}=\sum _{k=0}^{n}{\binom {n}{k}}(x)_{n-k}\,(y)_{k},}
(following from it, and corresponding to the binomial theorem), are included in the observations that matured to the system of umbral calculus.
Newton series expansions can be superior to Taylor series expansions when applied to discrete quantities like quantum spins (see Holstein–Primakoff transformation), bosonic operator functions or discrete counting statistics.
To illustrate how one may use Newton's formula in actual practice, consider the first few terms of doubling the Fibonacci sequence f = 2, 2, 4, ... One can find a polynomial that reproduces these values, by first computing a difference table, and then substituting the differences that correspond to x0 (underlined) into the formula as follows,
{\displaystyle {\begin{matrix}{\begin{array}{|c||c|c|c|}\hline x&f=\Delta ^{0}&\Delta ^{1}&\Delta ^{2}\\\hline 1&{\underline {2}}&&\\&&{\underline {0}}&\\2&2&&{\underline {2}}\\&&2&\\3&4&&\\\hline \end{array}}&\quad {\begin{aligned}f(x)&=\Delta ^{0}\cdot 1+\Delta ^{1}\cdot {\dfrac {(x-x_{0})_{1}}{1!}}+\Delta ^{2}\cdot {\dfrac {(x-x_{0})_{2}}{2!}}\quad (x_{0}=1)\\\\&=2\cdot 1+0\cdot {\dfrac {x-1}{1}}+2\cdot {\dfrac {(x-1)(x-2)}{2}}\\\\&=2+(x-1)(x-2)\\\end{aligned}}\end{matrix}}}
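A minimal sketch (not part of the original article) of Newton's forward-difference interpolation with unit steps, applied to the tabulated values f(1)=2, f(2)=2, f(3)=4 used above; the helper function name is illustrative:

```python
# Sketch: build the difference table, then evaluate the Newton forward series.
def newton_forward(xs, ys):
    """Interpolant from forward differences; unit step between the xs assumed."""
    n = len(ys)
    table = [list(ys)]
    for _ in range(1, n):
        table.append([b - a for a, b in zip(table[-1], table[-1][1:])])
    diffs = [col[0] for col in table]       # Δ^k f(x0), k = 0..n-1
    x0 = xs[0]
    def p(x):
        total, falling, fact = 0.0, 1.0, 1
        for k, d in enumerate(diffs):
            if k > 0:
                falling *= (x - x0 - (k - 1))   # falling factorial (x - x0)_k
                fact *= k
            total += d * falling / fact
        return total
    return p

p = newton_forward([1, 2, 3], [2, 2, 4])
print([p(x) for x in (1, 2, 3)])    # [2.0, 2.0, 4.0]; p(x) = 2 + (x-1)(x-2)
```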
For the case of nonuniform steps in the values of x, Newton computes the divided differences,
{\displaystyle \Delta _{j,0}=y_{j},\qquad \Delta _{j,k}={\frac {\Delta _{j+1,k-1}-\Delta _{j,k-1}}{x_{j+k}-x_{j}}}\quad \ni \quad \left\{k>0,\;j\leq \max \left(j\right)-k\right\},\qquad \Delta 0_{k}=\Delta _{0,k}}
the series of products,
{\displaystyle {P_{0}}=1,\quad \quad P_{k+1}=P_{k}\cdot \left(\xi -x_{k}\right),}
and the resulting polynomial is the scalar product,
{\displaystyle f(\xi )=\Delta 0\cdot P\left(\xi \right).}
In analysis with p-adic numbers, Mahler's theorem states that the assumption that f is a polynomial function can be weakened all the way to the assumption that f is merely continuous.
Carlson's theorem provides necessary and sufficient conditions for a Newton series to be unique, if it exists. However, a Newton series does not, in general, exist.
The Newton series, together with the Stirling series and the Selberg series, is a special case of the general difference series, all of which are defined in terms of suitably scaled forward differences.
In a compressed and slightly more general form, with equidistant nodes, the formula reads
{\displaystyle f(x)=\sum _{k=0}{\binom {\frac {x-a}{h}}{k}}\sum _{j=0}^{k}(-1)^{k-j}{\binom {k}{j}}f(a+jh).}
== Calculus of finite differences ==
The forward difference can be considered as an operator, called the difference operator, which maps the function f to Δh[f]. This operator amounts to
{\displaystyle \Delta _{h}=\operatorname {T} _{h}-\operatorname {I} \ ,}
where Th is the shift operator with step h, defined by Th[f](x) = f(x + h), and I is the identity operator.
The finite difference of higher orders can be defined in a recursive manner as Δ_h^n ≡ Δ_h(Δ_h^{n−1}). Another equivalent definition is Δ_h^n ≡ [T_h − I]^n.
The difference operator Δ_h is a linear operator; as such, it satisfies Δ_h[αf + βg](x) = αΔ_h[f](x) + βΔ_h[g](x).
It also satisfies a special Leibniz rule:
{\displaystyle \ \operatorname {\Delta } _{h}{\bigl (}\ f(x)\ g(x)\ {\bigr )}\ =\ {\bigl (}\ \operatorname {\Delta } _{h}f(x)\ {\bigr )}\ g(x+h)\ +\ f(x)\ {\bigl (}\ \operatorname {\Delta } _{h}g(x)\ {\bigr )}~.}
Similar Leibniz rules hold for the backward and central differences.
Formally applying the Taylor series with respect to h yields the operator equation
{\displaystyle \operatorname {\Delta } _{h}=h\operatorname {D} +{\frac {1}{2!}}h^{2}\operatorname {D} ^{2}+{\frac {1}{3!}}h^{3}\operatorname {D} ^{3}+\cdots =e^{h\operatorname {D} }-\operatorname {I} \ ,}
where D denotes the conventional, continuous derivative operator, mapping f to its derivative f′. The expansion is valid when both sides act on analytic functions, for sufficiently small h; in the special case that the series of derivatives terminates (when the function operated on is a finite polynomial), the expression is exact for all finite step sizes h. Thus T_h = e^{hD}, and formally inverting the exponential yields
{\displaystyle h\operatorname {D} =\ln(1+\Delta _{h})=\Delta _{h}-{\tfrac {1}{2}}\,\Delta _{h}^{2}+{\tfrac {1}{3}}\,\Delta _{h}^{3}-\cdots ~.}
This formula holds in the sense that both operators give the same result when applied to a polynomial.
Even for analytic functions, the series on the right is not guaranteed to converge; it may be an asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation to f ′(x) mentioned at the end of the section § Higher-order differences.
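A minimal sketch (not part of the original article) illustrating this: for a polynomial the series terminates, so truncating it at the degree of the polynomial reproduces the derivative exactly; the test function is illustrative:

```python
# Sketch: approximate f'(x) from hD = Δ_h - Δ_h²/2 + Δ_h³/3 - ...
def delta(g, h):
    return lambda x: g(x + h) - g(x)

def derivative_via_series(f, x, h, terms):
    total, g = 0.0, f
    for k in range(1, terms + 1):
        g = delta(g, h)                    # g is now the k-th forward difference of f
        total += (-1) ** (k + 1) * g(x) / k
    return total / h

f = lambda x: x**4 - 3*x**2 + 1            # f'(2) = 20
for terms in (1, 2, 3, 4, 5):
    print(terms, derivative_via_series(f, 2.0, 0.5, terms))
# Exact (up to rounding) once terms >= 4, the degree of f; extra terms add 0.
```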
The analogous formulas for the backward and central difference operators are
{\displaystyle h\operatorname {D} =-\ln(1-\nabla _{h})\quad {\text{ and }}\quad h\operatorname {D} =2\operatorname {arsinh} \left({\tfrac {1}{2}}\,\delta _{h}\right)~.}
The calculus of finite differences is related to the umbral calculus of combinatorics. This remarkably systematic correspondence is due to the identity of the commutators of the umbral quantities to their continuum analogs (h → 0 limits),
{\displaystyle \left[{\frac {\Delta _{h}}{h}},x\,\operatorname {T} _{h}^{-1}\right]=[\operatorname {D} ,x]=1.}
A large number of formal differential relations of standard calculus involving functions f(x) thus systematically map to umbral finite-difference analogs involving f(x T_h^{−1}).
For instance, the umbral analog of a monomial xn is a generalization of the above falling factorial (Pochhammer k-symbol),
{\displaystyle \ (x)_{n}\equiv \left(\ x\ \operatorname {T} _{h}^{-1}\right)^{n}=x\left(x-h\right)\left(x-2h\right)\cdots {\bigl (}x-\left(n-1\right)\ h{\bigr )}\ ,}
so that
{\displaystyle \ {\frac {\Delta _{h}}{h}}(x)_{n}=n\ (x)_{n-1}\ ,}
hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary function f(x) in such symbols), and so on.
For example, the umbral sine is
{\displaystyle \ \sin \left(x\ \operatorname {T} _{h}^{-1}\right)=x-{\frac {(x)_{3}}{3!}}+{\frac {(x)_{5}}{5!}}-{\frac {(x)_{7}}{7!}}+\cdots \ }
As in the continuum limit, the eigenfunction of Δh/h also happens to be an exponential,
{\displaystyle \ {\frac {\Delta _{h}}{h}}(1+\lambda h)^{\frac {x}{h}}={\frac {\Delta _{h}}{h}}e^{\ln(1+\lambda h){\frac {x}{h}}}=\lambda e^{\ln(1+\lambda h){\frac {x}{h}}}\ ,}
and hence Fourier sums of continuum functions are readily, faithfully mapped to umbral Fourier sums, i.e., involving the same Fourier coefficients multiplying these umbral basis exponentials. This umbral exponential thus amounts to the exponential generating function of the Pochhammer symbols.
Thus, for instance, the Dirac delta function maps to its umbral correspondent, the cardinal sine function
{\displaystyle \ \delta (x)\mapsto {\frac {\sin \left[{\frac {\pi }{2}}\left(1+{\frac {x}{h}}\right)\right]}{\pi (x+h)}}\ ,}
and so forth. Difference equations can often be solved with techniques very similar to those for solving differential equations.
The inverse operator of the forward difference operator (the umbral analog of the integral) is the indefinite sum or antidifference operator.
=== Rules for calculus of finite difference operators ===
Analogous to rules for finding the derivative, we have:
Constant rule: If c is a constant, then
{\displaystyle \ \Delta c=0\ }
Linearity: If a and b are constants,
{\displaystyle \ \Delta (a\ f+b\ g)=a\ \Delta f+b\ \Delta g\ }
All of the above rules apply equally well to any difference operator as to Δ, including δ and ∇.
Product rule:
{\displaystyle {\begin{aligned}\ \Delta (fg)&=f\,\Delta g+g\,\Delta f+\Delta f\,\Delta g\\[4pt]\nabla (fg)&=f\,\nabla g+g\,\nabla f-\nabla f\,\nabla g\ \end{aligned}}}
Quotient rule:
{\displaystyle \ \nabla \left({\frac {f}{g}}\right)=\left.\left(\det {\begin{bmatrix}\nabla f&\nabla g\\f&g\end{bmatrix}}\right)\right/\left(g\cdot \det {\begin{bmatrix}g&\nabla g\\1&1\end{bmatrix}}\right)}
or
{\displaystyle \nabla \left({\frac {f}{g}}\right)={\frac {g\,\nabla f-f\,\nabla g}{g\cdot (g-\nabla g)}}\ }
Summation rules:
{\displaystyle {\begin{aligned}\ \sum _{n=a}^{b}\Delta f(n)&=f(b+1)-f(a)\\\sum _{n=a}^{b}\nabla f(n)&=f(b)-f(a-1)\ \end{aligned}}}
See references.
== Generalizations ==
A generalized finite difference is usually defined as
{\displaystyle \Delta _{h}^{\mu }[f](x)=\sum _{k=0}^{N}\mu _{k}f(x+kh),}
where μ = (μ0, …, μN) is its coefficient vector. An infinite difference is a further generalization, where the finite sum above is replaced by an infinite series. Another way of generalization is making the coefficients μk depend on the point x: μk = μk(x), thus considering a weighted finite difference. One may also make the step h depend on the point x: h = h(x). Such generalizations are useful for constructing different moduli of continuity.
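A minimal sketch (not part of the original article) of evaluating such a generalized difference for a given coefficient vector μ; the choice of μ and the test function below are illustrative:

```python
# Sketch: Δ_h^μ[f](x) = Σ_k μ_k f(x + k h) for a coefficient vector μ.
def generalized_difference(f, x, h, mu):
    return sum(m * f(x + k * h) for k, m in enumerate(mu))

# μ = (1, -2, 1) recovers the second-order forward difference from earlier.
f = lambda x: x**2
print(generalized_difference(f, 1.0, 0.1, (1, -2, 1)))   # 0.02 = 2*h^2
```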
The generalized difference can be seen as an element of the polynomial ring R[T_h]. It leads to difference algebras.
Difference operator generalizes to Möbius inversion over a partially ordered set.
As a convolution operator: Via the formalism of incidence algebras, difference operators and other Möbius inversion can be represented by convolution with a function on the poset, called the Möbius function μ; for the difference operator, μ is the sequence (1, −1, 0, 0, 0, …).
== Multivariate finite differences ==
Finite differences can be considered in more than one variable. They are analogous to partial derivatives in several variables.
Some partial derivative approximations are:
{\displaystyle {\begin{aligned}f_{x}(x,y)&\approx {\frac {f(x+h,y)-f(x-h,y)}{2h}}\\f_{y}(x,y)&\approx {\frac {f(x,y+k)-f(x,y-k)}{2k}}\\f_{xx}(x,y)&\approx {\frac {f(x+h,y)-2f(x,y)+f(x-h,y)}{h^{2}}}\\f_{yy}(x,y)&\approx {\frac {f(x,y+k)-2f(x,y)+f(x,y-k)}{k^{2}}}\\f_{xy}(x,y)&\approx {\frac {f(x+h,y+k)-f(x+h,y-k)-f(x-h,y+k)+f(x-h,y-k)}{4hk}}.\end{aligned}}}
Alternatively, for applications in which the computation of f is the most costly step, and both first and second derivatives must be computed, a more efficient formula for the last case is
{\displaystyle f_{xy}(x,y)\approx {\frac {f(x+h,y+k)-f(x+h,y)-f(x,y+k)+2f(x,y)-f(x-h,y)-f(x,y-k)+f(x-h,y-k)}{2hk}},}
since the only values to compute that are not already needed for the previous four equations are f(x + h, y + k) and f(x − h, y − k).
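A minimal sketch (not part of the original article) checking the two mixed-derivative formulas above on a smooth test function whose exact mixed derivative is known; the function and step sizes are illustrative:

```python
# Sketch: f(x, y) = sin(x) cos(2y), so the exact f_xy = -2 cos(x) sin(2y).
import math

def fxy_four_point(f, x, y, h, k):
    return (f(x+h, y+k) - f(x+h, y-k) - f(x-h, y+k) + f(x-h, y-k)) / (4*h*k)

def fxy_reuse(f, x, y, h, k):
    return (f(x+h, y+k) - f(x+h, y) - f(x, y+k) + 2*f(x, y)
            - f(x-h, y) - f(x, y-k) + f(x-h, y-k)) / (2*h*k)

f = lambda x, y: math.sin(x) * math.cos(2*y)
exact = -2 * math.cos(1.0) * math.sin(2 * 0.5)
for h in (0.1, 0.05):
    print(h, abs(fxy_four_point(f, 1.0, 0.5, h, h) - exact),
             abs(fxy_reuse(f, 1.0, 0.5, h, h) - exact))
```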
== See also ==
== References ==
== External links ==
"Finite-difference calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Table of useful finite difference formula generated using Mathematica
D. Gleich (2005), Finite Calculus: A Tutorial for Solving Nasty Sums
Discrete Second Derivative from Unevenly Spaced Points
In mathematics, an expression or equation is in closed form if it is formed with constants, variables, and a set of functions considered as basic and connected by arithmetic operations (+, −, ×, /, and integer powers) and function composition. Commonly, the basic functions that are allowed in closed forms are nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context. For example, if one adds polynomial roots to the basic functions, the functions that have a closed form are called elementary functions.
The closed-form problem arises when new ways are introduced for specifying mathematical objects, such as limits, series, and integrals: given an object specified with such tools, a natural problem is to find, if possible, a closed-form expression of this object; that is, an expression of this object in terms of previous ways of specifying it.
== Example: roots of polynomials ==
The quadratic formula
{\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}}
is a closed form of the solutions to the general quadratic equation
{\displaystyle ax^{2}+bx+c=0.}
More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression for which the allowed functions are only nth-roots and field operations
{\displaystyle (+,-,\times ,/).}
In fact, field theory allows showing that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it has also a closed form that does not involve these functions.
There are expressions in radicals for all solutions of cubic equations (degree 3) and quartic equations (degree 4). The size of these expressions increases significantly with the degree, limiting their usefulness.
In higher degrees, the Abel–Ruffini theorem states that there are equations whose solutions cannot be expressed in radicals, and, thus, have no closed forms. A simple example is the equation
{\displaystyle x^{5}-x-1=0.}
Galois theory provides an algorithmic method for deciding whether a particular polynomial equation can be solved in radicals.
== Symbolic integration ==
Symbolic integration consists essentially of the search of closed forms for antiderivatives of functions that are specified by closed-form expressions. In this context, the basic functions used for defining closed forms are commonly logarithms, exponential function and polynomial roots. Functions that have a closed form for these basic functions are called elementary functions and include trigonometric functions, inverse trigonometric functions, hyperbolic functions, and inverse hyperbolic functions.
The fundamental problem of symbolic integration is thus, given an elementary function specified by a closed-form expression, to decide whether its antiderivative is an elementary function, and, if it is, to find a closed-form expression for this antiderivative.
For rational functions, that is, for fractions of two polynomial functions, antiderivatives are not always rational fractions, but are always elementary functions that may involve logarithms and polynomial roots. This is usually proved with partial fraction decomposition. The need for logarithms and polynomial roots is illustrated by the formula
{\displaystyle \int {\frac {f(x)}{g(x)}}\,dx=\sum _{\alpha \in \operatorname {Roots} (g(x))}{\frac {f(\alpha )}{g'(\alpha )}}\ln(x-\alpha ),}
which is valid if f and g are coprime polynomials such that g is square free and {\displaystyle \deg f<\deg g.}
== Alternative definitions ==
Changing the basic functions to include additional functions can change the set of equations with closed-form solutions. Many cumulative distribution functions cannot be expressed in closed form, unless one considers special functions such as the error function or gamma function to be basic. It is possible to solve the quintic equation if general hypergeometric functions are included, although the solution is far too complicated algebraically to be useful. For many practical computer applications, it is entirely reasonable to assume that the gamma function and other special functions are basic since numerical implementations are widely available.
== Analytic expression ==
The term "analytic expression" is sometimes understood as a synonym for closed-form expression (see Wolfram MathWorld), but this usage is contested (see Math Stackexchange). It is unclear to what extent the term is genuinely in use, as opposed to being an artifact of uncited earlier versions of this article.
== Comparison of different classes of expressions ==
Closed-form expressions do not include infinite series or continued fractions; nor do they include integrals or limits. Indeed, by the Stone–Weierstrass theorem, any continuous function on the unit interval can be expressed as a limit of polynomials, so any class of functions containing the polynomials and closed under limits will necessarily include all continuous functions.
Similarly, an equation or system of equations is said to have a closed-form solution if and only if at least one solution can be expressed as a closed-form expression; and it is said to have an analytic solution if and only if at least one solution can be expressed as an analytic expression. There is a subtle distinction between a "closed-form function" and a "closed-form number" in the discussion of a "closed-form solution", discussed in (Chow 1999) and below. A closed-form or analytic solution is sometimes referred to as an explicit solution.
== Dealing with non-closed-form expressions ==
=== Transformation into closed-form expressions ===
The expression:
{\displaystyle f(x)=\sum _{n=0}^{\infty }{\frac {x}{2^{n}}}}
is not in closed form because the summation entails an infinite number of elementary operations. However, by summing a geometric series this expression can be expressed in the closed form:
{\displaystyle f(x)=2x.}
=== Differential Galois theory ===
The integral of a closed-form expression may or may not itself be expressible as a closed-form expression. This study is referred to as differential Galois theory, by analogy with algebraic Galois theory.
The basic theorem of differential Galois theory is due to Joseph Liouville in the 1830s and 1840s and hence referred to as Liouville's theorem.
A standard example of an elementary function whose antiderivative does not have a closed-form expression is:
{\displaystyle e^{-x^{2}},}
whose one antiderivative is (up to a multiplicative constant) the error function:
{\displaystyle \operatorname {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt.}
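A minimal sketch (not part of the original article) confirming numerically that erf is, up to the constant 2/√π, an antiderivative of exp(−x²); the evaluation point is illustrative:

```python
# Sketch: compare a numerical integral of exp(-t^2) with the error function.
import math
from scipy import integrate, special

x = 1.3
numeric, _ = integrate.quad(lambda t: math.exp(-t**2), 0, x)
print(numeric, math.sqrt(math.pi) / 2 * special.erf(x))   # the two values agree
```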
=== Mathematical modelling and computer simulation ===
Equations or systems too complex for closed-form or analytic solutions can often be analysed by mathematical modelling and computer simulation.
== Closed-form number ==
Three subfields of the complex numbers C have been suggested as encoding the notion of a "closed-form number"; in increasing order of generality, these are the Liouvillian numbers (not to be confused with Liouville numbers in the sense of rational approximation), EL numbers and elementary numbers. The Liouvillian numbers, denoted L, form the smallest algebraically closed subfield of C closed under exponentiation and logarithm (formally, intersection of all such subfields)—that is, numbers which involve explicit exponentiation and logarithms, but allow explicit and implicit polynomials (roots of polynomials); this is defined in (Ritt 1948, p. 60). L was originally referred to as elementary numbers, but this term is now used more broadly to refer to numbers defined explicitly or implicitly in terms of algebraic operations, exponentials, and logarithms. A narrower definition proposed in (Chow 1999, pp. 441–442), denoted E, and referred to as EL numbers, is the smallest subfield of C closed under exponentiation and logarithm—this need not be algebraically closed, and corresponds to explicit algebraic, exponential, and logarithmic operations. "EL" stands both for "exponential–logarithmic" and as an abbreviation for "elementary".
Whether a number is a closed-form number is related to whether a number is transcendental. Formally, Liouvillian numbers and elementary numbers contain the algebraic numbers, and they include some but not all transcendental numbers. In contrast, EL numbers do not contain all algebraic numbers, but do include some transcendental numbers. Closed-form numbers can be studied via transcendental number theory, in which a major result is the Gelfond–Schneider theorem, and a major open question is Schanuel's conjecture.
== Numerical computations ==
For purposes of numeric computations, being in closed form is not in general necessary, as many limits and integrals can be efficiently computed. Some equations have no closed form solution, such as those that represent the Three-body problem or the Hodgkin–Huxley model. Therefore, the future states of these systems must be computed numerically.
== Conversion from numerical forms ==
There is software that attempts to find closed-form expressions for numerical values, including RIES, identify in Maple and SymPy, Plouffe's Inverter, and the Inverse Symbolic Calculator.
== See also ==
Algebraic solution – Solution in radicals of a polynomial equation
Computer simulation – Process of mathematical modelling, performed on a computer
Elementary function – A kind of mathematical function
Finitary operation – Addition, multiplication, division, ...
Numerical solution – Methods for numerical approximations
Liouvillian function – Elementary functions and their finitely iterated integrals
Symbolic regression – Type of regression analysis
Tarski's high school algebra problem – Mathematical problem
Term (logic) – Components of a mathematical or logical formula
Tupper's self-referential formula – Formula that visually represents itself when graphed
== Notes ==
== References ==
== Further reading ==
Ritt, J. F. (1948), Integration in finite terms
Chow, Timothy Y. (May 1999), "What is a Closed-Form Number?", American Mathematical Monthly, 106 (5): 440–448, arXiv:math/9805045, doi:10.2307/2589148, JSTOR 2589148
Jonathan M. Borwein and Richard E. Crandall (January 2013), "Closed Forms: What They Are and Why We Care", Notices of the American Mathematical Society, 60 (1): 50–65, doi:10.1090/noti936
== External links ==
Weisstein, Eric W. "Closed-Form Solution". MathWorld.
Closed-form continuous-time neural networks
Computational fluid dynamics (CFD) is used to understand complex thermal flow regimes in power plants. The thermal power plant may be divided into different subsectors, and CFD analysis applied to critical equipment/components - mainly different types of heat exchangers - which are of crucial significance for efficient and trouble-free long-term operation of the plant.
== Overview ==
The thermal power station subsystem involves multiphase flow, phase transformation and complex chemical reaction associated with conjugate heat transfer.
== Methods ==
=== Finite difference method ===
The finite difference method describes the unknowns of the flow problem by means of point samples at the node points of a grid of coordinate lines. Taylor series expansions are used to generate finite difference approximations of derivatives in terms of point samples at each grid point and its immediate neighbours. The derivatives appearing in the governing equations are replaced by finite differences, yielding algebraic equations.
=== Finite element method ===
The finite element method uses piecewise functions valid on elements to describe the local variations of unknown flow variables. Here also, a set of algebraic equations is generated to determine the unknown coefficients.
=== Finite volume method ===
Finite volume method is probably the most popular method used for numerical discretization in CFD. This method is similar in some ways to the finite difference method. This approach involves the discretization of the spatial domain into finite control volumes. The governing equations in their differential form are integrated over each control volume. The resulting integral conservation laws are exactly satisfied for each control volume and for the entire domain, which is a distinct advantage of the finite volume method. Each integral term is then converted into a discrete form, thus yielding discretised equations at the centroids, or nodal points, of the control volumes.
== Application of CFD in thermal power plants ==
=== Low NOx burner design ===
When fossil fuels are burned, nitric oxide and nitrogen dioxide are produced. These pollutants initiate reactions which result in the production of ozone and acid rain. NOx formation takes place due to (1) high-temperature combustion (thermal NOx) and (2) nitrogen bound to the fuel (fuel NOx), which is comparatively insignificant. In the majority of cases the level of thermal NOx can be reduced by lowering the flame temperature. This can be done by modifying the burner to create a larger (hence lower-temperature) flame, in turn reducing NOx formation. The role of CFD analysis is vital for the design and analysis of such low NOx burners. Many available CFD tools, such as CFX, Fluent and STAR-CCM+, with models such as the RNG k-ε turbulence model combined with hybrid and CONDIF upwind differencing schemes, have been used for this analysis, and the data obtained have helped in modifying the burner design, in turn lowering the adverse effect on the environment due to NOx formation during combustion.
=== CFD analysis of economiser ===
The economiser is a crucial component for efficient performance of a thermal power plant. It is a non-steaming type of heat exchanger which is placed in the convective zone of the furnace. It takes the heat energy of the flue gases for heating the feed water before it enters the boiler drum. The thermal efficiency/boiler efficiency largely depends on the performance of the economiser. CFD analysis helps in optimizing the thermal performance of the economiser by analysing the pressure, velocity and temperature distributions, and in identifying the critical areas for further improvement from the results obtained.
=== CFD analysis of superheaters ===
Superheaters, which are generally placed in the radiant zone of the furnace, are used for increasing the temperature of the dry saturated steam coming out of the boiler drum and for maintaining the required parameters before sending it to the steam turbine. The thermal efficiency of a thermal power plant depends on the performance of the superheater. CFD analysis of superheaters is done at the design stage and later for troubleshooting and performance evaluation during the operation of the plant. The CFD results obtained can be useful for the maintenance engineer to make suitable predictions of the tube life and suitable arrangements for the high-temperature zone, reducing the erosion of the tube coil and restricting the tube leakage problem. CFD analysis consists of modelling the superheater and studying the velocity, pressure and temperature distribution of the steam inside it. Uneven temperature distribution of steam in the tubes leads to boiler tube leakage. CFD also helps to study the effect of the operating parameters on the tube erosion rate. Thermal power plants operate round the year and it is not always possible to shut down and analyse a problem; CFD helps in this.
=== CFD analysis of pulverized coal combustion ===
In a thermal power plant combustion of fuel, especially pulverized coal, is of significant importance. Proper and complete combustion, with the required proportions of air and fuel, is required for total energy transfer to water for steam generation and to reduce pollutants. CFD models based on fundamental conservation equations of mass, energy, chemical species and momentum can be used to simulate the flow of air and coal through the burners. The results obtained from CFD analyses give insight to identify the potential areas for improvement.
=== CFD application in other areas of thermal power plants ===
There are some other areas of importance where CFD can play a significant role in performance and efficiency improvement. The unbalanced coal/air flow in the pipe systems of coal fired power plants leads to non-uniform combustion in the furnace, and hence an overall lower efficiency of the boiler. A common solution to this problem is to put orifices in the pipe systems to balance the flow. If the orifices are sized to balance clean airflow to individual burners connected to a pulverizer, the coal/airflow would still be unbalanced, and vice versa. CFD with a standard k–ε two-phase flow model can be used to calculate pressure drop coefficients for the coal/air as well as the clean air flow.
CFD is also used to obtain numerical solutions to the problem of water wall erosion in the furnace of a thermal power plant. This is caused by flame misalignment, thermal attack and erosion due to contact with chemicals. The flame misalignment occurs because of alteration in fluid dynamic factors due to burner geometry. CFD results show velocity profiles, pressure profiles, streamlines and other data that are helpful in understanding the fluid flow phenomena inside the equipment. It is clearly evident from the above examples how crucial the application of CFD is in addressing bottlenecks in thermal power plants, improving power plant efficiency and assisting in maintenance decisions.
== References ==
== Further reading ==
Krunal .P Mudafle, Hemant S. Farkade "CFD analysis of economizer in a tengential fired boiler", International Journal of Mechanical and Industrial Engineering (IJMIE) ISSN No. 2231 –6477, Vol-2, Iss-4, 2012.
Ajay N. Ingale, Vivek C. Pathade, Dr. Vivek H. Tatwawadi" CFD Analysis of Superheater in View of Boiler Tube Leakage" International Journal of Engineering and Innovative Technology (IJEIT) Volume 1, Issue 3, March 2012
H. Versteeg, W. Malalasekera, "An Introduction to Computational Fluid Dynamics", Second edition, Pearson Publications.
Multi-particle collision dynamics (MPC), also known as stochastic rotation dynamics (SRD), is a particle-based mesoscale simulation technique for complex fluids which fully incorporates thermal fluctuations and hydrodynamic interactions. Coupling of embedded particles to the coarse-grained solvent is achieved through molecular dynamics.
== Method of simulation ==
The solvent is modelled as a set of {\displaystyle N} point particles of mass {\displaystyle m} with continuous coordinates {\displaystyle {\vec {r}}_{i}} and velocities {\displaystyle {\vec {v}}_{i}}. The simulation consists of streaming and collision steps.
During the streaming step, the coordinates of the particles are updated according to
{\displaystyle {\vec {r}}_{i}(t+\delta t_{\mathrm {MPC} })={\vec {r}}_{i}(t)+{\vec {v}}_{i}(t)\delta t_{\mathrm {MPC} }}
where {\displaystyle \delta t_{\mathrm {MPC} }} is a chosen simulation time step which is typically much larger than a molecular dynamics time step.
After the streaming step, interactions between the solvent particles are modelled in the collision step. The particles are sorted into collision cells with a lateral size {\displaystyle a}. Particle velocities within each cell are updated according to the collision rule
{\displaystyle {\vec {v}}_{i}\rightarrow {\vec {v}}_{\mathrm {CMS} }+{\hat {\mathbf {R} }}({\vec {v}}_{i}-{\vec {v}}_{\mathrm {CMS} })}
where {\displaystyle {\vec {v}}_{\mathrm {CMS} }} is the centre of mass velocity of the particles in the collision cell and {\displaystyle {\hat {\mathbf {R} }}} is a rotation matrix. In two dimensions, {\displaystyle {\hat {\mathbf {R} }}} performs a rotation by an angle +α or −α with probability 1/2. In three dimensions, the rotation is performed by an angle α around a random rotation axis. The same rotation is applied for all particles within a given collision cell, but the direction (axis) of rotation is statistically independent both between all cells and for a given cell in time.
If the structure of the collision grid defined by the positions of the collision cells is fixed, Galilean invariance is violated. It is restored with the introduction of a random shift of the collision grid.
Explicit expressions for the diffusion coefficient and viscosity derived based on Green-Kubo relations are in excellent agreement with simulations.
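A minimal sketch (not part of the original article) of one two-dimensional streaming and collision step, including the random grid shift; the box size, seed and helper names are illustrative, and the parameter values follow the section below:

```python
# Sketch: one SRD/MPC step in a periodic 2D box.
import numpy as np

rng = np.random.default_rng(0)
L, a, alpha, dt = 10.0, 1.0, np.radians(130.0), 0.1
N = 10 * int(L / a) ** 2                      # n_s = 10 particles per cell
r = rng.uniform(0.0, L, size=(N, 2))
v = rng.normal(0.0, 1.0, size=(N, 2))         # kT = m = 1

def mpc_step(r, v):
    r = (r + v * dt) % L                      # streaming (periodic box)
    shift = rng.uniform(0.0, a, size=2)       # random grid shift (Galilean invariance)
    cells = np.floor((r + shift) / a).astype(int) % int(L / a)
    cell_id = cells[:, 0] * int(L / a) + cells[:, 1]
    for c in np.unique(cell_id):
        idx = np.where(cell_id == c)[0]
        v_cm = v[idx].mean(axis=0)
        s = rng.choice([-1.0, 1.0])           # rotate by +alpha or -alpha
        ca, sa = np.cos(s * alpha), np.sin(s * alpha)
        R = np.array([[ca, -sa], [sa, ca]])
        v[idx] = v_cm + (v[idx] - v_cm) @ R.T  # collision: rotate relative velocities
    return r, v

r, v = mpc_step(r, v)
print("mean kinetic energy per particle:", 0.5 * (v ** 2).sum() / N)
```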
== Simulation parameters ==
The set of parameters for the simulation of the solvent is:
solvent particle mass {\displaystyle m}
average number of solvent particles per collision box {\displaystyle n_{s}}
lateral collision box size {\displaystyle a}
stochastic rotation angle {\displaystyle \alpha }
kT (energy)
time step {\displaystyle \delta t_{\mathrm {MPC} }}
The simulation parameters define the solvent properties, such as
mean free path {\displaystyle \lambda =\delta t_{\mathrm {MPC} }{\sqrt {kT/m}}}
diffusion coefficient {\displaystyle D={\frac {kT\delta t_{\mathrm {MPC} }}{2m}}{\Bigg [}{\frac {dn_{s}}{(1-\cos(\alpha ))(n_{s}-1+e^{-n_{s}})}}-1{\Bigg ]}}
shear viscosity {\displaystyle \nu }
thermal diffusivity {\displaystyle D_{T}}
where {\displaystyle d} is the dimensionality of the system.
A typical choice for normalisation is {\displaystyle a=1,\;kT=1,\;m=1}. To reproduce fluid-like behaviour, the remaining parameters may be fixed as {\displaystyle \alpha =130^{o},\;n_{s}=10,\;\delta t_{\mathrm {MPC} }\in [0.01;0.1]}.
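A minimal sketch (not part of the original article) evaluating the mean free path and the diffusion-coefficient formula above for this parameter choice, assuming a three-dimensional system:

```python
# Sketch: transport estimates from the SRD formulas for a = kT = m = 1.
import math

kT, m, d = 1.0, 1.0, 3
alpha, n_s, dt = math.radians(130.0), 10, 0.1

mean_free_path = dt * math.sqrt(kT / m)
D = kT * dt / (2 * m) * (d * n_s / ((1 - math.cos(alpha)) * (n_s - 1 + math.exp(-n_s))) - 1)
print(mean_free_path, D)
```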
== Applications ==
MPC has become a notable tool in the simulations of many soft-matter systems, including
colloid dynamics
polymer dynamics
vesicles
active systems
liquid crystals
== References ==
Cavitation modelling is a type of computational fluid dynamic (CFD) that represents the flow of fluid during cavitation. It covers a wide range of applications, such as pumps, water turbines, pump inducers, and fuel cavitation in orifices as commonly encountered in fuel injection systems.
== Modelling categories ==
Modelling efforts can be divided into two broad categories: vapor transport models and discrete bubble models.
=== Vapor transport model ===
Vapor transport models are best suited to large-scale cavitation, like sheet cavitation that often occurs on rudders and propellers. These models include two-way interactions between the phases.
=== Discrete bubble model ===
The discrete bubble model includes the effects of the surrounding fluid on the bubbles. Discrete bubble models, e.g. the Rayleigh-Plesset, Gilmore and Keller-Miksis, describe the relation between the external pressure, bubble radius and the velocity and acceleration of the bubble wall.
== Two-phase modeling ==
Two-phase modeling is the modelling of the two phases, as in a free surface code. Two common types of two phase models are homogeneous mixture models and sharp interface models. The difference between both the models is in the treatment of the contents of cells containing both phases.
=== Homogeneous mixture models ===
Most recent cavitation modelling efforts have used homogeneous mixture models, in which the contents of individual cells are assumed to be uniform. This approach is best suited to modeling large numbers of bubbles that are much smaller than one cell. The disadvantage of this approach is that when the cavities are larger than one cell, the vapor fraction is diffused across neighboring cells by the vapor transport model.
This is different from the sharp interface models in that the vapor and liquid are modeled as distinct phases separated by an interface.
=== Sharp interface models ===
In sharp interface models, the interface is not diffused by advection. The model maintains a sharp interface. Naturally, this is only appropriate when the bubble size is at least on the order of a few cells.
== Phase change models ==
Phase change models represent the mass transfer between the phases. In cavitation, pressure is responsible for the mass transfer between liquid and vapor phases. This is in contrast to boiling, in which the temperature causes the phase change. There are two general categories of phase change models used for cavitation: the barotropic models and equilibrium models. This section will briefly discuss the advantages and disadvantages of each type.
=== Barotropic model ===
If the pressure is greater than vapor pressure, then the fluid is liquid, otherwise vapor. This means density of liquid water is considered as the density of fluid if the pressure is greater than vapor pressure and the density of water vapor is considered when pressure is less than vapor pressure of water at the ambient temperature.
=== Equilibrium model ===
The equilibrium model requires the solution of the energy equation. The equation for state of water is used, with the energy absorbed or released by phase change creating local temperature gradients which control the rate of phase change.
== Bubble dynamics models ==
Several models for the bubble dynamics have been proposed:
=== Rayleigh ===
The Rayleigh model is the oldest, dating from 1917. It was derived by Lord Rayleigh and describes an empty space in the water, influenced by a constant external pressure. His assumption of an empty space led to the name cavity, which is still used.
The Rayleigh equation, derived from the Navier-Stokes equation for a spherically symmetric bubble convected with the flow with constant external pressure, reads
{\displaystyle R{\ddot {R}}+{\frac {3}{2}}{\dot {R}}^{2}={\frac {p(R)-p_{\infty }}{\rho _{L}}}}
=== Rayleigh-Plesset ===
Building on the work of Lord Rayleigh, Plesset added the effects of viscosity, surface tension and a non-constant external pressure to the equation. This equation reads
{\displaystyle R{\ddot {R}}+{\frac {3}{2}}{\dot {R}}^{2}={\frac {p_{i}-p_{\infty }-{\frac {2\sigma }{R}}-{\frac {4\mu }{R}}{\dot {R}}}{\rho _{L}}}}
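A minimal sketch (not part of the original article) of integrating the Rayleigh–Plesset equation for a small gas bubble in water with SciPy; the material constants, initial condition and isothermal gas law are illustrative assumptions only:

```python
# Sketch: Rayleigh-Plesset dynamics of a bubble released slightly above its
# equilibrium radius; it oscillates about R0.
import numpy as np
from scipy.integrate import solve_ivp

rho, sigma, mu = 998.0, 0.0728, 1.0e-3        # water: density, surface tension, viscosity
p_inf, p_v = 101325.0, 2339.0                 # ambient and vapour pressure [Pa]
R0 = 1.0e-4                                   # reference radius [m]
p_g0 = p_inf + 2 * sigma / R0 - p_v           # gas pressure at equilibrium

def rhs(t, y):
    R, Rdot = y
    p_i = p_v + p_g0 * (R0 / R) ** 3          # internal pressure, isothermal gas (assumed)
    Rddot = ((p_i - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rhs, (0.0, 5.0e-5), [1.5 * R0, 0.0], max_step=1.0e-8)
print(sol.y[0].min(), sol.y[0].max())         # radius stays bounded, oscillating
```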
=== Gilmore ===
The equation by Gilmore accounted for the compressibility of the liquid. In its derivation, the viscous term is only present as a product with the compressibility, and this term is neglected. The resulting equation is:
{\displaystyle (1-{\frac {{\dot {R}}(t)}{c(R)}})R(t){\ddot {R}}(t)+{\frac {3}{2}}(1-{\frac {{\dot {R}}(t)}{3c(R)}}){\dot {R}}^{2}(t)=(1+{\frac {{\dot {R}}(t)}{c(R)}})H(R)+(1-{\frac {\dot {R}}{c(R)}}){\frac {R}{c(R)}}{\dot {H}}(R)}
In which:
{\displaystyle H={\frac {n}{n-1}}{\frac {p_{\infty }(t)+B}{\rho _{L}}}\left[({\frac {P+B}{p_{\infty }(t)+B}})^{\frac {n-1}{n}}-1\right]}
{\displaystyle c=c_{0}\left({\frac {p_{g}(t)-2\sigma /R+B}{p_{\infty }(t)+B}}\right)^{\frac {n-1}{2n}}}
{\displaystyle {\dot {H}}={\frac {D}{p_{\infty }(t)+B}}H-{\frac {D}{\rho }}({\frac {P+B}{p_{\infty }(t)+B}})^{\frac {n-1}{n}}+{\frac {\dot {R}}{\rho _{L}R}}\left[{\frac {p_{\infty }(t)+B}{P+B}}\right]^{\frac {1}{n}}\left[{\frac {2\sigma }{R}}-3kp_{g}(t)\right]}
=== Others ===
Over the years, several other models have been developed by making different assumptions in the derivation of the Navier-Stokes equations.
== References ==
In fluid dynamics, potential flow or irrotational flow refers to a description of a fluid flow with no vorticity in it. Such a description typically arises in the limit of vanishing viscosity, i.e., for an inviscid fluid and with no vorticity present in the flow.
Potential flow describes the velocity field as the gradient of a scalar function: the velocity potential. As a result, a potential flow is characterized by an irrotational velocity field, which is a valid approximation for several applications. The irrotationality of a potential flow is due to the curl of the gradient of a scalar always being equal to zero.
In the case of an incompressible flow the velocity potential satisfies Laplace's equation, and potential theory is applicable. However, potential flows also have been used to describe compressible flows and Hele-Shaw flows. The potential flow approach occurs in the modeling of both stationary as well as nonstationary flows.
Applications of potential flow include: the outer flow field for aerofoils, water waves, electroosmotic flow, and groundwater flow. For flows (or parts thereof) with strong vorticity effects, the potential flow approximation is not applicable. In flow regions where vorticity is known to be important, such as wakes and boundary layers, potential flow theory is not able to provide reasonable predictions of the flow. However, there are often large regions of a flow in which the assumption of irrotationality is valid, allowing the use of potential flow for various applications; these include flow around aircraft, groundwater flow, acoustics, water waves, and electroosmotic flow.
== Description and characteristics ==
In potential or irrotational flow, the vorticity vector field is zero, i.e.,
{\displaystyle {\boldsymbol {\omega }}\equiv \nabla \times \mathbf {v} =0,}
where {\displaystyle \mathbf {v} (\mathbf {x} ,t)} is the velocity field and {\displaystyle {\boldsymbol {\omega }}(\mathbf {x} ,t)} is the vorticity field. Like any vector field having zero curl, the velocity field can be expressed as the gradient of a certain scalar, say {\displaystyle \varphi (\mathbf {x} ,t)}, which is called the velocity potential, since the curl of the gradient is always zero. We therefore have
{\displaystyle \mathbf {v} =\nabla \varphi .}
The velocity potential is not uniquely defined since one can add to it an arbitrary function of time, say f(t), without affecting the relevant physical quantity which is {\displaystyle \mathbf {v} }. The non-uniqueness is usually removed by suitably selecting appropriate initial or boundary conditions satisfied by {\displaystyle \varphi }, and as such the procedure may vary from one problem to another.
In potential flow, the circulation {\displaystyle \Gamma } around any simply-connected contour {\displaystyle C} is zero. This can be shown using the Stokes theorem,
{\displaystyle \Gamma \equiv \oint _{C}\mathbf {v} \cdot d\mathbf {l} =\int {\boldsymbol {\omega }}\cdot d\mathbf {f} =0}
where {\displaystyle d\mathbf {l} } is the line element on the contour and {\displaystyle d\mathbf {f} } is the area element of any surface bounded by the contour. In multiply-connected space (say, around a contour enclosing a solid body in two dimensions or around a contour enclosing a torus in three dimensions) or in the presence of concentrated vortices (say, in the so-called irrotational vortices or point vortices, or in smoke rings), the circulation {\displaystyle \Gamma } need not be zero. In the former case, Stokes theorem cannot be applied and in the latter case, {\displaystyle {\boldsymbol {\omega }}} is non-zero within the region bounded by the contour. Around a contour encircling an infinitely long solid cylinder with which the contour loops {\displaystyle N} times, we have {\displaystyle \Gamma =N\kappa } where {\displaystyle \kappa } is a cyclic constant. This example belongs to a doubly-connected space. In an {\displaystyle n}-tuply connected space, there are {\displaystyle n-1} such cyclic constants, namely, {\displaystyle \kappa _{1},\kappa _{2},\dots ,\kappa _{n-1}.}
== Incompressible flow ==
In case of an incompressible flow — for instance of a liquid, or a gas at low Mach numbers; but not for sound waves — the velocity v has zero divergence:
{\displaystyle \nabla \cdot \mathbf {v} =0\,,}
Substituting here {\displaystyle \mathbf {v} =\nabla \varphi } shows that {\displaystyle \varphi } satisfies the Laplace equation
{\displaystyle \nabla ^{2}\varphi =0\,,}
where ∇2 = ∇ ⋅ ∇ is the Laplace operator (sometimes also written Δ). Since solutions of the Laplace equation are harmonic functions, every harmonic function represents a potential flow solution. As evident, in the incompressible case, the velocity field is determined completely from its kinematics: the assumptions of irrotationality and zero divergence of flow. Dynamics in connection with the momentum equations, only have to be applied afterwards, if one is interested in computing pressure field: for instance for flow around airfoils through the use of Bernoulli's principle.
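A minimal sketch (not part of the original article) of solving the Laplace equation for the velocity potential on a small rectangular grid by Jacobi iteration; the channel setup (uniform inflow, impermeable walls) and all grid parameters are illustrative assumptions:

```python
# Sketch: incompressible potential flow in a straight channel; the solver
# recovers the uniform velocity U from the Laplace equation for phi.
import numpy as np

nx, ny, h, U = 41, 21, 0.05, 1.0
phi = np.zeros((ny, nx))
phi[:, :] = U * h * np.arange(nx)             # initial guess: uniform flow

for _ in range(2000):
    old = phi.copy()
    phi[1:-1, 1:-1] = 0.25 * (old[1:-1, 2:] + old[1:-1, :-2]
                              + old[2:, 1:-1] + old[:-2, 1:-1])
    phi[0, :], phi[-1, :] = phi[1, :], phi[-2, :]        # walls: d(phi)/dn = 0
    phi[:, 0] = phi[:, 1] - U * h                        # inlet:  d(phi)/dx = U
    phi[:, -1] = phi[:, -2] + U * h                      # outlet: d(phi)/dx = U

u = (phi[:, 2:] - phi[:, :-2]) / (2 * h)                 # x-velocity = d(phi)/dx
print(u.mean())                                          # close to U = 1.0
```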
In incompressible flows, contrary to common misconception, the potential flow indeed satisfies the full Navier–Stokes equations, not just the Euler equations, because the viscous term
{\displaystyle \mu \nabla ^{2}\mathbf {v} =\mu \nabla (\nabla \cdot \mathbf {v} )-\mu \nabla \times {\boldsymbol {\omega }}=0}
is identically zero. It is the inability of the potential flow to satisfy the required boundary conditions, especially near solid boundaries, that makes it invalid in representing the required flow field. If the potential flow satisfies the necessary conditions, then it is the required solution of the incompressible Navier–Stokes equations.
In two dimensions, with the help of the harmonic function {\displaystyle \varphi } and its conjugate harmonic function {\displaystyle \psi } (stream function), incompressible potential flow reduces to a very simple system that is analyzed using complex analysis (see below).
== Compressible flow ==
=== Steady flow ===
Potential flow theory can also be used to model irrotational compressible flow. The derivation of the governing equation for {\displaystyle \varphi } from Euler's equation is quite straightforward. The continuity and the (potential flow) momentum equations for steady flows are given by
{\displaystyle \rho \nabla \cdot \mathbf {v} +\mathbf {v} \cdot \nabla \rho =0,\quad (\mathbf {v} \cdot \nabla )\mathbf {v} =-{\frac {1}{\rho }}\nabla p=-{\frac {c^{2}}{\rho }}\nabla \rho }
where the last equation follows from the fact that entropy is constant for a fluid particle and that the square of the sound speed is {\displaystyle c^{2}=(\partial p/\partial \rho )_{s}}. Eliminating {\displaystyle \nabla \rho } from the two governing equations results in
{\displaystyle c^{2}\nabla \cdot \mathbf {v} -\mathbf {v} \cdot (\mathbf {v} \cdot \nabla )\mathbf {v} =0.}
The incompressible version emerges in the limit c → ∞. Substituting here v = ∇φ results in
{\displaystyle (c^{2}-\varphi _{x}^{2})\varphi _{xx}+(c^{2}-\varphi _{y}^{2})\varphi _{yy}+(c^{2}-\varphi _{z}^{2})\varphi _{zz}-2(\varphi _{x}\varphi _{y}\varphi _{xy}+\varphi _{y}\varphi _{z}\varphi _{yz}+\varphi _{z}\varphi _{x}\varphi _{zx})=0}
where c = c(v) is expressed as a function of the velocity magnitude v2 = (∇φ)2. For a polytropic gas,
{\displaystyle c^{2}=(\gamma -1)(h_{0}-v^{2}/2)}
, where γ is the specific heat ratio and h0 is the stagnation enthalpy. In two dimensions, the equation simplifies to
{\displaystyle (c^{2}-\varphi _{x}^{2})\varphi _{xx}+(c^{2}-\varphi _{y}^{2})\varphi _{yy}-2\varphi _{x}\varphi _{y}\varphi _{xy}=0.}
Validity: As it stands, the equation is valid for any inviscid potential flow, irrespective of whether the flow is subsonic or supersonic (e.g. Prandtl–Meyer flow). However, in supersonic and also in transonic flows, shock waves can occur, which can introduce entropy and vorticity into the flow, making the flow rotational. Nevertheless, there are two cases for which potential flow prevails even in the presence of shock waves, which are explained from the (not necessarily potential) momentum equation written in the following form
{\displaystyle \nabla (h+v^{2}/2)-\mathbf {v} \times {\boldsymbol {\omega }}=T\nabla s}
where h is the specific enthalpy, ω is the vorticity field, T is the temperature and s is the specific entropy. Since in front of the leading shock wave we have a potential flow, Bernoulli's equation shows that h + v2/2 is constant, which is also constant across the shock wave (Rankine–Hugoniot conditions), and therefore we can write
{\displaystyle \mathbf {v} \times {\boldsymbol {\omega }}=-T\nabla s}
1) When the shock wave is of constant intensity, the entropy discontinuity across the shock wave is also constant, i.e., ∇s = 0, and therefore vorticity production is zero. Shock waves at the pointed leading edge of a two-dimensional wedge or a three-dimensional cone (Taylor–Maccoll flow) have constant intensity. 2) For weak shock waves, the entropy jump across the shock wave is a third-order quantity in terms of the shock wave strength and therefore ∇s can be neglected. Shock waves in slender bodies lie nearly parallel to the body and they are weak.
Nearly parallel flows: When the flow is predominantly unidirectional with small deviations, such as in flow past slender bodies, the full equation can be further simplified. Let
{\displaystyle U\mathbf {e} _{x}}
be the mainstream and consider small deviations from this velocity field. The corresponding velocity potential can be written as
{\displaystyle \varphi =xU+\phi }
where ϕ characterizes the small departure from the uniform flow and satisfies the linearized version of the full equation. This is given by
{\displaystyle (1-M^{2}){\frac {\partial ^{2}\phi }{\partial x^{2}}}+{\frac {\partial ^{2}\phi }{\partial y^{2}}}+{\frac {\partial ^{2}\phi }{\partial z^{2}}}=0}
where M = U/c∞ is the constant Mach number corresponding to the uniform flow. This equation is valid provided M is not close to unity. When |M − 1| is small (transonic flow), we have the following nonlinear equation
{\displaystyle 2\alpha _{*}{\frac {\partial \phi }{\partial x}}{\frac {\partial ^{2}\phi }{\partial x^{2}}}={\frac {\partial ^{2}\phi }{\partial y^{2}}}+{\frac {\partial ^{2}\phi }{\partial z^{2}}}}
where α∗ is the critical value of the Landau derivative
{\displaystyle \alpha =(c^{4}/2\upsilon ^{3})(\partial ^{2}\upsilon /\partial p^{2})_{s}}
and υ = 1/ρ is the specific volume. The transonic flow is completely characterized by the single parameter α∗, which for a polytropic gas takes the value α∗ = α = (γ + 1)/2. Under the hodograph transformation, the transonic equation in two dimensions becomes the Euler–Tricomi equation.
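A well-known consequence of the linearized subsonic equation above is the Prandtl–Glauert rule, whereby incompressible results are rescaled by 1/√(1 − M²). The sketch below is a minimal illustration; the function name and sample values are assumptions of this example, not items from the article.

```python
import math

def prandtl_glauert(cp_incompressible: float, mach: float) -> float:
    """Rescale an incompressible pressure coefficient for subsonic compressible flow."""
    if not 0.0 <= mach < 1.0:
        raise ValueError("linearized subsonic theory requires 0 <= M < 1")
    return cp_incompressible / math.sqrt(1.0 - mach ** 2)

print(prandtl_glauert(-0.5, 0.6))   # approximately -0.625
```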
=== Unsteady flow ===
The continuity and the (potential flow) momentum equations for unsteady flows are given by
{\displaystyle {\frac {\partial \rho }{\partial t}}+\rho \nabla \cdot \mathbf {v} +\mathbf {v} \cdot \nabla \rho =0,\quad {\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {v} =-{\frac {1}{\rho }}\nabla p=-{\frac {c^{2}}{\rho }}\nabla \rho =-\nabla h.}
The first integral of the (potential flow) momentum equation is given by
{\displaystyle {\frac {\partial \varphi }{\partial t}}+{\frac {v^{2}}{2}}+h=f(t),\quad \Rightarrow \quad {\frac {\partial h}{\partial t}}=-{\frac {\partial ^{2}\varphi }{\partial t^{2}}}-{\frac {1}{2}}{\frac {\partial v^{2}}{\partial t}}+{\frac {df}{dt}}}
where f(t) is an arbitrary function. Without loss of generality, we can set f(t) = 0 since φ is not uniquely defined. Combining these equations, we obtain
{\displaystyle {\frac {\partial ^{2}\varphi }{\partial t^{2}}}+{\frac {\partial v^{2}}{\partial t}}=c^{2}\nabla \cdot \mathbf {v} -\mathbf {v} \cdot (\mathbf {v} \cdot \nabla )\mathbf {v} .}
Substituting here v = ∇φ results in
{\displaystyle \varphi _{tt}+(\varphi _{x}^{2}+\varphi _{y}^{2}+\varphi _{z}^{2})_{t}=(c^{2}-\varphi _{x}^{2})\varphi _{xx}+(c^{2}-\varphi _{y}^{2})\varphi _{yy}+(c^{2}-\varphi _{z}^{2})\varphi _{zz}-2(\varphi _{x}\varphi _{y}\varphi _{xy}+\varphi _{y}\varphi _{z}\varphi _{yz}+\varphi _{z}\varphi _{x}\varphi _{zx}).}
Nearly parallel flows: As before, for nearly parallel flows we can write (after introducing a rescaled time τ = c∞t)
{\displaystyle {\frac {\partial ^{2}\phi }{\partial \tau ^{2}}}+2M{\frac {\partial ^{2}\phi }{\partial x\partial \tau }}=(1-M^{2}){\frac {\partial ^{2}\phi }{\partial x^{2}}}+{\frac {\partial ^{2}\phi }{\partial y^{2}}}+{\frac {\partial ^{2}\phi }{\partial z^{2}}}}
provided the constant Mach number M is not close to unity. When |M − 1| is small (transonic flow), we have the following nonlinear equation
{\displaystyle {\frac {\partial ^{2}\phi }{\partial \tau ^{2}}}+2{\frac {\partial ^{2}\phi }{\partial x\partial \tau }}=-2\alpha _{*}{\frac {\partial \phi }{\partial x}}{\frac {\partial ^{2}\phi }{\partial x^{2}}}+{\frac {\partial ^{2}\phi }{\partial y^{2}}}+{\frac {\partial ^{2}\phi }{\partial z^{2}}}.}
Sound waves: In sound waves, the velocity magnitude v (or the Mach number) is very small, although the unsteady term is now comparable to the other leading terms in the equation. Thus, neglecting all quadratic and higher-order terms and noting that in the same approximation c is a constant (for example, in a polytropic gas
{\displaystyle c^{2}=(\gamma -1)h_{0}}
), we have
{\displaystyle {\frac {\partial ^{2}\varphi }{\partial t^{2}}}=c^{2}\nabla ^{2}\varphi ,}
which is a linear wave equation for the velocity potential φ. Again the oscillatory part of the velocity vector v is related to the velocity potential by v = ∇φ, while as before Δ is the Laplace operator, and c is the average speed of sound in the homogeneous medium. Note that the oscillatory parts of the pressure p and density ρ also each individually satisfy the wave equation, in this approximation.
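As a minimal sketch (assuming a 1D domain, a Gaussian initial pulse and reflecting ends, none of which come from the article), the wave equation for φ can be marched explicitly with central differences:

```python
import numpy as np

# Minimal sketch: explicit finite-difference solution of the 1D wave equation
# phi_tt = c^2 phi_xx for the acoustic velocity potential. Grid, pulse shape and
# the CFL number are illustrative choices, not values from the article.
c, n, L = 340.0, 201, 1.0
dx = L / (n - 1)
dt = 0.9 * dx / c                                 # CFL number 0.9 keeps the scheme stable
x = np.linspace(0.0, L, n)
phi_old = np.exp(-((x - 0.5 * L) / 0.05) ** 2)    # initial Gaussian pulse, initially at rest
phi = phi_old.copy()

for _ in range(400):
    phi_new = np.zeros_like(phi)
    phi_new[1:-1] = (2.0 * phi[1:-1] - phi_old[1:-1]
                     + (c * dt / dx) ** 2 * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]))
    phi_old, phi = phi, phi_new                   # ends are held at phi = 0 (reflecting)

# The oscillatory velocity follows from v = d(phi)/dx, as in the article.
v = np.gradient(phi, dx)
print(float(np.max(np.abs(v))))
```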
== Applicability and limitations ==
Potential flow does not include all the characteristics of flows that are encountered in the real world. Potential flow theory cannot be applied for viscous internal flows, except for flows between closely spaced plates. Richard Feynman considered potential flow to be so unphysical that the only fluid to obey the assumptions was "dry water" (quoting John von Neumann). Incompressible potential flow also makes a number of invalid predictions, such as d'Alembert's paradox, which states that the drag on any object moving through an infinite fluid otherwise at rest is zero. More precisely, potential flow cannot account for the behaviour of flows that include a boundary layer. Nevertheless, understanding potential flow is important in many branches of fluid mechanics. In particular, simple potential flows (called elementary flows) such as the free vortex and the point source possess ready analytical solutions. These solutions can be superposed to create more complex flows satisfying a variety of boundary conditions. These flows correspond closely to real-life flows over the whole of fluid mechanics; in addition, many valuable insights arise when considering the deviation (often slight) between an observed flow and the corresponding potential flow. Potential flow finds many applications in fields such as aircraft design. For instance, in computational fluid dynamics, one technique is to couple a potential flow solution outside the boundary layer to a solution of the boundary layer equations inside the boundary layer. The absence of boundary layer effects means that any streamline can be replaced by a solid boundary with no change in the flow field, a technique used in many aerodynamic design approaches. Another technique would be the use of Riabouchinsky solids.
== Analysis for two-dimensional incompressible flow ==
Potential flow in two dimensions is simple to analyze using conformal mapping, by the use of transformations of the complex plane. However, use of complex numbers is not required, as for example in the classical analysis of fluid flow past a cylinder. It is not possible to solve a potential flow using complex numbers in three dimensions.
The basic idea is to use a holomorphic (also called analytic) or meromorphic function f, which maps the physical domain (x, y) to the transformed domain (φ, ψ). While x, y, φ and ψ are all real valued, it is convenient to define the complex quantities
{\displaystyle {\begin{aligned}z&=x+iy\,,{\text{ and }}&w&=\varphi +i\psi \,.\end{aligned}}}
Now, if we write the mapping f as
{\displaystyle {\begin{aligned}f(x+iy)&=\varphi +i\psi \,,{\text{ or }}&f(z)&=w\,.\end{aligned}}}
Then, because f is a holomorphic or meromorphic function, it has to satisfy the Cauchy–Riemann equations
{\displaystyle {\begin{aligned}{\frac {\partial \varphi }{\partial x}}&={\frac {\partial \psi }{\partial y}}\,,&{\frac {\partial \varphi }{\partial y}}&=-{\frac {\partial \psi }{\partial x}}\,.\end{aligned}}}
The velocity components (u, v), in the (x, y) directions respectively, can be obtained directly from f by differentiating with respect to z. That is
{\displaystyle {\frac {df}{dz}}=u-iv}
So the velocity field v = (u, v) is specified by
{\displaystyle {\begin{aligned}u&={\frac {\partial \varphi }{\partial x}}={\frac {\partial \psi }{\partial y}},&v&={\frac {\partial \varphi }{\partial y}}=-{\frac {\partial \psi }{\partial x}}\,.\end{aligned}}}
Both φ and ψ then satisfy Laplace's equation:
{\displaystyle {\begin{aligned}\Delta \varphi &={\frac {\partial ^{2}\varphi }{\partial x^{2}}}+{\frac {\partial ^{2}\varphi }{\partial y^{2}}}=0\,,{\text{ and }}&\Delta \psi &={\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}=0\,.\end{aligned}}}
So φ can be identified as the velocity potential and ψ is called the stream function. Lines of constant ψ are known as streamlines and lines of constant φ are known as equipotential lines (see equipotential surface).
Streamlines and equipotential lines are orthogonal to each other, since
{\displaystyle \nabla \varphi \cdot \nabla \psi ={\frac {\partial \varphi }{\partial x}}{\frac {\partial \psi }{\partial x}}+{\frac {\partial \varphi }{\partial y}}{\frac {\partial \psi }{\partial y}}={\frac {\partial \psi }{\partial y}}{\frac {\partial \psi }{\partial x}}-{\frac {\partial \psi }{\partial x}}{\frac {\partial \psi }{\partial y}}=0\,.}
Thus the flow occurs along the lines of constant ψ and at right angles to the lines of constant φ.
Δψ = 0 is also satisfied, this relation being equivalent to ∇ × v = 0. So the flow is irrotational. The automatic condition ∂2ψ/∂x ∂y = ∂2ψ/∂y ∂x then gives the incompressibility constraint ∇ · v = 0.
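A minimal numerical illustration of the relation df/dz = u − iv is sketched below; the chosen potential (a uniform stream plus a doublet, i.e. flow past a circular cylinder) and the finite-difference step are assumptions of this example, not items from this section.

```python
import numpy as np

# Minimal sketch: recover u and v from a complex potential f(z) via df/dz = u - i v.
U, a = 1.0, 1.0
f = lambda z: U * (z + a ** 2 / z)              # uniform stream plus doublet

def velocity(z, h=1e-6):
    dfdz = (f(z + h) - f(z - h)) / (2.0 * h)    # central-difference approximation
    return dfdz.real, -dfdz.imag                # u = Re(df/dz), v = -Im(df/dz)

print(velocity(2.0 + 0.0j))   # on the x-axis: (U*(1 - a^2/x^2), 0) -> (0.75, 0.0)
```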
== Examples of two-dimensional incompressible flows ==
Any differentiable function may be used for f. The examples that follow use a variety of elementary functions; special functions may also be used. Note that multi-valued functions such as the natural logarithm may be used, but attention must be confined to a single Riemann surface.
=== Power laws ===
In case the following power-law conformal map is applied, from z = x + iy to w = φ + iψ:
{\displaystyle w=Az^{n}\,,}
then, writing z in polar coordinates as z = x + iy = reiθ, we have
{\displaystyle \varphi =Ar^{n}\cos n\theta \qquad {\text{and}}\qquad \psi =Ar^{n}\sin n\theta \,.}
In the figures to the right examples are given for several values of n. The black line is the boundary of the flow, while the darker blue lines are streamlines, and the lighter blue lines are equi-potential lines. Some interesting powers n are:
n = 1/2: this corresponds with flow around a semi-infinite plate,
n = 2/3: flow around a right corner,
n = 1: a trivial case of uniform flow,
n = 2: flow through a corner, or near a stagnation point, and
n = −1: flow due to a source doublet
The constant A is a scaling parameter: its absolute value |A| determines the scale, while its argument arg(A) introduces a rotation (if non-zero).
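A minimal sketch of evaluating φ and ψ for the power-law map is given below; the sample values of A, n and the test point are illustrative assumptions, not values from the article.

```python
import math

def power_law_flow(A, n, r, theta):
    """phi and psi for w = A z^n with z = r e^{i theta} (A taken real for simplicity)."""
    phi = A * r ** n * math.cos(n * theta)
    psi = A * r ** n * math.sin(n * theta)
    return phi, psi

# n = 2 gives the stagnation-point flow discussed below: psi = A r^2 sin(2 theta) = 2 A x y.
x, y = 0.3, 0.4
r, theta = math.hypot(x, y), math.atan2(y, x)
print(power_law_flow(1.0, 2, r, theta)[1], 2 * 1.0 * x * y)   # both ~0.24
```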
==== Power laws with n = 1: uniform flow ====
If w = Az1, that is, a power law with n = 1, the streamlines (i.e. lines of constant ψ) are a system of straight lines parallel to the x-axis. This is easiest to see by writing in terms of real and imaginary components:
{\displaystyle f(x+iy)=A\,(x+iy)=Ax+iAy}
thus giving φ = Ax and ψ = Ay. This flow may be interpreted as uniform flow parallel to the x-axis.
==== Power laws with n = 2 ====
If n = 2, then w = Az2 and the streamline corresponding to a particular value of ψ are those points satisfying
{\displaystyle \psi =Ar^{2}\sin 2\theta \,,}
which is a system of rectangular hyperbolae. This may be seen by again rewriting in terms of real and imaginary components. Noting that sin 2θ = 2 sin θ cos θ and rewriting sin θ = y/r and cos θ = x/r it is seen (on simplifying) that the streamlines are given by
{\displaystyle \psi =2Axy\,.}
The velocity field is given by ∇φ, or
{\displaystyle {\begin{pmatrix}u\\v\end{pmatrix}}={\begin{pmatrix}{\frac {\partial \varphi }{\partial x}}\\[2px]{\frac {\partial \varphi }{\partial y}}\end{pmatrix}}={\begin{pmatrix}+{\partial \psi \over \partial y}\\[2px]-{\partial \psi \over \partial x}\end{pmatrix}}={\begin{pmatrix}+2Ax\\[2px]-2Ay\end{pmatrix}}\,.}
In fluid dynamics, the flow field near the origin corresponds to a stagnation point. Note that the fluid at the origin is at rest (this follows on differentiation of f(z) = z2 at z = 0). The ψ = 0 streamline is particularly interesting: it has two (or four) branches, following the coordinate axes, i.e. x = 0 and y = 0. As no fluid flows across the x-axis, it (the x-axis) may be treated as a solid boundary. It is thus possible to ignore the flow in the lower half-plane where y < 0 and to focus on the flow in the upper half-plane. With this interpretation, the flow is that of a vertically directed jet impinging on a horizontal flat plate. The flow may also be interpreted as flow into a 90 degree corner if the regions specified by (say) x, y < 0 are ignored.
==== Power laws with n = 3 ====
If n = 3, the resulting flow is a sort of hexagonal version of the n = 2 case considered above. Streamlines are given by ψ = 3x2y − y3, and the flow in this case may be interpreted as flow into a 60° corner.
==== Power laws with n = −1: doublet ====
If n = −1, the streamlines are given by
{\displaystyle \psi =-{\frac {A}{r}}\sin \theta .}
This is more easily interpreted in terms of real and imaginary components:
{\displaystyle {\begin{aligned}\psi ={\frac {-Ay}{r^{2}}}&={\frac {-Ay}{x^{2}+y^{2}}}\,,\\x^{2}+y^{2}+{\frac {Ay}{\psi }}&=0\,,\\x^{2}+\left(y+{\frac {A}{2\psi }}\right)^{2}&=\left({\frac {A}{2\psi }}\right)^{2}\,.\end{aligned}}}
Thus the streamlines are circles that are tangent to the x-axis at the origin. The circles in the upper half-plane thus flow clockwise, those in the lower half-plane flow anticlockwise. Note that the velocity components are proportional to r−2, and their values at the origin are infinite. This flow pattern is usually referred to as a doublet, or dipole, and can be interpreted as the combination of a source-sink pair of infinite strength kept an infinitesimally small distance apart. The velocity field is given by
{\displaystyle (u,v)=\left({\frac {\partial \psi }{\partial y}},-{\frac {\partial \psi }{\partial x}}\right)=\left(A{\frac {y^{2}-x^{2}}{\left(x^{2}+y^{2}\right)^{2}}},-A{\frac {2xy}{\left(x^{2}+y^{2}\right)^{2}}}\right)\,.}
or in polar coordinates:
{\displaystyle (u_{r},u_{\theta })=\left({\frac {1}{r}}{\frac {\partial \psi }{\partial \theta }},-{\frac {\partial \psi }{\partial r}}\right)=\left(-{\frac {A}{r^{2}}}\cos \theta ,-{\frac {A}{r^{2}}}\sin \theta \right)\,.}
==== Power laws with n = −2: quadrupole ====
If n = −2, the streamlines are given by
{\displaystyle \psi =-{\frac {A}{r^{2}}}\sin 2\theta \,.}
This is the flow field associated with a quadrupole.
=== Line source and sink ===
A line source or sink of strength Q (Q > 0 for a source and Q < 0 for a sink) is given by the potential
{\displaystyle w={\frac {Q}{2\pi }}\ln z}
where Q in fact is the volume flux per unit length across a surface enclosing the source or sink. The velocity field in polar coordinates is
{\displaystyle u_{r}={\frac {Q}{2\pi r}},\quad u_{\theta }=0}
i.e., a purely radial flow.
=== Line vortex ===
A line vortex of strength Γ is given by
{\displaystyle w={\frac {\Gamma }{2\pi i}}\ln z}
where Γ is the circulation around any simple closed contour enclosing the vortex. The velocity field in polar coordinates is
{\displaystyle u_{r}=0,\quad u_{\theta }={\frac {\Gamma }{2\pi r}}}
i.e., a purely azimuthal flow.
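Because complex potentials add, elementary flows can be superposed. The following sketch (with illustrative values of Q and Γ, not taken from the article) combines a line source and a line vortex at the origin and recovers the velocity numerically from dw/dz = u − iv.

```python
import cmath
import math

# Minimal sketch: superposition of a line source (strength Q) and a line vortex
# (circulation Gamma), both placed at the origin; the result is a spiral flow.
Q, Gamma = 1.0, 2.0

def w(z):
    return (Q / (2 * math.pi)) * cmath.log(z) + (Gamma / (2j * math.pi)) * cmath.log(z)

def velocity(z, h=1e-6):
    dwdz = (w(z + h) - w(z - h)) / (2 * h)   # numerical derivative of the potential
    return dwdz.real, -dwdz.imag             # since dw/dz = u - i v

# At z = 1 the source alone gives u_r = Q/(2 pi) and the vortex u_theta = Gamma/(2 pi).
print(velocity(1.0 + 0.0j))                  # approximately (0.159, 0.318)
```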
== Analysis for three-dimensional incompressible flows ==
For three-dimensional flows, a complex potential cannot be obtained.
=== Point source and sink ===
The velocity potential of a point source or sink of strength Q (Q > 0 for a source and Q < 0 for a sink) in spherical polar coordinates is given by
{\displaystyle \phi =-{\frac {Q}{4\pi r}}}
where Q in fact is the volume flux across a closed surface enclosing the source or sink. The velocity field in spherical polar coordinates is
{\displaystyle u_{r}={\frac {Q}{4\pi r^{2}}},\quad u_{\theta }=0,\quad u_{\phi }=0.}
== See also ==
Potential flow around a circular cylinder
Aerodynamic potential-flow code
Conformal mapping
Darwin drift
Flownet
Laplacian field
Laplace equation for irrotational flow
Potential theory
Stream function
Velocity potential
Helmholtz decomposition
== Notes ==
== References ==
Batchelor, G.K. (1973), An introduction to fluid dynamics, Cambridge University Press, ISBN 0-521-09817-3
Chanson, H. (2009), Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows, CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 478 pages, ISBN 978-0-415-49271-3
Lamb, H. (1994) [1932], Hydrodynamics (6th ed.), Cambridge University Press, ISBN 978-0-521-45868-9
Milne-Thomson, L.M. (1996) [1968], Theoretical hydrodynamics (5th ed.), Dover, ISBN 0-486-68970-0
== Further reading ==
Chanson, H. (2007), "Le potentiel de vitesse pour les écoulements de fluides réels: la contribution de Joseph-Louis Lagrange [Velocity potential in real fluid flows: Joseph-Louis Lagrange's contribution]", La Houille Blanche (in French), 93 (5): 127–131, Bibcode:2007LHBl...93..127C, doi:10.1051/lhb:2007072
Wehausen, J.V.; Laitone, E.V. (1960), "Surface waves", in Flügge, S.; Truesdell, C. (eds.), Encyclopedia of Physics, vol. IX, Springer Verlag, pp. 446–778, archived from the original on 2009-01-05, retrieved 2009-03-29
== External links ==
"Irrotational flow of an inviscid fluid". University of Genoa, Faculty of Engineering. Retrieved 2009-03-29.
"Conformal Maps Gallery". 3D-XplorMath. Retrieved 2009-03-29. — Java applets for exploring conformal maps
Potential Flow Visualizations - Interactive WebApps | Wikipedia/Full_potential_equation |
In computational fluid dynamics (CFD), the SIMPLE algorithm is a widely used numerical procedure to solve the Navier–Stokes equations. SIMPLE is an acronym for Semi-Implicit Method for Pressure Linked Equations.
The SIMPLE algorithm was developed by Prof. Brian Spalding and his student Suhas Patankar at Imperial College London in the early 1970s. Since then it has been extensively used by many researchers to solve different kinds of fluid flow and heat transfer problems.
Many popular books on computational fluid dynamics discuss the SIMPLE algorithm in detail.
A modified variant is the SIMPLER algorithm (SIMPLE Revised), which was introduced by Patankar in 1979.
== Algorithm ==
The algorithm is iterative. The basic steps in the solution update are as follows:
Set the boundary conditions.
Compute the gradients of velocity and pressure.
Solve the discretized momentum equation to compute the intermediate velocity field.
Compute the uncorrected mass fluxes at faces.
Solve the pressure correction equation to produce cell values of the pressure correction.
Update the pressure field:
{\displaystyle p^{k+1}=p^{k}+{\text{urf}}\cdot p^{'}}
where urf is the under-relaxation factor for pressure.
Update the boundary pressure corrections
{\displaystyle p_{b}^{'}}
.
Correct the face mass fluxes:
{\displaystyle {\dot {m}}_{f}^{k+1}={\dot {m}}_{f}^{*}+{\dot {m}}_{f}^{'}}
Correct the cell velocities:
{\displaystyle {\vec {v}}^{k+1}={\vec {v}}^{*}-{\frac {{\text{Vol}}\ \nabla p^{'}}{{\vec {a}}_{P}^{v}}}}
where
{\displaystyle {\nabla p^{'}}}
is the gradient of the pressure corrections,
{\displaystyle {{\vec {a}}_{P}^{v}}}
is the vector of central coefficients for the discretized linear system representing the velocity equation, and Vol is the cell volume.
Update density due to pressure changes.
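The sketch below is a hedged, simplified illustration of the pressure-correction idea behind steps 4–8 above (intermediate velocities, pressure-correction equation, and flux/velocity correction). It assumes a periodic 2D grid, unit density and cell size, and momentum coefficients and under-relaxation factors equal to 1, so it reduces to a basic projection step; it is not a complete SIMPLE implementation.

```python
import numpy as np

# Minimal runnable sketch of the pressure-correction step at the heart of SIMPLE.
n = 32
rng = np.random.default_rng(0)
u_star = rng.random((n, n))     # intermediate velocities, as if from a momentum solve
v_star = rng.random((n, n))

def div(u, v):                  # backward-difference divergence at cell centres
    return u - np.roll(u, 1, axis=1) + v - np.roll(v, 1, axis=0)

# Pressure-correction (Poisson) equation: lap(p') = div(u*), solved by Jacobi iteration.
rhs = div(u_star, v_star)
p = np.zeros((n, n))
for _ in range(3000):
    p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                + np.roll(p, 1, 1) + np.roll(p, -1, 1) - rhs)

# Velocity correction with the forward-difference gradient of p'.
u = u_star - (np.roll(p, -1, axis=1) - p)
v = v_star - (np.roll(p, -1, axis=0) - p)
print(np.abs(div(u_star, v_star)).max(), np.abs(div(u, v)).max())   # divergence drops
```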
== See also ==
PISO algorithm
SIMPLEC algorithm
== References == | Wikipedia/SIMPLE_algorithm |
Boundary conditions in fluid dynamics are the set of constraints to boundary value problems in computational fluid dynamics. These boundary conditions include inlet boundary conditions, outlet boundary conditions, wall boundary conditions, constant pressure boundary conditions, axisymmetric boundary conditions, symmetric boundary conditions, and periodic or cyclic boundary conditions.
Transient problems additionally require initial conditions, in which initial values of the flow variables are specified at nodes in the flow domain. Various types of boundary conditions are used in CFD for different conditions and purposes and are discussed as follows.
== Inlet boundary conditions ==
In inlet boundary conditions, the distribution of all flow variables needs to be specified at inlet boundaries, mainly the flow velocity. This type of boundary condition is common and is specified mostly where the inlet flow velocity is known.
== Outlet boundary condition ==
In outlet boundary conditions, the distribution of all flow variables needs to be specified, mainly the flow velocity. This can be thought of as the counterpart of the inlet boundary condition. This type of boundary condition is common and is specified mostly where the outlet velocity is known.
The flow attains a fully developed state, where no change occurs in the flow direction, when the outlet is located far away from geometrical disturbances. In such a region an outlet could be outlined and the gradient of all variables except pressure could be equated to zero in the flow direction.
== No-slip boundary condition ==
The most common boundary encountered in confined fluid flow problems is the wall of the conduit. The appropriate requirement is called the no-slip boundary condition, wherein the normal component of velocity is fixed at zero, and the tangential component is set equal to the velocity of the wall. It may run counter to intuition, but the no-slip condition has been firmly established in both experiment and theory, though only after decades of controversy and debate.
{\displaystyle V_{\text{normal}}=0}
{\displaystyle V_{\text{tangential}}=V_{\text{wall}}}
Heat transfer through the wall can be specified or if the walls are considered adiabatic, then heat transfer across the wall is set to zero.
{\displaystyle Q_{\text{Adiabatic Walls}}=0}
== Constant pressure boundary conditions ==
This type of boundary condition is used where boundary values of pressure are known and the exact details of the flow distribution are unknown. This includes pressure inlet and outlet conditions mainly. Typical examples that utilize this boundary condition include buoyancy driven flows, internal flows with multiple outlets, free surface flows and external flows around objects. An example is flow outlet into atmosphere where pressure is atmospheric.
== Axisymmetric boundary conditions ==
In this boundary condition, the model is axisymmetric with respect to the main axis, so that at a particular radial position r = R and axial position z = Z, each flow variable has the same value for all θ. A good example is the flow in a circular pipe where the flow axis and the pipe axis coincide.
{\displaystyle V_{r}(R,\theta ,Z)=Constant}
for all
{\displaystyle (r=R,\theta ,Z)}
== Symmetric boundary condition ==
In this boundary condition, it is assumed that the same physical processes exist on the two sides of the boundary. All the variables have the same value and gradients at the same distance from the boundary. It acts as a mirror that reflects all the flow distribution to the other side.
The conditions at a symmetric boundary are no flow across the boundary and no scalar flux across the boundary.
A good example is a pipe flow with a symmetric obstacle in the flow. The obstacle divides the upper flow and lower flow into mirrored flows.
== Periodic or cyclic boundary condition ==
A periodic or cyclic boundary condition arises from a different type of symmetry in a problem, in which a component has a pattern that is repeated in the flow distribution more than twice, thus violating the mirror-image requirements of the symmetric boundary condition. A good example would be a swept vane pump, where the marked area is repeated four times in r–θ coordinates. The cyclic-symmetric areas should have the same flow variables and distribution, and this should be satisfied in every z-slice.
== See also ==
Flow conditioning
Initial value problem
== Notes ==
== References ==
Versteeg (1995). "Chapter 9". An Introduction to Computational Fluid Dynamics The Finite Volume Method, 2/e. Longman Scientific & Technical. pp. 192–206. ISBN 0-582-21884-5. | Wikipedia/Boundary_conditions_in_fluid_dynamics |
Computational Fluid Dynamics (CFD) modeling and simulation for phase change materials (PCMs) is a technique used to analyze the performance and behavior of PCMs. CFD models have been successful in studying and analyzing air quality, natural ventilation and stratified ventilation, air flow initiated by buoyancy forces, and space temperature for systems integrated with PCMs. Simple shapes like flat plates, cylinders or annular tubes, fins, and macro- and micro-encapsulations with containers of different shapes are often modeled in CFD software for study.
Typically the CFD models include Reynolds-averaged Navier–Stokes (RANS) modeling and large eddy simulation (LES). The conservation equations of mass, momentum and energy (Navier–Stokes) are linearised, discretised, and applied to finite volumes to obtain a detailed solution for the field distributions of air pressure, velocity and temperature for indoor spaces integrated with PCMs.
== Governing Equations ==
=== Mass Equation ===
{\displaystyle {\partial \rho \over \partial t}+\nabla \cdot (\rho \mathbf {u} )=S_{m}}
where
ρ is fluid density,
t is time,
u is the flow velocity vector field
S_m is a constant.
=== Energy Equation ===
{\displaystyle {\begin{aligned}{\partial (\rho {\mathbf {H} }) \over \partial t}+{\partial \over \partial x_{j}}{(\rho *u_{j}*c_{p}*{\mathbf {T} })}={\partial \over \partial x_{j}}(\lambda \cdot {\partial {\mathbf {T} } \over \partial x_{j}})+\mathbf {S_{E}} \end{aligned}}}
where
ρ is the fluid mass density,
S_E is the source term.
=== Navier Stokes equation ===
{\displaystyle \rho \left({\frac {\partial u_{i}}{\partial t}}+u_{j}{\frac {\partial u_{i}}{\partial x_{j}}}\right)=-{\frac {\partial p}{\partial x_{i}}}+\mu {\frac {\partial ^{2}u_{i}}{\partial x_{j}\partial x_{j}}}+f_{i}}
Here f represents "other" body forces (per unit volume), such as gravity or centrifugal force. The shear stress term ∇ ⋅ T becomes μ∇2v, where ∇2 is the vector Laplacian.
=== Boussinesq eddy-viscosity approximation ===
{\displaystyle -{\overline {\upsilon _{i}^{\prime }\upsilon _{j}^{\prime }}}=2\nu _{t}S_{ij}-{\frac {2}{3}}K\delta _{ij}}
where
{\displaystyle S_{ij}}
is the mean rate of strain tensor,
{\displaystyle \nu _{t}}
is the turbulence eddy viscosity,
{\displaystyle K={\frac {1}{2}}{\overline {\upsilon _{i}'\upsilon _{i}'}}}
is the turbulence kinetic energy, and
{\displaystyle \delta _{ij}}
is the Kronecker delta.
== Assumptions ==
Commonly used assumptions are:
Incompressible fluid,
Boussinesq approximation (density is considered constant, except in the gravity forces term).
Constant thermo-physical properties (properties of solid and liquid states are assumed to be equal)
== Phase Change Model ==
Two main thermal characteristics of phase change are the enthalpy-temperature relationship and temperature hysteresis. PCMs tend to have varying enthalpy temperature relationships due to the fact that they are blends of different materials, but pure PCMs have a more localized relationship, which can be approximated by single values for the enthalpy and phase change temperature.
Hysteresis is the phenomenon which causes the PCM to melt and freeze in different temperature ranges and with different enthalpies, which results in a different temperature-enthalpy curve for melting and freezing. Hysteresis is related to the chemical and kinetic properties of the material.
The enthalpy-porosity model commonly used in commercial CFD codes assumes a linear enthalpy-temperature relationship and ignores hysteresis.[8]
The alternative is to use the enthalpy-porosity method. When used to simulate PCM sails and a PCM plate-fin unit, it produces reasonable temperature predictions in terms of global space temperature. However, there are inaccuracies in transient simulations where time-dependent PCM and local wall and air temperatures are of interest. This is overcome by the use of source terms that consider hysteresis and a varying enthalpy-temperature relationship.[9][10]
CFD-DEM models are also sometimes used. The phase motion of discrete solids or particles is obtained by the Discrete Element Method (DEM), which applies Newton's laws of motion to every particle, while the flow of the continuum fluid is described by the locally averaged Navier–Stokes equations that can be solved by traditional Computational Fluid Dynamics (CFD). CFDEMcoupling (DCS Computing GmbH) is one such open source toolbox for CFD-DEM coupling.
== Process ==
The governing equations are discretized using an explicit Finite Volume Method. The velocity-pressure coupling is resolved by adopting a Fractional Step Method. The adoption of the enthalpy method allows working with a fixed grid instead of an interface tracking method.
The momentum source term intended to model the presence of solid is only needed in the control volumes that contain solid and liquid, not in the pure solid containing volumes.
The final form of the source term coefficient (S) depends on the approximation adopted for the behavior of the flow in the “mushy zone” (where mixed solid and liquid states are present). However, in the case of a constant phase change temperature, the solid-liquid interface should be of infinitesimal width (although it cannot be thinner than one control volume width in the simulations); therefore, the formulation used for the source term is not very important in a physical sense, as long as it manages to bring the velocity to zero in mostly solid control volumes and to vanish if the volume contains pure liquid.[11]
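As a hedged illustration of the fixed-grid enthalpy approach described above (not a reproduction of any of the cited studies), the following 1D sketch melts a PCM slab heated from one side; the material properties, the isothermal enthalpy-temperature relationship and the grid are illustrative assumptions.

```python
import numpy as np

# Minimal 1D sketch of the fixed-grid enthalpy method for a melting PCM slab.
n, L = 50, 0.05                        # cells, slab thickness [m]
dx = L / n
rho, c_p, k = 800.0, 2000.0, 0.2       # density, specific heat, conductivity (assumed)
L_f, T_m = 180e3, 25.0                 # latent heat [J/kg], melting temperature [C]
T_hot, T_init = 45.0, 15.0

T = np.full(n, T_init)
H = rho * c_p * T.copy()               # volumetric enthalpy: sensible part only at start
dt = 0.4 * rho * c_p * dx**2 / k       # explicit stability limit (with margin)

def temperature(H):
    """Invert the (isothermal phase change) enthalpy-temperature relationship."""
    H_solidus = rho * c_p * T_m
    H_liquidus = H_solidus + rho * L_f
    T = np.where(H < H_solidus, H / (rho * c_p), T_m)
    return np.where(H > H_liquidus, T_m + (H - H_liquidus) / (rho * c_p), T)

for _ in range(20000):
    # hot wall on the left, insulated wall on the right (zero-gradient ghost value)
    flux = -k * np.diff(np.concatenate(([T_hot], T, [T[-1]]))) / dx
    H += dt * (flux[:-1] - flux[1:]) / dx
    T = temperature(H)

liquid_fraction = np.clip((H - rho * c_p * T_m) / (rho * L_f), 0.0, 1.0)
print(float(liquid_fraction.mean()))   # fraction of the slab that has melted
```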
== Applications ==
CFD applications for latent thermal energy storage in PCM
Various CFD codes [1-3] have been employed for the modeling and simulation of PCM systems to understand the heat transfer mechanism, the solidification and melting process, the distribution of the temperature profile and the prediction of the air flow. Various commercial packages have been coupled with the CFD analysis to assess the feasibility of evaluating the behavior of PCM-integrated systems.
CFD modeling in PCM in mobilized thermal energy storage
The heat transfer behavior of the PCM in mobilized thermal energy storage during the charging process can be successfully simulated by CFD modeling.[4] The Volume-Of-Fluid method is employed to solve for the temperature distribution in the multiphase, 2-dimensional pressure-based model. It accounts for the heat transfer mechanism, melting time, and the influence of the structure in the charging process using Fluent 12.1. The governing equations employed are the mass conservation and continuity equations.
CFD analysis on selection of geometry and type of PCM to be used
Integral, quasi-1D calculations have been reported [5], mainly for conduction-dominated problems, using CFD simulation. It was reported that out of three geometries (cubic, cylindrical and spherical), the spherical capsule will provide the maximum heat for the heat transfer fluid. It was also concluded that salt hydrate based PCMs are a better choice than organic PCMs.
CFD analysis on PCM in shell and tube latent thermal heat storage system
The systems are developed in such a manner that the phase change material is in the shell portion of the module and air flows through the tubes. Conjugate steady-state CFD heat transfer analysis has been carried out [6] to analyze the flow and temperature variation of the heat transfer fluid in the system. It paves the way for the selection and assessment of the geometrical and flow parameters and PCM solidification characteristics for the given boundary conditions.
A comparative analysis to further enhance the effectiveness of shell and tube PCM systems has also been accomplished via CFD analysis.[7] Various CFD models with different configurations, such as pins embedded on a tube with heat transfer fluid (HTF) flowing in it and PCM surrounding the tube, fins embedded instead of pins, and different configurations of fins on the tube, are analyzed by employing the ANSYS code.
== References ==
[1] N. Tay, F. Bruno, M. Belusko. Experimental validation of a CFD model for tubes in a phase change thermal energy storage system. International Journal of Heat and Mass Transfer. 55 (2012) 574–85.
[2] G. Zhou, Y. Zhang, Q. Zhang, K. Lin, H. Di. Performance of a hybrid heating system with thermal storage using shape-stabilized phase-change material plates. Applied Energy. 84 (2007) 1068–77.
[3] C. Arkar, S. Medved. Influence of accuracy of thermal property data of a phase change material on the result of a numerical model of a packed bed latent heat storage with spheres. Thermochimica Acta. 438 (2005) 192–201.
[4] A. Hesaraki, J. Yan, H. Li. CFD modeling of heat charging process in a direct-contact container: for mobilized thermal energy storage. LAP LAMBERT Academic Publishing2012.
[5] E.B. Retterstøl. Thermal energy storage for environmental energy supply. (2012).
[6] V. Antony Aroul Raj, R. Velraj. Heat transfer and pressure drop studies on a PCM-heat exchanger module for free cooling applications. International Journal of Thermal Sciences. 50 (2011) 1573–82.
[7] N. Tay, F. Bruno, M. Belusko. Comparison of pinned and finned tubes in a phase change thermal energy storage system using CFD. Applied Energy. 104 (2013) 79–86.
[8] Mehling H, Cabeza LF, Heat and cold storage with PCM. 1st Ed. Springer-Verlag Heidelberg; 2008
[9] Ye WB, Zhu DS, Wang N. Numerical simulation on phase-change thermal storage/ release in a plate-fin unit, Applied Thermal Engineering 31 (2011), pp. 3871–3884
[10] Gowreesunker BL, Tassou SA, Kolokotroni M. Improved simulation of phase change processes in applications where conduction is the dominant heat transfer mode, Energy and Buildings 47 (2012), pp. 353–359
[11] P. A. Galione et al., Numerical Simulations of Thermal Energy Storage Systems With Phase Change Materials. | Wikipedia/Computational_Fluid_Dynamics_for_Phase_Change_Materials |
Fluid motion is governed by the Navier–Stokes equations, a set of coupled and nonlinear partial differential equations derived from the basic laws of conservation of mass, momentum and energy. The unknowns are usually the flow velocity, the pressure, the density and the temperature. The analytical solution of these equations is impossible in general, hence scientists resort to laboratory experiments in such situations. The answers delivered are, however, usually qualitatively different, since dynamical and geometric similitude are difficult to enforce simultaneously between the lab experiment and the prototype. Furthermore, the design and construction of these experiments can be difficult (and costly), particularly for stratified rotating flows. Computational fluid dynamics (CFD) is an additional tool in the arsenal of scientists. In its early days CFD was often controversial, as it involved additional approximation to the governing equations and raised additional (legitimate) issues. Nowadays CFD is an established discipline alongside theoretical and experimental methods. This position is in large part due to the exponential growth of computer power which has allowed us to tackle ever larger and more complex problems.
== Discretization ==
The central process in CFD is the process of discretization, i.e. the process of taking differential equations with an infinite number of degrees of freedom and reducing them to a system with a finite number of degrees of freedom. Hence, instead of determining the solution everywhere and for all times, we will be satisfied with its calculation at a finite number of locations and at specified time intervals. The partial differential equations are then reduced to a system of algebraic equations that can be solved on a computer. Errors creep in during the discretization process. The nature and characteristics of the errors must be controlled in order to ensure that:
we are solving the correct equations (consistency property)
that the error can be decreased as we increase the number of degrees of freedom (stability and convergence).
Once these two criteria are established, the power of computing machines can be leveraged to solve the problem in a numerically reliable fashion. Various discretization schemes have been developed to cope with a variety of issues. The most notable for our purposes are: finite difference methods, finite volume methods, finite element methods, and spectral methods.
== Finite difference method ==
Finite difference replace the infinitesimal limiting process of derivative calculation:
lim
Δ
x
→
0
f
′
(
x
)
=
f
(
x
+
Δ
x
)
−
f
(
x
)
Δ
x
{\displaystyle \lim _{\Delta x\to 0}f'(x)={\frac {f(x+\Delta x)-f(x)}{\Delta x}}}
with a finite limiting process, i.e.
{\displaystyle f'(x)={\frac {f(x+\Delta x)-f(x)}{\Delta x}}+O(\Delta x)}
The term O(Δx) gives an indication of the magnitude of the error as a function of the mesh spacing. In this instance, the error is halved if the grid spacing Δx is halved, and we say that this is a first order method. Most FDMs used in practice are at least second order accurate except in very special circumstances. The finite difference method is still the most popular numerical method for the solution of PDEs because of its simplicity, efficiency and low computational cost. Its major drawback is its geometric inflexibility, which complicates its application to general complex domains. This can be alleviated by the use of either mapping techniques and/or masking to fit the computational mesh to the computational domain.
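A minimal sketch of this first-order behaviour is given below; the test function and evaluation point are illustrative choices.

```python
import numpy as np

# Minimal sketch: the one-sided difference above is first-order accurate, so halving
# the grid spacing roughly halves the error.
f, dfdx = np.sin, np.cos
x0 = 1.0
for dx in (0.1, 0.05, 0.025):
    err = abs((f(x0 + dx) - f(x0)) / dx - dfdx(x0))
    print(dx, err)        # the error is roughly proportional to dx
```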
== Finite element method ==
The finite element method was designed to deal with problems in complicated computational regions. The PDE is first recast into a variational form, which essentially forces the mean error to be small everywhere. The discretization step proceeds by dividing the computational domain into elements of triangular or rectangular shape. The solution within each element is interpolated with a polynomial of usually low order. Again, the unknowns are the solution at the collocation points. The CFD community adopted the FEM in the 1980s when reliable methods for dealing with advection-dominated problems were devised.
== Spectral method ==
Both finite element and finite difference methods are low order methods, usually of 2nd − 4th order, and have a local approximation property. By local we mean that a particular collocation point is affected by a limited number of points around it. In contrast, spectral methods have a global approximation property. The interpolation functions, either polynomials or trigonometric functions, are global in nature. Their main benefit is in the rate of convergence, which depends on the smoothness of the solution (i.e. how many continuous derivatives it admits). For an infinitely smooth solution, the error decreases exponentially, i.e. faster than algebraically. Spectral methods are mostly used in the computation of homogeneous turbulence, and require relatively simple geometries. Atmospheric models have also adopted spectral methods because of their convergence properties and the regular spherical shape of their computational domain.
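The following sketch (with an illustrative smooth periodic test function) shows the global, Fourier-based differentiation that underlies one common family of spectral methods and its rapid convergence.

```python
import numpy as np

# Minimal sketch of a Fourier spectral derivative on a periodic domain.
n = 32
x = 2.0 * np.pi * np.arange(n) / n
u = np.exp(np.sin(x))                        # smooth periodic test function
k = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
print(np.max(np.abs(du - np.cos(x) * u)))    # error decays faster than any power of 1/n
```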
== Finite volume method ==
Finite volume methods are primarily used in aerodynamics applications where strong shocks and discontinuities in the solution occur. The finite volume method solves an integral form of the governing equations, so that the local continuity property does not have to hold.
== Computational cost ==
The CPU time to solve the system of equations differs substantially from method to method. Finite differences are usually the cheapest on a per grid point basis, followed by the finite element method and the spectral method. However, a per grid point basis comparison is a little like comparing apples and oranges. Spectral methods deliver more accuracy on a per grid point basis than either FEM or FDM. The comparison is more meaningful if the question is recast as "what is the computational cost to achieve a given error tolerance?". The problem becomes one of defining the error measure, which is a complicated task in general situations.
== Forward Euler approximation ==
{\displaystyle {\frac {u^{n+1}-u^{n}}{\Delta t}}\approx \kappa u^{n}}
This equation is an explicit approximation to the original differential equation, since no information about the unknown function at the future time (n + 1)Δt has been used on the right hand side of the equation. In order to derive the error committed in the approximation we rely again on Taylor series.
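A minimal sketch of the forward Euler update for du/dt = κu is given below; the values of κ, the step size and the final time are illustrative.

```python
import math

# Minimal sketch of the explicit (forward Euler) update for du/dt = kappa * u.
kappa, dt, t_end = -1.0, 0.1, 1.0
u, t = 1.0, 0.0
while t < t_end - 1e-12:
    u += dt * kappa * u              # right-hand side evaluated at the known time level n
    t += dt
print(u, math.exp(kappa * t_end))    # 0.3487 vs the exact value 0.3679
```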
== Backward difference ==
This is an example of an implicit method, since the unknown u(n + 1) has been used in evaluating the slope of the solution on the right hand side; this is not a problem for solving for u(n + 1) in this scalar and linear case. For more complicated situations, like a nonlinear right hand side or a system of equations, a nonlinear system of equations may have to be inverted.
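A minimal sketch of the corresponding backward (implicit) Euler update for the same scalar linear problem is given below; as noted above, the unknown can be isolated directly in this case.

```python
import math

# Minimal sketch of the implicit (backward Euler) update for du/dt = kappa * u.
kappa, dt, t_end = -1.0, 0.1, 1.0
u, t = 1.0, 0.0
while t < t_end - 1e-12:
    u = u / (1.0 - dt * kappa)       # from u^{n+1} = u^n + dt * kappa * u^{n+1}
    t += dt
print(u, math.exp(kappa * t_end))    # 0.3855 vs the exact value 0.3679
```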
== References ==
Sources
Zalesak, S. T., 2005. The design of flux-corrected transport algorithms for structured grids. In: Kuzmin, D., Löhner, R., Turek, S. (Eds.), Flux-Corrected Transport. Springer
Zalesak, S. T., 1979. Fully multidimensional flux-corrected transport algorithms for fluids. Journal of Computational Physics.
Leonard, B. P., MacVean, M. K., Lock, A. P., 1995. The flux integral method for multi-dimensional convection and diffusion. Applied Mathematical Modelling.
Shchepetkin, A. F., McWilliams, J. C., 1998. Quasi-monotone advection schemes based on explicit locally adaptive dissipation. Monthly Weather Review
Jiang, C.-S., Shu, C.-W., 1996. Efficient implementation of weighed eno schemes. Journal of Computational Physics
Finlayson, B. A., 1972. The Method of Weighed Residuals and Variational Principles. Academic Press.
Durran, D. R., 1999. Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. Springer, New York.
Dukowicz, J. K., 1995. Mesh effects for rossby waves. Journal of Computational Physics
Canuto, C., Hussaini, M. Y., Quarteroni, A., Zang, T. A., 1988. Spectral Methods in Fluid Dynamics. Springer Series in Computational Physics. Springer-Verlag, New York.
Butcher, J. C., 1987. The Numerical Analysis of Ordinary Differential Equations. John Wiley and Sons Inc., NY.
Boris, J. P., Book, D. L., 1973. Flux corrected transport, i: Shasta, a fluid transport algorithm that works. Journal of Computational Physics
Citations | Wikipedia/Numerical_methods_in_fluid_mechanics |
In computational fluid dynamics, shock-capturing methods are a class of techniques for computing inviscid flows with shock waves. The computation of flow containing shock waves is an extremely difficult task because such flows result in sharp, discontinuous changes in flow variables such as pressure, temperature, density, and velocity across the shock.
== Method ==
In shock-capturing methods, the governing equations of inviscid flows (i.e. Euler equations) are cast in conservation form and any shock waves or discontinuities are computed as part of the solution. Here, no special treatment is employed to take care of the shocks themselves, which is in contrast to the shock-fitting method, where shock waves are explicitly introduced in the solution using appropriate shock relations (Rankine–Hugoniot relations). The shock waves predicted by shock-capturing methods are generally not sharp and may be smeared over several grid elements. Also, classical shock-capturing methods have the disadvantage that unphysical oscillations (Gibbs phenomenon) may develop near strong shocks.
== Euler equations ==
The Euler equations are the governing equations for inviscid flow. To implement shock-capturing methods, the conservation form of the Euler equations are used. For a flow without external heat transfer and work transfer (isoenergetic flow), the conservation form of the Euler equation in Cartesian coordinate system can be written as
{\displaystyle {\frac {\partial {\mathbf {U} }}{\partial t}}+{\frac {\partial {\mathbf {F} }}{\partial x}}+{\frac {\partial {\mathbf {G} }}{\partial y}}+{\frac {\partial {\mathbf {H} }}{\partial z}}=0}
where the vectors U, F, G, and H are given by
{\displaystyle \mathbf {U} ={\begin{bmatrix}\rho \\\rho u\\\rho v\\\rho w\\\rho e_{t}\\\end{bmatrix}},\quad \mathbf {F} ={\begin{bmatrix}\rho u\\\rho u^{2}+p\\\rho uv\\\rho uw\\(\rho e_{t}+p)u\\\end{bmatrix}},\quad \mathbf {G} ={\begin{bmatrix}\rho v\\\rho vu\\\rho v^{2}+p\\\rho vw\\(\rho e_{t}+p)v\\\end{bmatrix}},\quad \mathbf {H} ={\begin{bmatrix}\rho w\\\rho wu\\\rho wv\\\rho w^{2}+p\\(\rho e_{t}+p)w\\\end{bmatrix}}}
where
{\displaystyle e_{t}}
is the total energy (internal energy + kinetic energy + potential energy) per unit mass. That is
{\displaystyle e_{t}=e+{\frac {u^{2}+v^{2}+w^{2}}{2}}+gz}
The Euler equations may be integrated with any of the shock-capturing methods available to obtain the solution.
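As a hedged illustration (restricted to one dimension and to a classical first-order scheme, neither of which is prescribed by the article), the following sketch integrates the 1D Euler equations in conservation form with the Lax–Friedrichs method for Sod's shock-tube problem; the captured shock is smeared over several cells, as noted earlier.

```python
import numpy as np

# Minimal sketch: first-order Lax-Friedrichs shock capturing for the 1D Euler equations.
gamma, n = 1.4, 400
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

rho = np.where(x < 0.5, 1.0, 0.125)       # Sod's initial left/right states
p = np.where(x < 0.5, 1.0, 0.1)
u = np.zeros(n)
U = np.array([rho, rho * u, p / (gamma - 1) + 0.5 * rho * u**2])   # conserved variables

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

t, t_end = 0.0, 0.2
while t < t_end:
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    c = np.sqrt(gamma * p / rho)
    dt = 0.5 * dx / np.max(np.abs(u) + c)          # CFL-limited time step
    F = flux(U)
    # Lax-Friedrichs update; captured discontinuities are smeared over several cells.
    U[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2])
    t += dt

print(U[0].min(), U[0].max())    # density stays between the two initial states
```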
== Classical and modern shock capturing methods ==
From a historical point of view, shock-capturing methods can be classified into two general categories: classical methods and modern shock capturing methods (also called high-resolution schemes). Modern shock-capturing methods are generally upwind biased in contrast to classical symmetric or central discretizations. Upwind-biased differencing schemes attempt to discretize hyperbolic partial differential equations by using differencing based on the direction of the flow. On the other hand, symmetric or central schemes do not consider any information about the direction of wave propagation.
Regardless of the shock-capturing scheme used, a stable calculation in the presence of shock waves requires a certain amount of numerical dissipation, in order to avoid the formation of unphysical numerical oscillations. In the case of classical shock-capturing methods, numerical dissipation terms are usually linear and the same amount is uniformly applied at all grid points. Classical shock-capturing methods only exhibit accurate results in the case of smooth and weak shock solutions, but when strong shock waves are present in the solution, non-linear instabilities and oscillations may arise across discontinuities. Modern shock-capturing methods usually employ nonlinear numerical dissipation, where a feedback mechanism adjusts the amount of artificial dissipation added in accord with the features in the solution. Ideally, artificial numerical dissipation needs to be added only in the vicinity of shocks or other sharp features, and regions of smooth flow must be left unmodified. These schemes have proven to be stable and accurate even for problems containing strong shock waves.
Some of the well-known classical shock-capturing methods include the MacCormack method (uses a discretization scheme for the numerical solution of hyperbolic partial differential equations), Lax–Wendroff method (based on finite differences, uses a numerical method for the solution of hyperbolic partial differential equations), and Beam–Warming method. Examples of modern shock-capturing schemes include higher-order total variation diminishing (TVD) schemes first proposed by Harten, flux-corrected transport scheme introduced by Boris and Book, Monotonic Upstream-centered Schemes for Conservation Laws (MUSCL) based on Godunov approach and introduced by van Leer, various essentially non-oscillatory schemes (ENO) proposed by Harten et al., and the piecewise parabolic method (PPM) proposed by Colella and Woodward. Another important class of high-resolution schemes belongs to the approximate Riemann solvers proposed by Roe and by Osher. The schemes proposed by Jameson and Baker, where linear numerical dissipation terms depend on nonlinear switch functions, fall in between the classical and modern shock-capturing methods.
== References ==
=== Books ===
Anderson, J. D., "Modern Compressible Flow with Historical Perspective", McGraw-Hill (2004).
Hirsch, C., "Numerical Computation of Internal and External Flows", Vol. II, 2nd ed., Butterworth-Heinemann (2007).
Laney, C. B., "Computational Gasdynamics", Cambridge Univ. Press 1998).
LeVeque, R. J., "Numerical Methods for Conservation Laws", Birkhauser-Verlag (1992).
Tannehill, J. C., Anderson, D. A., and Pletcher, R. H., "Computational Fluid Dynamics and Heat Transfer", 2nd ed., Taylor & Francis (1997).
Toro, E. F., "Riemann Solvers and Numerical Methods for Fluid Dynamics", 2nd ed., Springer-Verlag (1999).
=== Technical papers ===
Boris, J. P. and Book, D. L., "Flux-Corrected Transport III. Minimal Error FCT Algorithms", J. Comput. Phys., 20, 397–431 (1976).
Colella, P. and Woodward, P., "The Piecewise parabolic Method (PPM) for Gasdynamical Simulations", J. Comput. Phys., 54, 174–201 (1984).
Godunov, S. K., "A Difference Scheme for Numerical Computation of Discontinuous Solution of Hyperbolic Equations", Mat. Sbornik, 47, 271–306 (1959).
Harten, A., "High Resolution Schemes for Hyperbolic Conservation Laws", J. Comput. Phys., 49, 357–293 (1983).
Harten, A., Engquist, B., Osher, S., and Chakravarthy, S. R., "Uniformly High Order Accurate Essentially Non-Oscillatory Schemes III", J. Comput. Phys., 71, 231–303 (1987).
Jameson, A. and Baker, T., "Solution of the Euler Equations for Complex Configurations", AIAA Paper, 83–1929 (1983).
MacCormack, R. W., "The Effect of Viscosity in Hypervelocity Impact Cratering", AIAA Paper, 69–354 (1969).
Roe, P. L., "Approximate Riemann Solvers, Parameter Vectors and Difference Schemes", J. Comput. Phys. 43, 357–372 (1981).
Shu, C.-W., Osher, S., "Efficient Implementation of Essentially Non-Oscillatory Shock Capturing Schemes", J. Comput. Phys., 77, 439–471 (1988).
van Leer, B., "Towards the Ultimate Conservative Difference Scheme V; A Second-order Sequel to Godunov's Sequel", J. Comput. Phys., 32, 101–136, (1979). | Wikipedia/Shock_capturing_methods |
A computed tomography scan (CT scan), formerly called computed axial tomography scan (CAT scan), is a medical imaging technique used to obtain detailed internal images of the body. The personnel that perform CT scans are called radiographers or radiology technologists.
CT scanners use a rotating X-ray tube and a row of detectors placed in a gantry to measure X-ray attenuations by different tissues inside the body. The multiple X-ray measurements taken from different angles are then processed on a computer using tomographic reconstruction algorithms to produce tomographic (cross-sectional) images (virtual "slices") of a body. CT scans can be used in patients with metallic implants or pacemakers, for whom magnetic resonance imaging (MRI) is contraindicated.
Since its development in the 1970s, CT scanning has proven to be a versatile imaging technique. While CT is most prominently used in medical diagnosis, it can also be used to form images of non-living objects. The 1979 Nobel Prize in Physiology or Medicine was awarded jointly to South African-American physicist Allan MacLeod Cormack and British electrical engineer Godfrey Hounsfield "for the development of computer-assisted tomography".
== Types ==
On the basis of image acquisition method and procedure, various types of scanners are available on the market.
=== Sequential CT ===
Sequential CT, also known as step-and-shoot CT, is a scanning method in which the CT table moves stepwise. The table increments to a particular location and then stops, which is followed by rotation of the X-ray tube and acquisition of a slice. The table then increments again, and another slice is taken. Because the table movement stops while each slice is taken, the overall scanning time is increased.
=== Spiral CT ===
Spinning tube, commonly called spiral CT, or helical CT, is an imaging technique in which an entire X-ray tube is spun around the central axis of the area being scanned. These are the dominant type of scanners on the market because they have been manufactured longer and offer a lower cost of production and purchase. The main limitation of this type of CT is the bulk and inertia of the equipment (X-ray tube assembly and detector array on the opposite side of the circle) which limits the speed at which the equipment can spin. Some designs use two X-ray sources and detector arrays offset by an angle, as a technique to improve temporal resolution.
=== Electron beam tomography ===
Electron beam tomography (EBT) is a specific form of CT in which a large enough X-ray tube is constructed so that only the path of the electrons, travelling between the cathode and anode of the X-ray tube, are spun using deflection coils. This type had a major advantage since sweep speeds can be much faster, allowing for less blurry imaging of moving structures, such as the heart and arteries. Fewer scanners of this design have been produced when compared with spinning tube types, mainly due to the higher cost associated with building a much larger X-ray tube and detector array and limited anatomical coverage.
=== Dual energy CT ===
Dual energy CT, also known as spectral CT, is an advancement of computed tomography in which two energies are used to create two sets of data. A dual energy CT may employ a dual source, a single source with a dual detector layer, or a single source with energy switching to acquire two different sets of data.
Dual source CT is an advanced scanner with a two X-ray tube detector system, unlike conventional single tube systems. These two detector systems are mounted on a single gantry at 90° in the same plane. Dual source CT scanners allow fast scanning with higher temporal resolution by acquiring a full CT slice in only half a rotation. Fast imaging reduces motion blurring at high heart rates and potentially allows for a shorter breath-hold time. This is particularly useful for ill patients who have difficulty holding their breath or are unable to take heart-rate-lowering medication.
Single source with energy switching is another mode of dual energy CT in which a single tube is operated at two different energies by switching the energies frequently.
=== CT perfusion imaging ===
CT perfusion imaging is a specific form of CT to assess flow through blood vessels whilst injecting a contrast agent. Blood flow, blood transit time, and organ blood volume, can all be calculated with reasonable sensitivity and specificity. This type of CT may be used on the heart, although sensitivity and specificity for detecting abnormalities are still lower than for other forms of CT. This may also be used on the brain, where CT perfusion imaging can often detect poor brain perfusion well before it is detected using a conventional spiral CT scan. This is better for stroke diagnosis than other CT types.
=== PET CT ===
Positron emission tomography–computed tomography is a hybrid CT modality which combines, in a single gantry, a positron emission tomography (PET) scanner and an X-ray computed tomography (CT) scanner, to acquire sequential images from both devices in the same session, which are combined into a single superposed (co-registered) image. Thus, functional imaging obtained by PET, which depicts the spatial distribution of metabolic or biochemical activity in the body can be more precisely aligned or correlated with anatomic imaging obtained by CT scanning.
PET-CT gives both anatomical and functional details of an organ under examination and is helpful in detecting different types of cancer.
== Medical use ==
Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement conventional X-ray imaging and medical ultrasonography. It has more recently been used for preventive medicine or screening for disease, for example, CT colonography for people with a high risk of colon cancer, or full-motion heart scans for people with a high risk of heart disease. Several institutions offer full-body scans for the general population although this practice goes against the advice and official position of many professional organizations in the field primarily due to the radiation dose applied.
The use of CT scans has increased dramatically over the last two decades in many countries. An estimated 72 million scans were performed in the United States in 2007 and more than 80 million in 2015.
=== Head ===
CT scanning of the head is typically used to detect infarction (stroke), tumors, calcifications, haemorrhage, and bone trauma. Of the above, hypodense (dark) structures can indicate edema and infarction, hyperdense (bright) structures indicate calcifications and haemorrhage and bone trauma can be seen as disjunction in bone windows. Tumors can be detected by the swelling and anatomical distortion they cause, or by surrounding edema. CT scanning of the head is also used in CT-guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions using a device known as the N-localizer.
=== Neck ===
Contrast CT is generally the initial study of choice for neck masses in adults. CT of the thyroid plays an important role in the evaluation of thyroid cancer. CT scans often find thyroid abnormalities incidentally, and so CT is often the modality by which such abnormalities are first investigated.
=== Lungs ===
A CT scan can be used for detecting both acute and chronic changes in the lung parenchyma, the tissue of the lungs. It is particularly relevant here because normal two-dimensional X-rays do not show such defects. A variety of techniques are used, depending on the suspected abnormality. For evaluation of chronic interstitial processes such as emphysema and fibrosis, thin sections with high spatial frequency reconstructions are used; often scans are performed both on inspiration and expiration. This special technique is called high-resolution CT, which produces a sampling of the lung rather than continuous images.
Bronchial wall thickening can be seen on lung CTs and generally (but not always) implies inflammation of the bronchi.
An incidentally found nodule in the absence of symptoms (sometimes referred to as an incidentaloma) may raise concerns that it might represent a tumor, either benign or malignant. Perhaps persuaded by fear, patients and doctors sometimes agree to an intensive schedule of CT scans, sometimes up to every three months and beyond the recommended guidelines, in an attempt to do surveillance on the nodules. However, established guidelines advise that patients without a prior history of cancer and whose solid nodules have not grown over a two-year period are unlikely to have any malignant cancer. For this reason, and because no research provides supporting evidence that intensive surveillance gives better outcomes, and because of risks associated with having CT scans, patients should not receive CT screening in excess of those recommended by established guidelines.
=== Angiography ===
Computed tomography angiography (CTA) is a type of contrast CT to visualize the arteries and veins throughout the body. This ranges from arteries serving the brain to those bringing blood to the lungs, kidneys, arms and legs. An example of this type of exam is CT pulmonary angiogram (CTPA) used to diagnose pulmonary embolism (PE). It employs computed tomography and an iodine-based contrast agent to obtain an image of the pulmonary arteries. CT scans can reduce the risk of angiography by providing clinicians with more information about the positioning and number of clots prior to the procedure.
=== Cardiac ===
A CT scan of the heart is performed to gain knowledge about cardiac or coronary anatomy. Traditionally, cardiac CT scans are used to detect, diagnose, or follow up coronary artery disease. More recently CT has played a key role in the fast-evolving field of transcatheter structural heart interventions, more specifically in the transcatheter repair and replacement of heart valves.
The main forms of cardiac CT scanning are:
Coronary CT angiography (CCTA): the use of CT to assess the coronary arteries of the heart. The subject receives an intravenous injection of radiocontrast, and then the heart is scanned using a high-speed CT scanner, allowing radiologists to assess the extent of occlusion in the coronary arteries, usually to diagnose coronary artery disease.
Coronary CT calcium scan: also used for the assessment of severity of coronary artery disease. Specifically, it looks for calcium deposits in the coronary arteries that can narrow arteries and increase the risk of a heart attack. A typical coronary CT calcium scan is done without the use of radiocontrast, but it can possibly be done from contrast-enhanced images as well.
To better visualize the anatomy, post-processing of the images is common. Most common are multiplanar reconstructions (MPR) and volume rendering. For more complex anatomies and procedures, such as heart valve interventions, a true 3D reconstruction or a 3D print is created based on these CT images to gain a deeper understanding.
=== Abdomen and pelvis ===
CT is an accurate technique for diagnosis of abdominal diseases like Crohn's disease, GIT bleeding, and diagnosis and staging of cancer, as well as follow-up after cancer treatment to assess response. It is commonly used to investigate acute abdominal pain.
Non-contrast-enhanced CT scans are the gold standard for diagnosing kidney stone disease. They allow clinicians to estimate the size, volume, and density of stones, helping to guide further treatment; with size being especially important in predicting the time to spontaneous passage of a stone.
=== Axial skeleton and extremities ===
For the axial skeleton and extremities, CT is often used to image complex fractures, especially ones around joints, because of its ability to reconstruct the area of interest in multiple planes. Fractures, ligamentous injuries, and dislocations can easily be recognized with a 0.2 mm resolution. With modern dual-energy CT scanners, new areas of use have been established, such as aiding in the diagnosis of gout.
=== Biomechanical use ===
CT is used in biomechanics to quickly reveal the geometry, anatomy, density and elastic moduli of biological tissues.
== Other uses ==
=== Industrial use ===
Industrial CT scanning (industrial computed tomography) is a process which uses X-ray equipment to produce 3D representations of components both externally and internally. Industrial CT scanning has been used in many areas of industry for internal inspection of components. Some of the key uses for CT scanning have been flaw detection, failure analysis, metrology, assembly analysis, image-based finite element methods and reverse engineering applications. CT scanning is also employed in the imaging and conservation of museum artifacts.
=== Aviation security ===
CT scanning has also found an application in transport security (predominantly airport security), where it is currently used in a materials analysis context for explosives detection (CTX explosive-detection devices) and is also under consideration for automated baggage/parcel security scanning using computer vision based object recognition algorithms that target the detection of specific threat items based on 3D appearance (e.g. guns, knives, liquid containers). Its use in airport security, pioneered at Shannon Airport in March 2022, ended the ban on liquids over 100 ml there; Heathrow Airport planned a full roll-out on 1 December 2022, and the TSA spent $781.2 million on an order for over 1,000 scanners, expected to go live in the summer.
=== Geological use ===
X-ray CT is used in geological studies to quickly reveal materials inside a drill core. Dense minerals such as pyrite and barite appear brighter, and less dense components such as clay appear dull, in CT images.
=== Paleontological use ===
Traditional methods of studying fossils are often destructive, such as the use of thin sections and physical preparation. X-ray CT is used in paleontology to non-destructively visualize fossils in 3D. This has many advantages: for example, fragile structures that might otherwise never be studied can be examined, and models of fossils can be freely rotated and inspected in virtual 3D space without damaging the specimen.
=== Cultural heritage use ===
X-ray CT and micro-CT can also be used for the conservation and preservation of objects of cultural heritage. For many fragile objects, direct research and observation can be damaging and can degrade the object over time. Using CT scans, conservators and researchers are able to determine the material composition of the objects they are exploring, such as the position of ink along the layers of a scroll, without any additional harm. These scans have been optimal for research focused on the workings of the Antikythera mechanism or the text hidden inside the charred outer layers of the En-Gedi Scroll. However, they are not optimal for every object subject to these kinds of research questions, as there are certain artifacts like the Herculaneum papyri in which the material composition has very little variation along the inside of the object. After scanning these objects, computational methods can be employed to examine the insides of these objects, as was the case with the virtual unwrapping of the En-Gedi scroll and the Herculaneum papyri. Micro-CT has also proved useful for analyzing more recent artifacts such as still-sealed historic correspondence that employed the technique of letterlocking (complex folding and cuts) that provided a "tamper-evident locking mechanism". Further examples of use cases in archaeology are imaging the contents of sarcophagi or ceramics.
Recently, CWI in Amsterdam has collaborated with the Rijksmuseum to investigate the interior details of art objects within a framework called IntACT.
=== Microorganism research ===
Various types of fungi can degrade wood to different degrees. One Belgian research group used three-dimensional X-ray CT with sub-micron resolution to show that fungi can penetrate micropores of 0.6 μm under certain conditions.
=== Timber sawmill ===
Sawmills use industrial CT scanners to detect internal defects such as knots, in order to improve the total value of timber production. Many sawmills plan to incorporate this detection tool to improve productivity in the long run, although the initial investment cost is high.
== Interpretation of results ==
=== Presentation ===
The result of a CT scan is a volume of voxels, which may be presented to a human observer by various methods, which broadly fit into the following categories:
Slices (of varying thickness). Thin slice is generally regarded as planes representing a thickness of less than 3 mm. Thick slice is generally regarded as planes representing a thickness between 3 mm and 5 mm.
Projection, including maximum intensity projection and average intensity projection
Volume rendering (VR)
Technically, all volume renderings become projections when viewed on a 2-dimensional display, making the distinction between projections and volume renderings somewhat vague. Typical volume-rendering models combine, for example, coloring and shading to create realistic and readable representations.
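As a rough illustration of the projection methods listed above, a maximum or average intensity projection can be computed by collapsing the voxel volume along one axis. The following is a minimal sketch assuming NumPy and a hypothetical volume array ordered (slice, row, column); clinical software performs the same operation along arbitrary, often oblique, directions.

```python
import numpy as np

# Hypothetical CT volume indexed as (slice, row, column); values are in HU.
rng = np.random.default_rng(0)
volume = rng.normal(loc=0.0, scale=200.0, size=(120, 256, 256))

mip = volume.max(axis=0)     # maximum intensity projection along the slice axis
aip = volume.mean(axis=0)    # average intensity projection along the same axis
print(mip.shape, aip.shape)  # both (256, 256)
```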
Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients.
==== Grayscale ====
Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while steel can completely block the X-ray beam and is therefore responsible for the well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which results in data values that exceed the dynamic range of the processing electronics.
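For reference, the Hounsfield scale described above is conventionally defined from the measured linear attenuation coefficient of a voxel relative to those of water and air, which is why water sits at 0 HU and air at −1,000 HU:

```latex
\mathrm{HU} = 1000 \times \frac{\mu_{\text{tissue}} - \mu_{\text{water}}}{\mu_{\text{water}} - \mu_{\text{air}}}
```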
==== Windowing ====
CT data sets have a very high dynamic range which must be reduced for display or printing. This is typically done via a process of "windowing", which maps a range (the "window") of pixel values to a grayscale ramp. For example, CT images of the brain are commonly viewed with a window extending from 0 HU to 80 HU. Pixel values of 0 and lower are displayed as black; values of 80 and higher are displayed as white; values within the window are displayed as a gray intensity proportional to position within the window. The window used for display must be matched to the X-ray density of the object of interest, in order to optimize the visible detail. Window width and window level parameters are used to control the windowing of a scan.
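The mapping described above is a simple linear ramp and can be sketched in a few lines. The snippet below is an illustrative example only: the function name and the toy input are assumptions, and a level of 40 HU with a width of 80 HU corresponds to the 0–80 HU brain window mentioned above. Vendor viewers implement windowing with additional conventions such as non-linear lookup tables.

```python
import numpy as np

def apply_window(hu_image, level=40.0, width=80.0):
    """Map Hounsfield units to 8-bit grayscale using a window level/width.

    Values below (level - width/2) are shown as black, values above
    (level + width/2) as white, and values in between are scaled linearly.
    """
    lo = level - width / 2.0
    hi = level + width / 2.0
    clipped = np.clip(hu_image, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Toy example: air, water, a grey-matter-like value, and a bone-like value.
hu = np.array([[-1000.0, 0.0], [40.0, 2000.0]])
print(apply_window(hu))  # [[  0   0] [127 255]]
```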
==== Multiplanar reconstruction and projections ====
Multiplanar reconstruction (MPR) is the process of converting data from one anatomical plane (usually transverse) to other planes. It can be used for thin slices as well as projections. Multiplanar reconstruction is possible as present CT scanners provide almost isotropic resolution.
MPR is used in almost every scan; the spine is frequently examined with it. An image of the spine in the axial plane can only show one vertebral bone at a time and cannot show its relation to the other vertebral bones. By reformatting the data in other planes, visualization of the relative positions can be achieved in the sagittal and coronal planes.
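With near-isotropic voxels, the orthogonal reformats described above amount to slicing the reconstructed volume along different axes; real MPR tools additionally interpolate for anisotropic data and for oblique planes. A minimal sketch, assuming a NumPy volume ordered (axial slice, row, column):

```python
import numpy as np

# Hypothetical reconstructed volume with near-isotropic voxels.
volume = np.zeros((200, 512, 512), dtype=np.int16)

axial = volume[100, :, :]     # one transverse (axial) slice
coronal = volume[:, 256, :]   # coronal reformat: fix the row index
sagittal = volume[:, :, 256]  # sagittal reformat: fix the column index
print(axial.shape, coronal.shape, sagittal.shape)
```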
New software allows the reconstruction of data in non-orthogonal (oblique) planes, which helps in the visualization of organs that do not lie in orthogonal planes. It is better suited for visualization of the anatomical structure of the bronchi, as they do not lie orthogonal to the direction of the scan.
Curved-plane reconstruction (or curved planar reformation = CPR) is performed mainly for the evaluation of vessels. This type of reconstruction helps to straighten the bends in a vessel, thereby helping to visualize a whole vessel in a single image or in multiple images. After a vessel has been "straightened", measurements such as cross-sectional area and length can be made. This is helpful in preoperative assessment of a surgical procedure.
For 2D projections used in radiation therapy for quality assurance and planning of external beam radiotherapy, including digitally reconstructed radiographs, see Beam's eye view.
==== Volume rendering ====
A threshold value of radiodensity is set by the operator (e.g., a level that corresponds to bone). With the help of edge detection image processing algorithms a 3D model can be constructed from the initial data and displayed on screen. Various thresholds can be used to get multiple models, each anatomical component such as muscle, bone and cartilage can be differentiated on the basis of different colours given to them. However, this mode of operation cannot show interior structures.
Surface rendering is a limited technique as it displays only the surfaces that meet a particular threshold density and that face the viewer. In volume rendering, by contrast, transparency, colours and shading are used, which makes it easier to present a volume in a single image. For example, pelvic bones could be displayed as semi-transparent, so that, even at an oblique viewing angle, one part of the image does not hide another.
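The threshold-based surface extraction described above is often implemented with an iso-surface algorithm such as marching cubes. The sketch below uses scikit-image on a synthetic "bone in air" volume with a +400 HU threshold; the array, threshold, and library choice are illustrative assumptions, not a description of any particular scanner's renderer.

```python
import numpy as np
from skimage import measure  # requires scikit-image

# Synthetic volume: a bone-density sphere (+1000 HU) inside air (-1000 HU).
z, y, x = np.ogrid[:64, :64, :64]
sphere = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2
volume = np.where(sphere, 1000.0, -1000.0)

# Extract the iso-surface at a bone-like threshold of +400 HU.
verts, faces, normals, values = measure.marching_cubes(volume, level=400.0)
print(verts.shape, faces.shape)  # vertex coordinates and triangle indices
```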
=== Image quality ===
==== Dose versus image quality ====
An important issue within radiology today is how to reduce the radiation dose during CT examinations without compromising the image quality. In general, higher radiation doses result in higher-resolution images, while lower doses lead to increased image noise and unsharp images. However, increased dosage increases the adverse side effects, including the risk of radiation-induced cancer – a four-phase abdominal CT gives the same radiation dose as 300 chest X-rays. Several methods exist that can reduce the exposure to ionizing radiation during a CT scan.
New software technology can significantly reduce the required radiation dose. New iterative tomographic reconstruction algorithms (e.g., iterative Sparse Asymptotic Minimum Variance) could offer super-resolution without requiring higher radiation dose.
The examination can be individualized, with the radiation dose adjusted to the body type and the organ examined; different body types and organs require different amounts of radiation.
Higher resolution is not always necessary, for example in the detection of small pulmonary masses.
==== Artifacts ====
Although images produced by CT are generally faithful representations of the scanned volume, the technique is susceptible to a number of artifacts, such as the following:
Streak artifact
Streaks are often seen around materials that block most X-rays, such as metal or bone. Numerous factors contribute to these streaks: under sampling, photon starvation, motion, beam hardening, and Compton scatter. This type of artifact commonly occurs in the posterior fossa of the brain, or if there are metal implants. The streaks can be reduced using newer reconstruction techniques. Approaches such as metal artifact reduction (MAR) can also reduce this artifact. MAR techniques include spectral imaging, where CT images are taken with photons of different energy levels, and then synthesized into monochromatic images with special software such as GSI (Gemstone Spectral Imaging).
Partial volume effect
This appears as "blurring" of edges. It is due to the scanner being unable to differentiate between a small amount of high-density material (e.g., bone) and a larger amount of lower density (e.g., cartilage). The reconstruction assumes that the X-ray attenuation within each voxel is homogeneous; this may not be the case at sharp edges. This is most commonly seen in the z-direction (craniocaudal direction), due to the conventional use of highly anisotropic voxels, which have a much lower out-of-plane resolution, than in-plane resolution. This can be partially overcome by scanning using thinner slices, or an isotropic acquisition on a modern scanner.
Ring artifact
Probably the most common mechanical artifact, it appears as one or many "rings" within an image. Rings are usually caused by variations in the response of individual elements in a two-dimensional X-ray detector due to defect or miscalibration. Ring artifacts can largely be reduced by intensity normalization, also referred to as flat field correction (a brief sketch of which appears after this list of artifacts). Remaining rings can be suppressed by a transformation to polar space, where they become linear stripes. A comparative evaluation of ring artefact reduction on X-ray tomography images showed that the method of Sijbers and Postnov can effectively suppress ring artefacts.
Noise
This appears as grain on the image and is caused by a low signal to noise ratio. This occurs more commonly when a thin slice thickness is used. It can also occur when the power supplied to the X-ray tube is insufficient to penetrate the anatomy.
Windmill
Streaking appearances can occur when the detectors intersect the reconstruction plane. This can be reduced with filters or a reduction in pitch.
Beam hardening
This can give a "cupped appearance" when grayscale is visualized as height. It occurs because conventional sources, like X-ray tubes emit a polychromatic spectrum. Photons of higher photon energy levels are typically attenuated less. Because of this, the mean energy of the spectrum increases when passing the object, often described as getting "harder". This leads to an effect increasingly underestimating material thickness, if not corrected. Many algorithms exist to correct for this artifact. They can be divided into mono- and multi-material methods.
== Advantages ==
CT scanning has several advantages over traditional two-dimensional medical radiography. First, CT eliminates the superimposition of images of structures outside the area of interest. Second, CT scans have greater image resolution, enabling examination of finer details. CT can distinguish between tissues that differ in radiographic density by 1% or less. Third, CT scanning enables multiplanar reformatted imaging: scan data can be visualized in the transverse (or axial), coronal, or sagittal plane, depending on the diagnostic task.
The improved resolution of CT has permitted the development of new investigations. For example, CT angiography avoids the invasive insertion of a catheter. CT scanning can perform a virtual colonoscopy with greater accuracy and less discomfort for the patient than a traditional colonoscopy. Virtual colonography is far more accurate than a barium enema for detection of tumors and uses a lower radiation dose.
CT is a moderate-to-high radiation diagnostic technique. The radiation dose for a particular examination depends on multiple factors: volume scanned, patient build, number and type of scan protocol, and desired resolution and image quality. Two helical CT scanning parameters, tube current and pitch, can be adjusted easily and have a profound effect on radiation. CT scanning is more accurate than two-dimensional radiographs in evaluating anterior interbody fusion, although they may still over-read the extent of fusion.
== Adverse effects ==
=== Cancer ===
The radiation used in CT scans can damage body cells, including DNA molecules, which can lead to radiation-induced cancer. The radiation doses received from CT scans are variable. Compared to the lowest-dose X-ray techniques, CT scans can have 100 to 1,000 times the dose of conventional X-rays. However, a lumbar spine X-ray has a similar dose to a head CT. Articles in the media often exaggerate the relative dose of CT by comparing the lowest-dose X-ray techniques (chest X-ray) with the highest-dose CT techniques. In general, a routine abdominal CT has a radiation dose similar to three years of average background radiation.
Large scale population-based studies have consistently demonstrated that low dose radiation from CT scans has impacts on cancer incidence in a variety of cancers. For example, in a large population-based Australian cohort it was found that up to 3.7% of brain cancers were caused by CT scan radiation. Some experts project that in the future, between three and five percent of all cancers would result from medical imaging. An Australian study of 10.9 million people reported that the increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. In this group, one in every 1,800 CT scans was followed by an excess cancer. If the lifetime risk of developing cancer is 40% then the absolute risk rises to 40.05% after a CT. The risks of CT scan radiation are especially important in patients undergoing recurrent CT scans within a short time span of one to five years.
Some experts note that CT scans are known to be "overused," and "there is distressingly little evidence of better health outcomes associated with the current high rate of scans." On the other hand, a recent paper analyzing the data of patients who received high cumulative doses showed a high degree of appropriate use. This creates an important issue of cancer risk to these patients. Moreover, a highly significant finding that was previously unreported is that some patients received >100 mSv dose from CT scans in a single day, which counteracts existing criticisms some investigators may have on the effects of protracted versus acute exposure.
There are contrarian views and the debate is ongoing. Some studies have shown that publications indicating an increased risk of cancer from typical doses of body CT scans are plagued with serious methodological limitations and several highly improbable results, concluding that no evidence indicates such low doses cause any long-term harm.
One study estimated that as many as 0.4% of cancers in the United States resulted from CT scans, and that this may have increased to as much as 1.5 to 2% based on the rate of CT use in 2007. Others dispute this estimate, as there is no consensus that the low levels of radiation used in CT scans cause damage. Lower radiation doses are used in many cases, such as in the investigation of renal colic.
A person's age plays a significant role in the subsequent risk of cancer. The estimated lifetime cancer mortality risk from an abdominal CT in a one-year-old is 0.1%, or 1 in 1,000 scans. The risk for someone who is 40 years old is half that of someone who is 20 years old, with substantially less risk in the elderly. The International Commission on Radiological Protection estimates that the risk to a fetus being exposed to 10 mGy (a unit of radiation exposure) increases the rate of cancer before 20 years of age from 0.03% to 0.04% (for reference, a CT pulmonary angiogram exposes a fetus to 4 mGy). A 2012 review did not find an association between medical radiation and cancer risk in children, noting however the existence of limitations in the evidence on which the review is based. CT scans can be performed with different settings for lower exposure in children, with most manufacturers of CT scanners as of 2007 having this function built in. Furthermore, certain conditions can require children to be exposed to multiple CT scans.
Current recommendations are to inform patients of the risks of CT scanning. However, employees of imaging centers tend not to communicate such risks unless patients ask.
=== Contrast reactions ===
In the United States half of CT scans are contrast CTs using intravenously injected radiocontrast agents. The most common reactions from these agents are mild, including nausea, vomiting, and an itching rash. Severe life-threatening reactions may rarely occur. Overall, reactions occur in 1 to 3% of people with nonionic contrast and 4 to 12% of people with ionic contrast. Skin rashes may appear within a week in about 3% of people.
The old radiocontrast agents caused anaphylaxis in 1% of cases while the newer, low-osmolar agents cause reactions in 0.01–0.04% of cases. Death occurs in about 2 to 30 people per 1,000,000 administrations, with newer agents being safer.
There is a higher risk of mortality in those who are female, elderly or in poor health, usually secondary to either anaphylaxis or acute kidney injury.
The contrast agent may induce contrast-induced nephropathy. This occurs in 2 to 7% of people who receive these agents, with greater risk in those who have preexisting kidney failure, preexisting diabetes, or reduced intravascular volume. People with mild kidney impairment are usually advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT. Those with severe kidney failure requiring dialysis require less strict precautions, as their kidneys have so little function remaining that any further damage would not be noticeable and the dialysis will remove the contrast agent; it is normally recommended, however, to arrange dialysis as soon as possible following contrast administration to minimize any adverse effects of the contrast.
In addition to the use of intravenous contrast, orally administered contrast agents are frequently used when examining the abdomen. These are frequently the same as the intravenous contrast agents, merely diluted to approximately 10% of the concentration. However, oral alternatives to iodinated contrast exist, such as very dilute (0.5–1% w/v) barium sulfate suspensions. Dilute barium sulfate has the advantage that it does not cause allergic-type reactions or kidney failure, but cannot be used in patients with suspected bowel perforation or suspected bowel injury, as leakage of barium sulfate from damaged bowel can cause fatal peritonitis.
Side effects from contrast agents, administered intravenously in some CT scans, might impair kidney performance in patients with kidney disease, although this risk is now believed to be lower than previously thought.
=== Scan dose ===
The table reports average radiation exposures; however, there can be a wide variation in radiation doses between similar scan types, where the highest dose could be as much as 22 times higher than the lowest dose. A typical plain film X-ray involves radiation dose of 0.01 to 0.15 mGy, while a typical CT can involve 10–20 mGy for specific organs, and can go up to 80 mGy for certain specialized CT scans.
For purposes of comparison, the world average dose rate from naturally occurring sources of background radiation is 2.4 mSv per year, equal for practical purposes in this application to 2.4 mGy per year. While there is some variation, most people (99%) received less than 7 mSv per year as background radiation. Medical imaging as of 2007 accounted for half of the radiation exposure of those in the United States with CT scans making up two thirds of this amount. In the United Kingdom it accounts for 15% of radiation exposure. The average radiation dose from medical sources is ≈0.6 mSv per person globally as of 2007. Those in the nuclear industry in the United States are limited to doses of 50 mSv a year and 100 mSv every 5 years.
Lead is the main material used by radiography personnel for shielding against scattered X-rays.
==== Radiation dose units ====
The radiation dose reported in the gray or mGy unit is proportional to the amount of energy that the irradiated body part is expected to absorb, and the physical effect (such as DNA double strand breaks) on the cells' chemical bonds by X-ray radiation is proportional to that energy.
The sievert unit is used in the report of the effective dose. The sievert unit, in the context of CT scans, does not correspond to the actual radiation dose that the scanned body part absorbs but to another radiation dose of another scenario, the whole body absorbing the other radiation dose and the other radiation dose being of a magnitude, estimated to have the same probability to induce cancer as the CT scan. Thus, as is shown in the table above, the actual radiation that is absorbed by a scanned body part is often much larger than the effective dose suggests. A specific measure, termed the computed tomography dose index (CTDI), is commonly used as an estimate of the radiation absorbed dose for tissue within the scan region, and is automatically computed by medical CT scanners.
The equivalent dose is the effective dose of a case, in which the whole body would actually absorb the same radiation dose, and the sievert unit is used in its report. In the case of non-uniform radiation, or radiation given to only part of the body, which is common for CT examinations, using the local equivalent dose alone would overstate the biological risks to the entire organism.
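In the ICRP formalism underlying these units, the effective dose is a weighted sum of the equivalent doses to individual tissues, using tissue weighting factors that sum to one; roughly:

```latex
E = \sum_{T} w_{T}\, H_{T}, \qquad \sum_{T} w_{T} = 1
```

Here H_T is the equivalent dose to tissue T and w_T is its tissue weighting factor, which is why a dose concentrated in part of the body corresponds to a smaller effective dose than the same dose delivered to the whole body.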
==== Effects of radiation ====
Most adverse health effects of radiation exposure may be grouped in two general categories:
deterministic effects (harmful tissue reactions) due in large part to the killing/malfunction of cells following high doses;
stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or one in 2,000.
Because of increased susceptibility of fetuses to radiation exposure, the radiation dosage of a CT scan is an important consideration in the choice of medical imaging in pregnancy.
==== Excess doses ====
In October 2009, the US Food and Drug Administration (FDA) initiated an investigation of brain perfusion CT (PCT) scans, based on radiation burns caused by incorrect settings at one particular facility for this particular type of CT scan. Over 200 patients were exposed to radiation at approximately eight times the expected dose over an 18-month period; over 40% of them lost patches of hair. This event prompted a call for increased CT quality assurance programs. It was noted that "while unnecessary radiation exposure should be avoided, a medically needed CT scan obtained with appropriate acquisition parameters has benefits that outweigh the radiation risks." Similar problems have been reported at other centers. These incidents are believed to be due to human error.
== Procedure ==
The CT scan procedure varies according to the type of study and the organ being imaged. The patient lies on the CT table, and the table is centered according to the body part. An IV line is established in the case of contrast-enhanced CT. After the proper amount and rate of contrast are selected on the pressure injector, a scout image is taken to localize and plan the scan. Once the plan is selected, the contrast is given. The raw data are processed according to the study, and proper windowing is applied to make the scans easy to diagnose.
=== Preparation ===
Patient preparation may vary according to the type of scan. The general patient preparation includes:
Signing the informed consent.
Removal of metallic objects and jewelry from the region of interest.
Changing to the hospital gown according to hospital protocol.
Checking of kidney function, especially creatinine and urea levels (in case of CECT).
== Mechanism ==
Computed tomography operates by using an X-ray generator that rotates around the object; X-ray detectors are positioned on the opposite side of the circle from the X-ray source. As the X-rays pass through the patient, they are attenuated differently by various tissues according to the tissue density. A visual representation of the raw data obtained is called a sinogram, yet it is not sufficient for interpretation. Once the scan data has been acquired, the data must be processed using a form of tomographic reconstruction, which produces a series of cross-sectional images. These cross-sectional images are made up of small units of pixels or voxels.
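As a toy illustration of the sinogram-to-image step described above, the snippet below simulates projections of a standard test phantom and reconstructs it with filtered back projection. It assumes a recent scikit-image installation and is only a didactic stand-in for the proprietary (often iterative) reconstruction pipelines used by clinical scanners.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                      # synthetic test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=theta)               # simulated projection data
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.3f}")
```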
Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit.
Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU or more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while steel can completely extinguish the X-ray beam and is therefore responsible for the well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which results in data values that exceed the dynamic range of the processing electronics. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients.
Initially, the images generated in CT scans were in the transverse (axial) anatomical plane, perpendicular to the long axis of the body. Modern scanners allow the scan data to be reformatted as images in other planes. Digital geometry processing can generate a three-dimensional image of an object inside the body from a series of two-dimensional radiographic images taken by rotation around a fixed axis. These cross-sectional images are widely used for medical diagnosis and therapy.
=== Contrast ===
Contrast media used for X-ray CT, as well as for plain film X-ray, are called radiocontrasts. Radiocontrasts for CT are, in general, iodine-based. This is useful to highlight structures such as blood vessels that otherwise would be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues. Often, images are taken both with and without radiocontrast.
== History ==
The history of X-ray computed tomography goes back to at least 1917 with the mathematical theory of the Radon transform. In October 1963, William H. Oldendorf received a U.S. patent for a "radiant energy apparatus for investigating selected areas of interior objects obscured by dense material". The first commercially viable CT scanner was invented by Godfrey Hounsfield in 1972.
It is often claimed that revenues from the sales of The Beatles' records in the 1960s helped fund the development of the first CT scanner at EMI. The first production X-ray CT machines were in fact called EMI scanners.
=== Etymology ===
The word tomography is derived from the Greek tome 'slice' and graphein 'to write'. Computed tomography was originally known as the "EMI scan" as it was developed in the early 1970s at a research branch of EMI, a company best known today for its music and recording business. It was later known as computed axial tomography (CAT or CT scan) and body section röntgenography.
The term CAT scan is no longer in technical use, because current CT scans enable multiplanar reconstructions. This makes CT scan the most appropriate term, which is used by radiologists in common vernacular as well as in textbooks and scientific papers.
In Medical Subject Headings (MeSH), computed axial tomography was used from 1977 to 1979, but the current indexing explicitly includes X-ray in the title.
The term sinogram was introduced by Paul Edholm and Bertil Jacobson in 1975.
== Society and culture ==
=== Campaigns ===
In response to increased concern by the public and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently Campaign which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation safety practices available on pediatric patients. This initiative has been endorsed and applied by a growing list of various professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in Radiology.
Following upon the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population called Image Wisely.
The World Health Organization and International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose.
=== Prevalence ===
Use of CT has increased dramatically over the last two decades. An estimated 72 million scans were performed in the United States in 2007, accounting for close to half of the total per-capita dose rate from radiologic and nuclear medicine procedures. Of the CT scans, six to eleven percent are done in children, an increase of seven to eightfold from 1980. Similar increases have been seen in Europe and Asia. In Calgary, Canada, 12.1% of people who present to the emergency with an urgent complaint received a CT scan, most commonly either of the head or of the abdomen. The percentage who received CT, however, varied markedly by the emergency physician who saw them from 1.8% to 25%. In the emergency department in the United States, CT or MRI imaging is done in 15% of people who present with injuries as of 2007 (up from 6% in 1998).
The increased use of CT scans has been the greatest in two fields: screening of adults (screening CT of the lung in smokers, virtual colonoscopy, CT cardiac screening, and whole-body CT in asymptomatic patients) and CT imaging of children. Shortening of the scanning time to around 1 second, eliminating the strict need for the subject to remain still or be sedated, is one of the main reasons for the large increase in the pediatric population (especially for the diagnosis of appendicitis). As of 2007, in the United States a proportion of CT scans are performed unnecessarily. Some estimates place this number at 30%. There are a number of reasons for this including: legal concerns, financial incentives, and desire by the public. For example, some healthy people avidly pay to receive full-body CT scans as screening. In that case, it is not at all clear that the benefits outweigh the risks and costs. Deciding whether and how to treat incidentalomas is complex, radiation exposure is not negligible, and the money for the scans involves opportunity cost.
== Manufacturers ==
Major manufacturers of CT scanning devices and equipment are:
Canon Medical Systems Corporation
Fujifilm Healthcare
GE HealthCare
Neusoft Medical Systems
Philips
Siemens Healthineers
United Imaging
== Research ==
Photon-counting computed tomography is a CT technique currently under development. Typical CT scanners use energy-integrating detectors; photons are measured as a voltage on a capacitor which is proportional to the X-rays detected. However, this technique is susceptible to noise and other factors which can affect the linearity of the voltage to X-ray intensity relationship. Photon counting detectors (PCDs) are still affected by noise, but this noise does not change the measured counts of photons. PCDs have several potential advantages, including improving signal (and contrast) to noise ratios, reducing doses, improving spatial resolution, and, through the use of several energies, distinguishing multiple contrast agents. PCDs have only recently become feasible in CT scanners due to improvements in detector technologies that can cope with the volume and rate of data required. As of February 2016, photon counting CT was in use at three sites. Some early research has found the dose reduction potential of photon counting CT for breast imaging to be very promising. In view of recent findings of high cumulative doses to patients from recurrent CT scans, there has been a push for scanning technologies and techniques that reduce ionising radiation doses to patients to sub-millisievert (sub-mSv in the literature) levels during the CT scan process, a long-standing goal.
== See also ==
== References ==
== External links ==
Development of CT imaging
CT Artefacts—PPT by David Platten
Filler A (2009-06-30). "The History, Development and Impact of Computed Imaging in Neurological Diagnosis and Neurosurgery: CT, MRI, and DTI". Nature Precedings: 1. doi:10.1038/npre.2009.3267.4. ISSN 1756-0357.
Boone JM, McCollough CH (2021). "Computed tomography turns 50". Physics Today. 74 (9): 34–40. Bibcode:2021PhT....74i..34B. doi:10.1063/PT.3.4834. ISSN 0031-9228. S2CID 239718717. | Wikipedia/Computed_Tomography |
The Teller–Ulam design is a technical concept behind modern thermonuclear weapons, also known as hydrogen bombs. The design – the details of which are military secrets and known to only a handful of major nations – is believed to be used in virtually all modern nuclear weapons that make up the arsenals of the major nuclear powers.
== History ==
=== Teller's "Super" ===
The idea of using the energy from a fission device to begin a fusion reaction was first proposed by the Italian physicist Enrico Fermi to his colleague Edward Teller in late 1941 during what would soon become the Manhattan Project, the World War II effort by the United States and United Kingdom to develop the first nuclear weapons. Teller soon was a participant at Robert Oppenheimer's 1942 conference on the development of a fission bomb held at the University of California, Berkeley, where he guided discussion towards the idea of creating his "Super" bomb, which would hypothetically be many times more powerful than the yet-undeveloped fission weapon. Teller assumed creating the fission bomb would be nothing more than an engineering problem, and that the "Super" provided a much more interesting theoretical challenge.
For the remainder of the war the effort was focused on first developing fission weapons. Nevertheless, Teller continued to pursue the "Super", to the point of neglecting work assigned to him for the fission weapon at the secret Los Alamos lab where he worked. (Much of the work Teller declined to do was given instead to Klaus Fuchs, who was later discovered to be a spy for the Soviet Union.) Teller was given some resources with which to study the "Super", and contacted his friend Maria Göppert-Mayer to help with laborious calculations relating to opacity. The "Super", however, proved elusive, and the calculations were difficult to perform. The properties of fission and fusion reactions could be studied with cyclotrons. The propagation of a fission detonation could be studied with the help of reactor and critical assembly experiments. But thermonuclear fusion, and the extreme case of plasma ignition, were only considered achievable by a nuclear test (ultimately Greenhouse George), for which the justification was lacking. In the meantime, research proceeded via slow manual calculations and the early computer ENIAC.
Even though they had witnessed the Trinity test, after the atomic bombings of Japan scientists at Los Alamos were surprised by how devastating the effects of the weapon had been. Many of the scientists rebelled against the notion of creating a weapon thousands of times more powerful than the first atomic bombs. For the scientists the question was in part technical—the weapon design was still quite uncertain and unworkable—and in part moral: such a weapon, they argued, could only be used against large civilian populations, and could thus only be used as a weapon of genocide. Many scientists, such as Teller's colleague Hans Bethe (who had discovered stellar nucleosynthesis, the nuclear fusion that takes place in stars), urged that the United States should not develop such weapons and set an example towards the Soviet Union. Promoters of the weapon, including Teller and Berkeley physicists Ernest Lawrence and Luis Alvarez, argued that such a development was inevitable, and to deny such protection to the people of the United States—especially when the Soviet Union was likely to create such a weapon itself—was itself an immoral and unwise act. Still others, such as Oppenheimer, simply thought that the existing stockpile of fissile material was better spent in attempting to develop a large arsenal of tactical atomic weapons rather than potentially squandered on the development of a few massive "Supers".
In any case, work slowed greatly at Los Alamos, as some 5,500 of the 7,100 scientists and related staff who had been there at the conclusion of the war left to go back to their previous positions at universities and laboratories. A conference was held at Los Alamos in 1946 to examine the feasibility of building a Super; it concluded that it was feasible, but there were a number of dissenters to that conclusion.
When the Soviet Union exploded their own atomic bomb (dubbed "Joe 1" by the US) in August 1949, it caught Western analysts off guard, and over the next several months there was an intense debate within the US government, military, and scientific communities on whether to proceed with the far-more-powerful Super. On January 31, 1950, US President Harry S. Truman ordered a program to develop a hydrogen bomb.
Many scientists returned to Los Alamos to work on the "Super" program, but the initial attempts still seemed highly unworkable. In the "classical Super," it was thought that the heat alone from the fission bomb would be used to ignite the fusion material, but that proved to be impossible. For a while, many scientists thought (and hoped) that the weapon itself would be impossible to construct.
=== Ulam's and Teller's contributions ===
The exact history of the Teller–Ulam breakthrough is not completely known, partly because of numerous conflicting personal accounts and also by the continued classification of documents that would reveal which was closer to the truth. Previous models of the "Super" had apparently placed the fusion fuel either surrounding the fission "trigger" (in a spherical formation) or at the heart of it (similar to a "boosted" weapon) in the hopes that the closer the fuel was to the fission explosion, the higher the chance it would ignite the fusion fuel by the sheer force of the heat generated.
In 1951, after many years of fruitless labor on the "Super", a breakthrough idea from the Polish émigré mathematician Stanislaw Ulam was seized upon by Teller and developed into the first workable design for a megaton-range hydrogen bomb. This concept, now called "staged implosion", was first proposed in a classified scientific paper, On Heterocatalytic Detonations I. Hydrodynamic Lenses and Radiation Mirrors, by Teller and Ulam on March 9, 1951. The exact contribution provided respectively by Ulam and Teller to what became known as the "Teller–Ulam design" is not definitively known in the public domain—the degree of credit assigned to Teller by his contemporaries is almost exactly commensurate with how well they thought of Teller in general. In an interview with Scientific American from 1999, Teller told the reporter:
I contributed; Ulam did not. I'm sorry I had to answer it in this abrupt way. Ulam was rightly dissatisfied with an old approach. He came to me with a part of an idea which I already had worked out and had difficulty getting people to listen to. He was willing to sign a paper. When it then came to defending that paper and really putting work into it, he refused. He said, "I don't believe in it."
The issue is controversial. Bethe in his “Memorandum on the History of the Thermonuclear Program” (1952) cited Teller as the discoverer of an “entirely new approach to thermonuclear reactions”, which “was a matter of inspiration” and was “therefore, unpredictable” and “largely accidental.” At the Oppenheimer hearing, in 1954, Bethe spoke of Teller's “stroke of genius” in the invention of the H-bomb. And finally in 1997 Bethe stated that “the crucial invention was made in 1951, by Teller.”
Other scientists (antagonistic to Teller, such as J. Carson Mark) have claimed that Teller would have never gotten any closer without the idea of Ulam. The nuclear weapons designer Ted Taylor was clear about assigning credit for the basic staging and compression ideas to Ulam, while giving Teller the credit for recognizing the critical role of radiation as opposed to hydrodynamic pressure.
Priscilla Johnson McMillan in her book The Ruin of J. Robert Oppenheimer: And the Birth of the Modern Arms Race, writes that Teller sought to "conceal the role" of Ulam, and that only "radiation implosion" was Teller's idea. Teller went as far as refusing to sign the patent application because it would need Ulam's signature. Thomas Powers writes that "of course the bomb designers all knew the truth, and many considered Teller the lowest, most contemptible kind of offender in the world of science, a stealer of credit".
Teller became known in the press as the "father of the hydrogen bomb", a title which he did not seek to discourage. Many of Teller's colleagues were irritated that he seemed to enjoy taking full credit for something he had only a part in, and in response, with encouragement from Enrico Fermi, Teller authored an article titled "The Work of Many People", which appeared in Science magazine in February 1955, emphasizing that he was not alone in the weapon's development (he would later write in his memoirs that he had told a "white lie" in the 1955 article, and would imply that he should receive full credit for the weapon's invention). Hans Bethe, who also participated in the hydrogen bomb project, once said, "For the sake of history, I think it is more precise to say that Ulam is the father, because he provided the seed, and Teller is the mother, because he remained with the child. As for me, I guess I am the midwife.": 166
The Teller–Ulam breakthrough—the details of which are still classified—was apparently the separation of the fission and fusion components of the weapon and the use of the radiation produced by the fission bomb to compress the fusion fuel before igniting it. Some sources have suggested that Ulam initially proposed compressing the secondary through the shock waves generated by the primary and that it was Teller who then realized that the radiation from the primary would be able to accomplish the task (hence "radiation implosion"). However, compression alone would not have been enough, and the other crucial idea, staging the bomb by separating the primary and secondary, seems to have been contributed exclusively by Ulam. The elegance of the design impressed many scientists, to the point that some who had previously wondered whether it was feasible suddenly believed it was inevitable and that it would be created by both the US and the Soviet Union. Even Oppenheimer, who was originally opposed to the project, called the idea "technically sweet." The "George" shot of Operation Greenhouse in 1951 tested the basic concept for the first time on a very small scale (and the next shot in the series, "Item," was the first boosted fission weapon), raising expectations to a near certainty that the concept would work.
On November 1, 1952, the Teller–Ulam configuration was tested in the "Ivy Mike" shot at an island in the Enewetak atoll, with a yield of 10.4 megatons of TNT (44 PJ) (over 450 times more powerful than the bomb dropped on Nagasaki during World War II). The device, dubbed the Sausage, used an extra-large fission bomb as a "trigger" and liquid deuterium, kept in its liquid state by 20 short tons (18 tonnes) of cryogenic equipment, as its fusion fuel, and it had a mass of around 80 short tons (73 tonnes) altogether. An initial press blackout was attempted, but it was soon announced that the US had detonated a megaton-range hydrogen bomb.
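For scale, the yield figures quoted above can be checked with a line of arithmetic. The ~21 kt value assumed here for the Nagasaki bomb is a commonly cited estimate and does not appear in this article; the conversion factor of 4.184 PJ per megaton of TNT is standard.

```python
# Rough sanity check of the "over 450 times" comparison quoted above.
ivy_mike_kt = 10_400   # 10.4 Mt expressed in kilotons of TNT
nagasaki_kt = 21       # assumed, commonly cited yield of the Nagasaki bomb

print(f"Ivy Mike / Nagasaki yield ratio: ~{ivy_mike_kt / nagasaki_kt:.0f}x")  # ~495
print(f"10.4 Mt in petajoules: ~{10.4 * 4.184:.0f} PJ")                       # ~44 PJ
```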
The elaborate refrigeration plant necessary to keep its fusion fuel in a liquid state meant that the "Ivy Mike" device was too heavy and too complex to be of practical use. The first deployable Teller–Ulam weapon in the US was developed in 1954, when the liquid deuterium fuel of the "Ivy Mike" device was replaced with a dry fuel of lithium deuteride and tested in the "Castle Bravo" shot (the device was codenamed the Shrimp). The solid lithium deuteride performed much better than expected, and the "Castle Bravo" device detonated in 1954 had a yield two-and-a-half times greater than had been expected (at 15 Mt (63 PJ), it was also the most powerful bomb ever detonated by the United States). Because much of the yield came from the final fission stage of its 238U tamper, it generated severe nuclear fallout, which caused one of the worst nuclear accidents in US history after unforeseen weather patterns blew the fallout over populated areas of the atoll and over the Japanese fishermen aboard the Daigo Fukuryu Maru.
After an initial period focused on making multi-megaton hydrogen bombs, efforts in the United States shifted towards developing miniaturized Teller–Ulam weapons which could outfit intercontinental ballistic missiles and submarine-launched ballistic missiles. The last major design breakthrough in this respect was accomplished by the mid-1970s, when versions of the Teller–Ulam design were created which could fit on the end of a small MIRVed missile.
=== Soviet research ===
In the Soviet Union, the scientists working on their own hydrogen bomb project also ran into difficulties in developing a megaton-range fusion weapon. Because Klaus Fuchs had only been at Los Alamos at a very early stage of the hydrogen bomb design (before the Teller–Ulam configuration had been completed), none of his espionage information was of much use, and the Soviet physicists working on the project had to develop their weapon independently.
The first Soviet fusion design, developed by Andrei Sakharov and Vitaly Ginzburg in 1949 (before the Soviet Union had a working fission bomb), was dubbed the Sloika, after a Russian layered puff pastry, and was not of the Teller–Ulam configuration, but rather used alternating layers of fissile material and lithium deuteride fusion fuel spiked with tritium (this was later dubbed Sakharov's "First Idea"). Though nuclear fusion was technically achieved, it did not have the scaling property of a staged weapon, and their first hydrogen bomb test, Joe 4, is considered a hybrid fission/fusion device more similar to a large boosted fission weapon than a Teller–Ulam weapon (though using an order of magnitude more fusion fuel than a boosted weapon). Detonated in 1953 with a yield equivalent to 400 kt (1,700 TJ) (only 15%–20% from fusion), the Sloika device did, however, have the advantage of being a weapon which could actually be delivered to a military target, unlike the "Ivy Mike" device, though it was never widely deployed. Teller had proposed a similar design as early as 1946, dubbed the "Alarm Clock" (meant to "wake up" research into the "Super"), though it was calculated to be ultimately not worth the effort and no prototype was ever developed or tested.
Attempts to use the Sloika design to achieve megaton-range results proved unfeasible in the Soviet Union, just as they had in the calculations done in the US, but its value as a practical weapon, being 20 times more powerful than the Soviets' first fission bomb, should not be underestimated. The Soviet physicists calculated that at best the design might yield a single megaton of energy if it was pushed to its limits. After the US tested the "Ivy Mike" device in 1952, proving that a multimegaton bomb could be created, the Soviet Union searched for an additional design and continued to work on improving the Sloika (the "First Idea"). The "Second Idea", as Sakharov referred to it in his memoirs, was a previous proposal by Ginzburg in November 1948 to use lithium deuteride in the bomb, which would produce tritium under neutron bombardment.: 299, 314 In late 1953, physicist Viktor Davidenko achieved the first breakthrough, that of keeping the primary and secondary parts of the bomb in separate pieces ("staging"). The next breakthrough, using the X-rays from the fission bomb to compress the secondary before fusion ("radiation implosion"), was discovered and developed by Sakharov and Yakov Zeldovich in early 1954. Sakharov's "Third Idea", as the Teller–Ulam design was known in the Soviet Union, was tested in the shot "RDS-37" in November 1955 with a yield of 1.6 Mt (6.7 PJ).
If the Soviet Union had been able to analyze the fallout data from either the "Ivy Mike" or "Castle Bravo" tests, they might have been able to discern that the fission primary was being kept separate from the fusion secondary, a key part of the Teller–Ulam device, and perhaps that the fusion fuel had been subjected to high amounts of compression before detonation. One of the key Soviet bomb designers, Yuli Khariton, later said:
At that time, Soviet research was not organized on a sufficiently high level, and useful results were not obtained, although radiochemical analyses of samples of fallout could have provided some useful information about the materials used to produce the explosion. The relationship between certain short-lived isotopes formed in the course of thermonuclear reactions could have made it possible to judge the degree of compression of the thermonuclear fuel, but knowing the degree of compression would not have allowed Soviet scientists to conclude exactly how the exploded device had been made, and it would not have revealed its design.: 20
Sakharov stated in his memoirs that though he and Davidenko had collected fallout dust in cardboard boxes several days after the "Mike" test with the hope of analyzing it for information, a chemist at Arzamas-16 (the Soviet weapons laboratory) had mistakenly poured the concentrate down the drain before it could be analyzed. Only in late 1952 did the Soviet Union set up an organized system for monitoring fallout data. Nonetheless, the memoirs also say that the yield from one of the American tests, which became an international incident involving Japan, told Sakharov that the US design was much better than theirs, and he decided that they must have exploded a separate fission bomb and somehow used its energy to compress the lithium deuteride. He then turned his focus to finding a way for an explosion to one side to be used to compress the ball of fusion fuel within 5% of symmetry, which he realized could be achieved by focusing the X-rays.
The Soviet Union demonstrated the power of the "staging" concept in October 1961, when it detonated the massive and unwieldy Tsar Bomba, a 50 Mt (210 PJ) hydrogen bomb which derived almost 97% of its energy from fusion rather than fission—its uranium tamper was replaced with one of lead for the test, in an effort to prevent excessive nuclear fallout. Had it been fired in its "full" form, it would have yielded around 100 Mt (420 PJ). The weapon was technically deployable (it was tested by dropping it from a specially modified bomber) but militarily impractical, and it was developed and tested primarily as a show of Soviet strength. It remains the most powerful nuclear weapon ever developed and tested by any country.
== Other countries ==
=== United Kingdom ===
The details of the development of the Teller–Ulam design in other countries are less well known. In any event, the United Kingdom initially had difficulty developing it and failed in its first attempt in May 1957 (its "Grapple I" test did not perform as planned, although much of its energy did come from fusion in its secondary). However, it succeeded in its second attempt, the November 1957 "Grapple X" test, which yielded 1.8 Mt. The British development of the Teller–Ulam design was apparently independent, but the United Kingdom was allowed to share in some US fallout data, which may have been useful. After the successful detonation of a megaton-range device, and thus a demonstrated practical understanding of the Teller–Ulam design "secret," the United States agreed to exchange some of its nuclear designs with the United Kingdom, leading to the 1958 US-UK Mutual Defence Agreement.
=== China ===
The People's Republic of China detonated its first device using a Teller–Ulam design in June 1967 ("Test No. 6"), a mere 32 months after detonating its first fission weapon (the shortest fission-to-fusion development period yet known), with a yield of 3.3 Mt. Little is known about the Chinese thermonuclear program.
Development of the bomb was led by Yu Min.
=== France ===
Very little is known about the French development of the Teller–Ulam design beyond the fact that it detonated a 2.6 Mt device in the "Canopus" test in August 1968.
=== India ===
On 11 May 1998, India announced that it had detonated a hydrogen bomb in its Operation Shakti tests ("Shakti I", specifically). Some non-Indian analysts, using seismographic readings, have suggested that this might not be the case, pointing to the low yield of the test, which they say is closer to 30 kilotons (as opposed to the 45 kilotons announced by India).
However, some non-Indian experts agree with India. Harold M. Agnew, former director of the Los Alamos National Laboratory, said that India's assertion of having detonated a staged thermonuclear bomb was believable. The British seismologist Roger Clarke argued that seismic magnitudes suggested a combined yield of up to 60 kilotons, consistent with the Indian announced total yield of 56 kilotons. Professor Jack Evernden, a US seismologist, has always maintained that for correct estimation of yields, one should "account properly for geological and seismological differences between test sites." His estimation of the yields of the Indian tests concurs with those of India.
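The dispute turns largely on how a teleseismic body-wave magnitude is converted into an explosive yield. A widely used empirical form is m_b = a + b·log10(Y), with constants that depend on the geology of the test site. The sketch below uses purely illustrative, hypothetical constants and an assumed magnitude, chosen only to show how strongly the inferred yield depends on the calibration; it does not reproduce the calculations of any analyst named above.

```python
def yield_from_magnitude(m_b: float, a: float, b: float) -> float:
    """Invert the empirical relation m_b = a + b*log10(Y) for yield Y in kilotons."""
    return 10 ** ((m_b - a) / b)

m_b = 5.2  # hypothetical body-wave magnitude for the combined tests

# Two purely illustrative site calibrations (hypothetical constants):
for label, a in [("calibration A", 4.1), ("calibration B", 3.9)]:
    print(label, f"~{yield_from_magnitude(m_b, a, 0.75):.0f} kt")
# The same seismic signal yields roughly 29 kt or 54 kt depending on calibration,
# which is the kind of spread underlying the 30 kt versus ~60 kt disagreement.
```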
Indian scientists have argued that some international estimates of the yields of India's nuclear tests are unscientific.
India says that the yields of its tests were deliberately kept low to avoid civilian damage and that it can build staged thermonuclear weapons of various yields, up to around 200 kilotons, on the basis of those tests. Another cited reason for the low yields was that the radioactivity released from yields significantly greater than 45 kilotons might not have been contained fully.
Even low-yield tests can have a bearing on thermonuclear capability, as they can provide information on the behavior of primaries without the full ignition of secondaries.
=== North Korea ===
North Korea claimed to have tested its miniaturised thermonuclear bomb on January 6, 2016. North Korea's first three nuclear tests (2006, 2009 and 2013) had relatively low yields and do not appear to have been of a thermonuclear weapon design. In 2013, the South Korean Defense Ministry speculated that North Korea might be trying to develop a "hydrogen bomb" and that such a device might be North Korea's next weapons test. In January 2016, North Korea claimed to have successfully tested a hydrogen bomb, but only a magnitude 5.1 seismic event was detected at the time of the test, a similar magnitude to the 2013 test of a 6–9 kt atomic bomb. Those seismic recordings led scientists worldwide to doubt North Korea's claim that a hydrogen bomb was tested and to suggest it was a non-fusion nuclear test. On September 9, 2016, North Korea conducted its fifth nuclear test, which yielded between 10 and 30 kilotons.
On September 3, 2017, North Korea conducted a sixth nuclear test, just a few hours after photographs of North Korean leader Kim Jong-un inspecting a device resembling a thermonuclear warhead were released. Initial estimates in the first few days were between 70 and 160 kilotons and were raised over a week later to a range of 250 to over 300 kilotons. Jane's Information Group estimated, based mainly on visual analysis of propaganda pictures, that the bomb might weigh between 250 and 360 kg (550 and 790 lb).
== Public knowledge ==
The Teller–Ulam design was for many years considered one of the top nuclear secrets, and even today it is not discussed in any detail by official publications with origins "behind the fence" of classification. The policy of the US Department of Energy (DOE) has always been not to acknowledge when "leaks" occur, since doing so would confirm the accuracy of the supposedly leaked information. Aside from images of warhead casings (but never of the "physics package" itself), most information in the public domain about the design is relegated to a few terse statements and the work of a few individual investigators.
What follows is a short discussion of the events that led to the formation of the "public" models of the Teller–Ulam design, with some discussion of how they differ from, and disagree with, the principles outlined above.
=== Early knowledge ===
The general principles of the "classical Super" design were public knowledge even before thermonuclear weapons were first tested. After Truman ordered the crash program to develop the hydrogen bomb in January 1950, the Boston Daily Globe published a cutaway description of a hypothetical hydrogen bomb with the caption Artist's conception of how H-bomb might work using atomic bomb as a mere "trigger" to generate enough heat to set up the H-bomb's "thermonuclear fusion" process.
The fact that a large proportion of the yield of a thermonuclear device stems from the fission of a uranium-238 tamper (fission-fusion-fission principle) was revealed when the Castle Bravo test "ran away," producing a much higher yield than originally estimated and creating large amounts of nuclear fallout.
=== DOE statements ===
In 1972, the DOE declassified a statement that "The fact that in thermonuclear (TN) weapons, a fission 'primary' is used to trigger a TN reaction in thermonuclear fuel referred to as a 'secondary'", and in 1979, it added: "The fact that, in thermonuclear weapons, radiation from a fission explosive can be contained and used to transfer energy to compress and ignite a physically separate component containing thermonuclear fuel." To the latter sentence, it specified, "Any elaboration of this statement will be classified." (emphasis in original) The only statement that may pertain to the sparkplug was declassified in 1991: "Fact that fissile and/or fissionable materials are present in some secondaries, material unidentified, location unspecified, use unspecified, and weapons undesignated." In 1998, the DOE declassified the statement that "The fact that materials may be present in channels and the term 'channel filler,' with no elaboration," which may refer to the polystyrene foam (or an analogous substance). (DOE 2001, sect. V.C.)
Whether the statements vindicate some or all of the models presented above is open to interpretation, and official US government releases about the technical details of nuclear weapons have been purposely equivocal in the past (as with the Smyth Report). Other information, such as the types of fuel used in some of the early weapons, has been declassified, but precise technical information has not been.
=== The Progressive case ===
Most of the current ideas about the Teller–Ulam design came into public awareness after the DOE attempted to censor a magazine article by the anti-weapons activist Howard Morland in 1979 on the "secret of the hydrogen bomb." In 1978, Morland had decided that discovering and exposing the "last remaining secret" would focus attention on the arms race and allow citizens to feel empowered to question official statements on the importance of nuclear weapons and nuclear secrecy. Most of Morland's ideas about how the weapon worked were compiled from highly accessible sources; the drawings that most inspired his approach came from the Encyclopedia Americana. Morland also interviewed, often informally, many former Los Alamos scientists (including Teller and Ulam, though neither gave him any useful information), and used a variety of interpersonal strategies to encourage informative responses from them (for example, asking questions such as "Do they still use sparkplugs?" even though he did not know what the term specifically referred to). (Morland 1981)
Morland eventually concluded that the "secret" was that the primary and secondary were kept separate and that radiation pressure from the primary compressed the secondary before igniting it. When an early draft of the article, to be published in The Progressive magazine, was sent to the DOE after it had fallen into the hands of a professor who was opposed to Morland's goal, the DOE requested that the article not be published and pressed for a temporary injunction. At a short court hearing, the DOE argued that Morland's information was (1) likely derived from classified sources, (2) if not derived from classified sources, itself counted as "secret" information under the "born secret" clause of the 1954 Atomic Energy Act, and (3) dangerous and likely to encourage nuclear proliferation. Morland and his lawyers disputed all three points, but the injunction was granted, as the judge in the case thought that it was safer to grant the injunction and allow Morland and his colleagues to appeal, which they did in United States v. The Progressive, et al. (1979).
The DOE case began to wane as it became clear that some of the data it was attempting to claim as "secret" had been published in a students' encyclopedia a few years earlier. After another hydrogen bomb speculator, Chuck Hansen, had his own ideas about the "secret" (quite different from Morland's) published in a Wisconsin newspaper, the DOE claimed that The Progressive case was moot, dropped its suit, and allowed the magazine to publish, which it did in November 1979. By then, however, Morland had changed his opinion of how the bomb worked, suggesting that a foam medium (the polystyrene), rather than radiation pressure, was used to compress the secondary, and that the secondary also contained a sparkplug of fissile material. He published the changes, based in part on the proceedings of the appeals trial, as a short erratum in The Progressive a month later. In 1981, Morland published a book, The Secret That Exploded, about his experience, describing in detail the train of thought that led him to his conclusions about the "secret."
The DOE's attempt to censor Morland's work was one of the few times it violated its usual approach of not acknowledging "secret" material that had been released; because of this, his account is interpreted as being at least partially correct, but to what degree it lacks information or contains incorrect information is not known with any great confidence. The difficulty that a number of nations had in developing the Teller–Ulam design (even when they apparently understood the design, as with the United Kingdom) makes it somewhat unlikely that this simple information alone is what provides the ability to manufacture thermonuclear weapons. Nevertheless, the ideas put forward by Morland in 1979 have been the basis for all current speculation on the Teller–Ulam design.
== External links ==
PBS: Race for the Superbomb: Interviews and Transcripts Archived March 11, 2017, at the Wayback Machine (with U.S. and USSR bomb designers as well as historians).
Howard Morland on how he discovered the "H-bomb secret" (includes many slides).
The Progressive November 1979 issue – "The H-Bomb Secret: How we got it, why we're telling" (entire issue online). | Wikipedia/Teller-Ulam_design |
Fast neutron therapy utilizes high-energy neutrons, typically between 50 and 70 MeV, to treat cancer. Most fast neutron therapy beams are produced by reactors, cyclotrons (d+Be) and linear accelerators. Neutron therapy is currently available in Germany, Russia, South Africa and the United States. In the United States, one treatment center is operational, in Seattle, Washington. The Seattle center uses a cyclotron which produces a proton beam impinging upon a beryllium target.
== Advantages ==
Radiation therapy kills cancer cells in two ways, depending on the effective energy of the radiation source. The amount of energy deposited as the particles traverse a section of tissue is referred to as the linear energy transfer (LET). X-rays produce low LET radiation, while protons and neutrons produce high LET radiation. Low LET radiation damages cells predominantly through the generation of reactive oxygen species (see free radicals). The neutron is uncharged and damages cells by direct effect on nuclear structures. Malignant tumors tend to have low oxygen levels and thus can be resistant to low LET radiation. This gives an advantage to neutrons in certain situations. One advantage is a generally shorter treatment cycle: to kill the same number of cancerous cells, neutrons require one third the effective dose of protons. Another advantage is the established ability of neutrons to better treat some cancers, such as salivary gland tumors, adenoid cystic carcinomas and certain types of brain tumors, especially high-grade gliomas.
=== LET ===
When therapeutic energy X-rays (1 to 25 MeV) interact with cells in human tissue, they do so mainly by Compton interactions, and produce relatively high energy secondary electrons. These high energy electrons deposit their energy at about 1 keV/μm. By comparison, the charged particles produced at the site of a neutron interaction may deliver their energy at a rate of 30–80 keV/μm; in terms of the linear energy transfer (LET) defined above, X-rays therefore produce low LET radiation and neutrons produce high LET radiation.
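To put these LET figures in perspective, the energy deposited while crossing a single cell can be estimated directly. The ~10 μm path length used below is an assumed, typical cell-scale dimension, not a value given in this article.

```python
path_um = 10.0  # assumed ~10 micrometre path across a cell (typical cell-scale dimension)

let_xray_kev_per_um = 1.0              # secondary electrons from therapeutic X-rays
let_neutron_kev_per_um = (30.0, 80.0)  # charged particles from neutron interactions

print(f"X-ray secondaries: ~{let_xray_kev_per_um * path_um:.0f} keV deposited per traversal")
print(f"Neutron recoils:   ~{let_neutron_kev_per_um[0] * path_um:.0f}-"
      f"{let_neutron_kev_per_um[1] * path_um:.0f} keV deposited per traversal")
# Roughly 10 keV versus 300-800 keV per cell traversal, which is why neutron
# irradiation produces much denser ionization within a single cell.
```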
Because the electrons produced from X-rays have high energy and low LET, when they interact with a cell typically only a few ionizations will occur. It is likely then that the low LET radiation will cause only single-strand breaks of the DNA helix. Single-strand breaks of DNA molecules can be readily repaired, and so the effect on the target cell is not necessarily lethal. By contrast, the high LET charged particles produced from neutron irradiation cause many ionizations as they traverse a cell, and so double-strand breaks of the DNA molecule are possible. Double-strand breaks are much more difficult for a cell to repair and are more likely to lead to cell death.
DNA repair mechanisms are quite efficient, and during a cell's lifetime many thousands of single strand DNA breaks will be repaired. A sufficient dose of ionizing radiation, however, delivers so many DNA breaks that it overwhelms the capability of the cellular mechanisms to cope.
Heavy ion therapy (e.g. carbon ions) makes use of the similarly high LET of 12C6+ ions.
Because of the high LET, the relative radiation damage (relative biological effectiveness, or RBE) of fast neutrons is 4 times that of X-rays, meaning that 1 rad of fast neutrons is equal to 4 rads of X-rays. The RBE of neutrons is also energy dependent, so neutron beams produced with different energy spectra at different facilities will have different RBE values.
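A minimal sketch of how an RBE of about 4 translates physical neutron dose into photon-equivalent dose; as noted above, the actual RBE is energy- and facility-dependent, so the factor of 4 is only the nominal figure quoted here.

```python
def photon_equivalent_dose(neutron_dose_gy: float, rbe: float = 4.0) -> float:
    """Photon-equivalent dose for a given physical neutron dose.

    rbe=4 is the nominal figure quoted above; real beams have
    energy-dependent RBE values that differ between facilities.
    """
    return neutron_dose_gy * rbe

# Example: a 1 Gy physical neutron dose is biologically comparable to ~4 Gy of X-rays.
print(photon_equivalent_dose(1.0))  # 4.0
```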
=== Oxygen effect ===
The presence of oxygen in a cell acts as a radiosensitizer, making the effects of the radiation more damaging. Tumor cells typically have a lower oxygen content than normal tissue, a condition known as tumor hypoxia; the oxygen effect therefore acts to decrease the sensitivity of tumor tissue. The oxygen effect may be quantitatively described by the oxygen enhancement ratio (OER). It is generally believed that neutron irradiation overcomes the effect of tumor hypoxia, although there are counterarguments.
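The OER is conventionally defined as the ratio of the dose required under hypoxic conditions to the dose required under well-oxygenated conditions to produce the same biological effect. A minimal sketch, with illustrative dose values only (not measurements from any study cited here):

```python
def oxygen_enhancement_ratio(dose_hypoxic_gy: float, dose_oxygenated_gy: float) -> float:
    """OER = dose required without oxygen / dose required with oxygen,
    for the same biological effect (isoeffect)."""
    return dose_hypoxic_gy / dose_oxygenated_gy

# Illustrative numbers only: if a hypoxic tumour needs 30 Gy to achieve the same
# cell kill that 10 Gy achieves in well-oxygenated tissue, the OER is 3.
print(oxygen_enhancement_ratio(30.0, 10.0))  # 3.0
```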
== Clinical uses ==
The efficacy of neutron beams for use on prostate cancer has been shown through randomized trials. Fast neutron therapy has been applied successfully against salivary gland tumors, and adenoid cystic carcinomas have also been treated. Various other head and neck tumors have been examined.
== Side effects ==
No cancer therapy is without the risk of side effects. Neutron therapy is a very powerful nuclear scalpel that has to be utilized with exquisite care. For instance, some of the most remarkable cures it has been able to achieve are with cancers of the head and neck. Many of these cancers cannot effectively be treated with other therapies. However, neutron damage to nearby vulnerable areas such as the brain and sensory neurons can produce irreversible brain atrophy, blindness, etc. The risk of these side effects can be greatly mitigated by several techniques, but they cannot be eliminated. Moreover, some patients are more susceptible to such side effects than others and this cannot be predicted. The patient ultimately must decide whether the advantages of a possibly lasting cure outweigh the risks of this treatment when faced with an otherwise incurable cancer.
== Fast neutron centers ==
Several centers around the world have used fast neutrons for treating cancer. Due to lack of funding and support, only one, in Seattle, remains active in the United States at present.
The University of Washington and the Gershenson Radiation Oncology Center have both operated fast neutron therapy beams, each equipped with a multi-leaf collimator (MLC) to shape the neutron beam.
=== University of Washington ===
The Radiation Oncology Department operates a proton cyclotron that produces fast neutrons by directing 50.5 MeV protons onto a beryllium target.
The UW cyclotron is equipped with a gantry-mounted delivery system and an MLC to produce shaped fields. The UW neutron system is referred to as the Clinical Neutron Therapy System (CNTS).
The CNTS is typical of most neutron therapy systems. A large, well shielded building is required to cut down on radiation exposure to the general public and to house the necessary equipment.
A beamline transports the proton beam from the cyclotron to a gantry system. The gantry system contains magnets for deflecting and focusing the proton beam onto the beryllium target. The end of the gantry system is referred to as the head, and contains dosimetry systems to measure the dose, along with the MLC and other beam shaping devices. The advantage of having a beam transport and gantry are that the cyclotron can remain stationary, and the radiation source can be rotated around the patient. Along with varying the orientation of the treatment couch which the patient is positioned on, variation of the gantry position allows radiation to be directed from virtually any angle, allowing sparing of normal tissue and maximum radiation dose to the tumor.
During treatment, only the patient remains inside the treatment room (called a vault) and the therapists will remotely control the treatment, viewing the patient via video cameras. Each delivery of a set neutron beam geometry is referred to as a treatment field or beam. The treatment delivery is planned to deliver the radiation as effectively as possible, and usually results in fields that conform to the shape of the gross target, with any extension to cover microscopic disease.
=== Karmanos Cancer Center / Wayne State University ===
The neutron therapy facility at the Gershenson Radiation Oncology Center at Karmanos Cancer Center/Wayne State University (KCC/WSU) in Detroit bore some similarities to the CNTS at the University of Washington, but also had many unique characteristics. This unit was decommissioned in 2011.
While the CNTS accelerates protons, the KCC facility produced its neutron beam by accelerating 48.5 MeV deuterons onto a beryllium target. This method produces a neutron beam with depth dose characteristics roughly similar to those of a 4 MV photon beam. The deuterons were accelerated using a gantry mounted superconducting cyclotron (GMSCC), eliminating the need for extra beam steering magnets and allowing the neutron source to rotate a full 360° around the patient couch.
The KCC facility was also equipped with an MLC beam shaping device, the only other neutron therapy center in the USA besides the CNTS. The MLC at the KCC facility had been supplemented with treatment planning software that allows for the implementation of Intensity Modulated Neutron Radiotherapy (IMNRT), a recent advance in neutron beam therapy which allows for more radiation dose to the targeted tumor site than 3-D neutron therapy.
KCC/WSU had more experience than any other center in the world in using neutron therapy for prostate cancer, having treated nearly 1,000 patients over a period of 10 years.
=== Fermilab / Northern Illinois University ===
The Fermilab neutron therapy center first treated patients in 1976 and over its lifetime treated more than 3,000 patients. In 2004, Northern Illinois University began managing the center. The neutrons produced by the linear accelerator at Fermilab had the highest energies available in the US and were among the highest in the world.
The Fermilab center was decommissioned in 2013.
== See also ==
Boron neutron capture therapy
== External links ==
FermiLab Neutron Therapy overview | Wikipedia/Fast_neutron_therapy |
Radiosurgery is surgery using radiation, that is, the destruction of precisely selected areas of tissue using ionizing radiation rather than excision with a blade. Like other forms of radiation therapy (also called radiotherapy), it is usually used to treat cancer. Radiosurgery was originally defined by the Swedish neurosurgeon Lars Leksell as "a single high dose fraction of radiation, stereotactically directed to an intracranial region of interest".
In stereotactic radiosurgery (SRS), the word "stereotactic" refers to a three-dimensional coordinate system that enables accurate correlation of a virtual target seen in the patient's diagnostic images with the actual target position in the patient. Stereotactic radiosurgery may also be called stereotactic body radiation therapy (SBRT) or stereotactic ablative radiotherapy (SABR) when used outside the central nervous system (CNS).
== History ==
Stereotactic radiosurgery was first developed in 1949 by the Swedish neurosurgeon Lars Leksell to treat small targets in the brain that were not amenable to conventional surgery. The initial stereotactic instrument he conceived used probes and electrodes. The first attempt to supplant the electrodes with radiation was made in the early fifties, with x-rays. The principle of this instrument was to hit the intra-cranial target with narrow beams of radiation from multiple directions. The beam paths converge in the target volume, delivering a lethal cumulative dose of radiation there, while limiting the dose to the adjacent healthy tissue. Ten years later significant progress had been made, due in considerable measure to the contribution of the physicists Kurt Liden and Börje Larsson. At this time, stereotactic proton beams had replaced the x-rays. The heavy particle beam presented as an excellent replacement for the surgical knife, but the synchrocyclotron was too clumsy. Leksell proceeded to develop a practical, compact, precise and simple tool which could be handled by the surgeon himself. In 1968 this resulted in the Gamma Knife, which was installed at the Karolinska Institute and consisted of several cobalt-60 radioactive sources placed in a kind of helmet with central channels for irradiation with gamma rays. This prototype was designed to produce slit-like radiation lesions for functional neurosurgical procedures to treat pain, movement disorders, or behavioral disorders that did not respond to conventional treatment. The success of this first unit led to the construction of a second device, containing 179 cobalt-60 sources. This second Gamma Knife unit was designed to produce spherical lesions to treat brain tumors and intracranial arteriovenous malformations (AVMs). Additional units were installed in the 1980s all with 201 cobalt-60 sources.
In parallel to these developments, a similar approach was designed for a linear particle accelerator or Linac. Installation of the first 4 MeV clinical linear accelerator began in June 1952 in the Medical Research Council (MRC) Radiotherapeutic Research Unit at the Hammersmith Hospital, London. The system was handed over for physics and other testing in February 1953 and began to treat patients on 7 September that year. Meanwhile, work at the Stanford Microwave Laboratory led to the development of a 6 MeV accelerator, which was installed at Stanford University Hospital, California, in 1956. Linac units quickly became favored devices for conventional fractionated radiotherapy, but it was not until the 1980s that dedicated Linac radiosurgery became a reality. In 1982, the Spanish neurosurgeon J. Barcia-Salorio began to evaluate the role of cobalt-generated and then Linac-based photon radiosurgery for the treatment of AVMs and epilepsy. In 1984, Betti and Derechinsky described a Linac-based radiosurgical system. Winston and Lutz further advanced Linac-based radiosurgical prototype technologies by incorporating an improved stereotactic positioning device and a method to measure the accuracy of various components. Using a modified Linac, the first patient in the United States was treated at Brigham and Women's Hospital in Boston in February 1986.
=== 21st century ===
Technological improvements in medical imaging and computing have led to increased clinical adoption of stereotactic radiosurgery and have broadened its scope in the 21st century. The localization accuracy and precision that are implicit in the word "stereotactic" remain of utmost importance for radiosurgical interventions and are significantly improved via image-guidance technologies such as the N-localizer and Sturm-Pastyr localizer that were originally developed for stereotactic surgery.
In the 21st century the original concept of radiosurgery expanded to include treatments comprising up to five fractions, and stereotactic radiosurgery has been redefined as a distinct neurosurgical discipline that utilizes externally generated ionizing radiation to inactivate or eradicate defined targets, typically in the head or spine, without the need for a surgical incision. Despite the similarities between the concepts of stereotactic radiosurgery and fractionated radiotherapy, the mechanism of treatment is subtly different, although both treatment modalities are reported to have identical outcomes for certain indications. Stereotactic radiosurgery has a greater emphasis on delivering precise, high doses to small areas, to destroy target tissue while preserving adjacent normal tissue. The same principle is followed in conventional radiotherapy although lower dose rates spread over larger areas are more likely to be used (for example as in VMAT treatments). Fractionated radiotherapy relies more heavily on the different radiosensitivity of the target and the surrounding normal tissue to the total accumulated radiation dose. Historically, the field of fractionated radiotherapy evolved from the original concept of stereotactic radiosurgery following discovery of the principles of radiobiology: repair, reassortment, repopulation, and reoxygenation. Today, both treatment techniques are complementary, as tumors that may be resistant to fractionated radiotherapy may respond well to radiosurgery, and tumors that are too large or too close to critical organs for safe radiosurgery may be suitable candidates for fractionated radiotherapy.
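One common quantitative way to compare a single high-dose radiosurgical fraction with a conventionally fractionated course is the linear-quadratic biologically effective dose, BED = n·d·(1 + d/(α/β)). The sketch below uses illustrative dose numbers and a generic α/β value, not parameters from any specific protocol mentioned in this article.

```python
def bed(n_fractions: int, dose_per_fraction_gy: float, alpha_beta_gy: float) -> float:
    """Biologically effective dose from the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction_gy * (1 + dose_per_fraction_gy / alpha_beta_gy)

# Illustrative comparison (a generic alpha/beta of 10 Gy is often assumed for tumours):
print(bed(1, 20.0, 10.0))   # single 20 Gy radiosurgery fraction -> BED 60 Gy
print(bed(30, 2.0, 10.0))   # conventional 30 x 2 Gy course      -> BED 72 Gy
```

The point of the comparison is that a single large fraction concentrates biological effect very differently from the same nominal physical dose spread over many fractions, which is why the two modalities suit different targets.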
Today, both Gamma Knife and Linac radiosurgery programs are commercially available worldwide. While the Gamma Knife is dedicated to radiosurgery, many Linacs are built for conventional fractionated radiotherapy and require additional technology and expertise to become dedicated radiosurgery tools. There is not a clear difference in efficacy between these different approaches. The major manufacturers, Varian and Elekta, offer dedicated radiosurgery Linacs as well as machines designed for conventional treatment with radiosurgery capabilities. Systems also exist that complement conventional Linacs with beam-shaping technology, treatment planning, and image-guidance tools to provide radiosurgical capability. An example of a dedicated radiosurgery Linac is the CyberKnife, a compact Linac mounted onto a robotic arm that moves around the patient and irradiates the tumor from a large set of fixed positions, thereby mimicking the Gamma Knife concept.
== Mechanism of action ==
The fundamental principle of radiosurgery is that of selective ionization of tissue by means of high-energy beams of radiation. Ionization is the production of ions and free radicals, which are damaging to cells. These ions and radicals, which may be formed from the water in the cell or from biological materials, can produce irreparable damage to DNA, proteins, and lipids, resulting in the cell's death. Thus, biological inactivation is carried out in a volume of tissue to be treated, with a precise destructive effect. The radiation dose is usually measured in grays (one gray (Gy) is the absorption of one joule of energy per kilogram of mass). A unit that attempts to take into account both the tissue irradiated and the type of radiation is the sievert, which describes both the amount of energy deposited and the biological effectiveness.
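A minimal sketch of how the two units relate: equivalent dose in sieverts is the absorbed dose in grays multiplied by a radiation weighting factor. The factors below are approximations in the spirit of the ICRP convention (photons and electrons about 1, protons about 2, alpha particles about 20, with neutrons energy-dependent) and should not be read as exact regulatory values.

```python
# Equivalent dose (Sv) = radiation weighting factor * absorbed dose (Gy).
# Weighting factors are approximate; the neutron factor is actually a
# continuous function of neutron energy and is omitted here.
RADIATION_WEIGHTING = {"photon": 1.0, "electron": 1.0, "proton": 2.0, "alpha": 20.0}

def equivalent_dose_sv(absorbed_dose_gy: float, radiation: str) -> float:
    return RADIATION_WEIGHTING[radiation] * absorbed_dose_gy

print(equivalent_dose_sv(1.0, "photon"))  # 1.0 Sv
print(equivalent_dose_sv(1.0, "proton"))  # 2.0 Sv
```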
== Clinical applications ==
When used outside the CNS it may be called stereotactic body radiation therapy (SBRT) or stereotactic ablative radiotherapy (SABR).
=== Brain and spine ===
Radiosurgery is performed by a multidisciplinary team of neurosurgeons, radiation oncologists and medical physicists to operate and maintain highly sophisticated, highly precise and complex instruments, including medical linear accelerators, the Gamma Knife unit and the Cyberknife unit. The highly precise irradiation of targets within the brain and spine is planned using information from medical images that are obtained via computed tomography, magnetic resonance imaging, and angiography.
Radiosurgery is indicated primarily for the therapy of tumors, vascular lesions and functional disorders. Significant clinical judgment must be used with this technique, and considerations must include lesion type, pathology if available, size, location, and the age and general health of the patient. General contraindications to radiosurgery include excessively large size of the target lesion, or lesions too numerous for practical treatment. Patients can be treated within one to five days as outpatients. By comparison, the average hospital stay for a craniotomy (conventional neurosurgery, requiring the opening of the skull) is about 15 days. The radiosurgery outcome may not be evident until months after the treatment. Since radiosurgery does not remove the tumor but inactivates it biologically, lack of growth of the lesion is normally considered to be treatment success. General indications for radiosurgery include many kinds of brain tumors, such as acoustic neuromas, germinomas, meningiomas, metastases, and skull base tumors, as well as trigeminal neuralgia and arteriovenous malformations, among other conditions.
Stereotactic radiosurgery of spinal metastases is efficient in controlling pain in up to 90% of cases and ensures stability of the tumors on imaging evaluation in 95% of cases, and it is more efficient for spinal metastases involving one or two segments. Meanwhile, conventional external beam radiotherapy is more suitable for multiple spinal involvement.
=== Combination therapy ===
SRS may be administered alone or in combination with other therapies. For brain metastases, these treatment options include whole brain radiation therapy (WBRT), surgery, and systemic therapies. However, a recent systematic review found no difference in the effects on overall survival or deaths due to brain metastases when comparing SRS treatment alone to SRS plus WBRT or WBRT alone.
=== Other bodily organs ===
Expansion of stereotactic radiotherapy to other lesions is increasing, and includes liver cancer, lung cancer, pancreatic cancer, etc.
== Risks ==
The New York Times reported in December 2010 that radiation overdoses had occurred with the linear accelerator method of radiosurgery, due in large part to inadequate safeguards in equipment retrofitted for stereotactic radiosurgery. In the U.S. the Food and Drug Administration (FDA) regulates these devices, whereas the Gamma Knife is regulated by the Nuclear Regulatory Commission.
There is evidence that immunotherapy may be useful for treatment of radiation necrosis following stereotactic radiotherapy.
== Types of radiation source ==
The selection of the proper kind of radiation and device depends on many factors including lesion type, size, and location in relation to critical structures. Data suggest that similar clinical outcomes are possible with all of the various techniques. More important than the device used are issues regarding indications for treatment, total dose delivered, fractionation schedule and conformity of the treatment plan.
=== Gamma Knife ===
A Gamma Knife (also known as the Leksell Gamma Knife) is used to treat brain tumors by administering high-intensity gamma radiation therapy in a manner that concentrates the radiation over a small volume. The device was invented in 1967 at the Karolinska Institute in Stockholm, Sweden, by Lars Leksell, Romanian-born neurosurgeon Ladislau Steiner, and radiobiologist Börje Larsson from Uppsala University, Sweden.
A Gamma Knife typically contains 201 cobalt-60 sources of approximately 30 curies each (1.1 TBq), placed in a hemispheric array in a heavily shielded assembly. The device aims gamma radiation through a target point in the patient's brain. The patient wears a specialized helmet that is surgically fixed to the skull, so that the brain tumor remains stationary at the target point of the gamma rays. An ablative dose of radiation is thereby sent through the tumor in one treatment session, while surrounding brain tissues are relatively spared.
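As a quick arithmetic check of the figures above (201 sources of roughly 30 curies each, with one curie equal to 3.7 × 10^10 Bq):

```python
CI_TO_BQ = 3.7e10  # one curie in becquerels

sources = 201
activity_per_source_ci = 30.0

per_source_tbq = activity_per_source_ci * CI_TO_BQ / 1e12
total_tbq = sources * per_source_tbq

print(f"per source: ~{per_source_tbq:.1f} TBq")        # ~1.1 TBq, as quoted above
print(f"total (fresh loading): ~{total_tbq:.0f} TBq")  # ~223 TBq across all 201 sources
```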
Gamma Knife therapy, like all radiosurgery, uses doses of radiation to kill cancer cells and shrink tumors, delivered precisely to avoid damaging healthy brain tissue. Gamma Knife radiosurgery is able to accurately focus many beams of gamma radiation on one or more tumors. Each individual beam is of relatively low intensity, so the radiation has little effect on intervening brain tissue and is concentrated only at the tumor itself.
Gamma Knife radiosurgery has proven effective for patients with benign or malignant brain tumors up to 4 cm (1.6 in) in size, vascular malformations such as an arteriovenous malformation (AVM), pain, and other functional problems. For treatment of trigeminal neuralgia the procedure may be used repeatedly on patients.
Acute complications following Gamma Knife radiosurgery are rare, and complications are related to the condition being treated.
=== Linear accelerator-based therapies ===
A linear accelerator (linac) produces x-rays from the impact of accelerated electrons striking a high z target, usually tungsten. The process is also referred to as "x-ray therapy" or "photon therapy." The emission head, or "gantry", is mechanically rotated around the patient in a full or partial circle. The table where the patient is lying, the "couch", can also be moved in small linear or angular steps. The combination of the movements of the gantry and of the couch allow the computerized planning of the volume of tissue that is going to be irradiated. Devices with a high energy of 6 MeV are commonly used for the treatment of the brain, due to the depth of the target. The diameter of the energy beam leaving the emission head can be adjusted to the size of the lesion by means of collimators. They may be interchangeable orifices with different diameters, typically varying from 5 to 40 mm in 5 mm steps, or multileaf collimators, which consist of a number of metal leaflets that can be moved dynamically during treatment in order to shape the radiation beam to conform to the mass to be ablated. As of 2017 Linacs were capable of achieving extremely narrow beam geometries, such as 0.15 to 0.3 mm. Therefore, they can be used for several kinds of surgeries which hitherto had been carried out by open or endoscopic surgery, such as for trigeminal neuralgia. Long-term follow-up data has shown it to be as effective as radiofrequency ablation, but inferior to surgery in preventing the recurrence of pain.
The first such dedicated Linac radiosurgery systems, with the accelerator mounted on a robotic arm, were developed by John R. Adler, a Stanford University professor of neurosurgery and radiation oncology, and Russell and Peter Schonberg at Schonberg Research, and were commercialized under the brand name CyberKnife.
=== Proton beam therapy ===
Protons may also be used in radiosurgery in a procedure called Proton Beam Therapy (PBT) or proton therapy. Protons are extracted from proton donor materials by a medical synchrotron or cyclotron, and accelerated in successive transits through a circular, evacuated conduit or cavity, using powerful magnets to shape their path, until they reach the energy required to just traverse a human body, usually about 200 MeV. They are then released toward the region to be treated in the patient's body, the irradiation target. In some machines, which deliver protons of only a specific energy, a custom mask made of plastic is interposed between the beam source and the patient to adjust the beam energy to provide the appropriate degree of penetration. The phenomenon of the Bragg peak of ejected protons gives proton therapy advantages over other forms of radiation, since most of the proton's energy is deposited within a limited distance, so tissue beyond this range (and to some extent also tissue inside this range) is spared from the effects of radiation. This property of protons, which has been called the "depth charge effect" by analogy to the explosive weapons used in anti-submarine warfare, allows for conformal dose distributions to be created around even very irregularly shaped targets, and for higher doses to targets surrounded or backstopped by radiation-sensitive structures such as the optic chiasm or brainstem. The development of "intensity modulated" techniques allowed similar conformities to be attained using linear accelerator radiosurgery.
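The statement that roughly 200 MeV is enough to just traverse a human body can be illustrated with the empirical Bragg-Kleeman range-energy rule R ≈ α·E^p. The water-specific constants below (α ≈ 0.0022 cm·MeV^-p, p ≈ 1.77) are commonly quoted approximations and are assumptions here rather than values taken from this article.

```python
def proton_range_in_water_cm(energy_mev: float, alpha: float = 0.0022, p: float = 1.77) -> float:
    """Approximate range of a proton in water via the Bragg-Kleeman rule
    R = alpha * E**p (rough, commonly quoted fit constants)."""
    return alpha * energy_mev ** p

print(f"{proton_range_in_water_cm(200):.0f} cm")  # roughly 26 cm, about the thickness of a torso
```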
As of 2013 there was no evidence that proton beam therapy is better than any other types of treatment in most cases, except for a "handful of rare pediatric cancers". Critics, responding to the increasing number of very expensive PBT installations, spoke of a "medical arms race" and "crazy medicine and unsustainable public policy".
== External links ==
Treating Tumors that Move with Respiration Book on Radiosurgery to moving targets (July 2007)
Shaped Beam Radiosurgery Book on LINAC-based radiosurgery using multileaf collimation (March 2011) | Wikipedia/Radiosurgery |
The Energy Multiplier Module (EM² or EM squared) is a nuclear fission power reactor under development by General Atomics. It is a fast-neutron version of the Gas Turbine Modular Helium Reactor (GT-MHR) and is capable of converting spent nuclear fuel into electricity and industrial process heat.
== Design specifications ==
EM2 is an advanced modular reactor expected to produce 265 MWe (500 MWth) of power with evaporative cooling (240 MWe with dry cooling) at a core outlet temperature of 850 °C (1,600 °F). The reactor will be fully enclosed in an underground containment structure and is designed to operate for 30 years without requiring refueling. EM2 differs from current reactors in that it does not use water coolant but is instead a gas-cooled fast reactor, which uses helium as a coolant for an additional level of safety. The reactor uses a composite of silicon carbide as the fuel cladding material and zirconium silicide as the neutron reflector material. The reactor unit is coupled to a direct-drive helium closed-cycle gas turbine, which drives a generator to produce electricity.
The nuclear core design is based upon a new conversion technique in which an initial "starter" section of the core provides the neutrons to convert fertile material (used nuclear fuel, thorium, or depleted uranium) into burnable fissile fuel. First generation EM2 units use enriched uranium starters (approximately 15 percent U235) to initiate the conversion process. The starter U235 is consumed as the fertile material is converted to fissile fuel. The core life expectancy is approximately 30 years without refueling or reshuffling the fuel.
Substantial amounts of usable fissile material remain in the EM2 core at the end of life. This material can be reused as the starter for the second generation of EM2s, without conventional nuclear reprocessing. There is no separation of individual heavy metals required and no additional enriched uranium needed. Only fission products would be removed, which would decay to near-background radiation levels in about 500 years compared to conventional spent fuel, which requires about 10,000 years.
All EM2 heavy metal discharges could be recycled into new EM2 units, effectively closing the nuclear fuel cycle, which minimizes nuclear proliferation risks and the need for long-term repositories to secure nuclear materials.
== Economics and workforce capacity ==
EM2 power costs are expected to be lower than those of conventional plants due to high power conversion efficiency (from thermal input to electric output), a reduced number of components, and long core life. EM2 is expected to achieve a thermal efficiency of above 50% due to its high core outlet temperature and closed Brayton power cycle. The Brayton cycle eliminates many expensive components, including steam generators, pressurizers, condensers, and feedwater pumps. The design would utilize only one sixth of the nuclear concrete of a conventional light water reactor.
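The "above 50%" figure is consistent with the power ratings quoted in the design section; a one-line check, using only the numbers given earlier in this article:

```python
# Net efficiency implied by the design figures quoted earlier
# (265 MWe electrical output from 500 MWth thermal power, evaporative cooling).
electric_mw, thermal_mw = 265.0, 500.0
efficiency = electric_mw / thermal_mw
print(f"implied net efficiency: {efficiency:.0%}")  # 53%, consistent with "above 50%"
```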
Each module can be manufactured in either U.S. domestic or foreign facilities using replacement parts manufacturing and supply chain management with large components shipped by commercial truck or rail to a site for final assembly, where it will be fully enclosed in an underground containment structure. Dry cooling capability allows siting in locations without a source of cooling water.
If the reactor is to become part of a hydrogen economy, the coolant outlet temperature of 850 °C would allow the sulfur iodine cycle to be used which directly converts thermal energy into hydrogen (without electric or other intermediate steps) with an overall thermal efficiency around 50%.
== Nuclear waste ==
EM2 can burn used nuclear fuel, also referred to as "spent fuel" from current light water reactors. It can utilize an estimated 97% of unused fuel that current reactors leave behind as waste.
Spent fuel rods from conventional nuclear reactors are put into storage and considered to be nuclear waste by the nuclear industry and the general public. Nuclear waste from light water reactors retains more than 95% of its original energy because such reactors cannot burn fertile U238, while fast reactors can. The current U.S. inventory of spent fuel is equivalent to nine trillion barrels of oil, four times more than the known reserves.
== Non-proliferation ==
By using spent nuclear waste and depleted uranium stockpiles as its fuel source, a large-scale deployment of the EM2 could reduce the long-term need for uranium enrichment and eliminate conventional nuclear reprocessing, which requires plutonium separation.
Conventional light water reactors require refueling every 18 months. EM2's 30-year fuel cycle minimizes the need for fuel handling and reduces access to fuel material, thus reducing proliferation concerns.
== Nuclear safety and security ==
EM2 utilizes passive safety systems designed to safely shut down the reactor in emergency conditions using only gravity and natural convection. Control rods are automatically inserted by gravity during a loss-of-power incident. Natural convection flow is used to cool the core during whole-site loss-of-power incidents. No external water supply is necessary for emergency cooling. The use of silicon carbide as fuel cladding in the core ensures no hydrogen production during accident scenarios and allows an extended response period compared to the Zircaloy metal cladding used in current reactors.
Underground siting improves safety and security of the plant against terrorism and other threats.
EM2's high operating temperature can provide process heat for petrochemical fuel products and alternative fuels, such as biofuels and hydrogen.
== See also ==
American Association for the Advancement of Science
Nuclear Energy Institute
Nuclear power
Nuclear safety in the United States
Economics of new nuclear power plants
United States Department of Energy
== External links ==
Official website
2011-11-28: Presentation about the EM2 reactor at the Department of Nuclear Engineering, University of California-Berkeley, ustream video Previous presentation
2015-05: Testimony of the Sr. Vice President of General Atomics before the Committee on Science, Space and Technology: [1] | Wikipedia/Energy_Multiplier_Module |
"Graphite reactor" directs here. For the graphite reactor at Oak Ridge National Laboratory, see X-10 Graphite Reactor.
A graphite-moderated reactor is a nuclear reactor that uses carbon as a neutron moderator, which allows natural uranium to be used as nuclear fuel.
The first artificial nuclear reactor, the Chicago Pile-1, used nuclear graphite as a moderator. Graphite-moderated reactors were involved in two of the best-known nuclear disasters: an untested graphite annealing process contributed to the Windscale fire (but the graphite itself did not catch fire), while a graphite fire during the Chernobyl disaster contributed to the spread of radioactive material.
== Types ==
Several types of graphite-moderated nuclear reactors have been used in commercial electricity generation:
Gas-cooled reactors
Magnox
UNGG reactor
Advanced gas-cooled reactor (AGR)
Water-cooled reactors
RBMK
MKER
EGP-6
Hanford N-Reactor (dual use)
ADE-2 (dual use)
High-temperature gas-cooled reactors (past)
Dragon reactor
AVR
Peach Bottom Nuclear Generating Station, Unit 1
THTR-300
Fort St. Vrain Generating Station
High-temperature gas-cooled reactors (in development or construction)
Pebble-bed reactor
Very high temperature reactor
Prismatic fuel reactor
UHTREX (Ultra-High Temperature Reactor Experiment)
Other
Molten salt reactor
== Research reactors ==
There have been a number of research or test reactors built that use graphite as the moderator.
Chicago Pile-1
Chicago Pile-2
Transient Reactor Test Facility (TREAT)
Molten Salt Reactor Experiment (MSRE)
== History ==
The first artificial nuclear reactor, Chicago Pile-1, a graphite-moderated device that produced between 0.5 and 200 watts, was constructed by a team led by Enrico Fermi in 1942. The construction and testing of this reactor (an "atomic pile") was part of the Manhattan Project. This work led to the construction of the X-10 Graphite Reactor at Oak Ridge National Laboratory, which was the first nuclear reactor designed and built for continuous operation, and began operation in 1943.
=== Accidents ===
There have been several major accidents in graphite-moderated reactors, with the Windscale fire and the Chernobyl disaster probably the best known.
In the Windscale fire, an untested annealing process for the graphite was used, and that contributed to the accident – however it was the uranium fuel rather than the graphite in the reactor that caught fire. The only graphite moderator damage was found to be localized around burning fuel elements.
In the Chernobyl disaster, the graphite was a contributing factor in the cause of the accident. Due to overheating from a lack of adequate cooling, the fuel rods began to deteriorate. After the SCRAM (AZ-5) button was pressed to shut down the reactor, the control rods jammed partway into the core; because the rods' graphite tips displaced the neutron-absorbing cooling water as they entered, the insertion briefly added reactivity instead of removing it. This has been dubbed the "final trigger" of the events preceding the rupture. The massive power excursion during the mishandled test led to the rupture of the reactor vessel and a series of steam explosions, which destroyed the reactor building. Now exposed to both air and the heat from the reactor core, the graphite moderator caught fire, and this fire sent a plume of highly radioactive fallout into the atmosphere and over an extensive geographical area, contributing substantially to the spread of radioactive material.
In addition, the French Saint-Laurent Nuclear Power Plant and the Spanish Vandellòs Nuclear Power Plant – both UNGG graphite-moderated natural uranium reactors – suffered major accidents. Particularly noteworthy is a partial core meltdown on 17 October 1969 and a heat excursion during graphite annealing on 13 March 1980 in Saint-Laurent, which were both classified as INES 4. The Vandellòs NPP was damaged on 19 October 1989, and a repair was considered uneconomical.
== References == | Wikipedia/Graphite-moderated_reactor |
Tomotherapy is a type of radiation therapy treatment machine. In tomotherapy a thin radiation beam is modulated as it rotates around the patient, while the patient is moved through the bore of the machine. The name comes from the use of a strip-shaped beam, so that only one "slice" (Greek prefix "tomo-") of the target is exposed to the radiation at any one time. The external appearance of the system and the movement of the radiation source and patient can be considered analogous to a CT scanner (computed tomography), which uses lower doses of radiation for imaging. Like a conventional machine used for X-ray external beam radiotherapy (often referred to as a linear accelerator, or linac, after its main component), the tomotherapy machine generates its radiation beam with a linear accelerator, but the external appearance of the machine, patient positioning, and treatment delivery differ. Conventional linacs do not work on a slice-by-slice basis but typically use a large-area beam, which can also be resized and modulated.
== General principles ==
The treatment field's length (the width of the radiation slice) is adjustable using collimator jaws. In static-jaw delivery, the field length remains constant throughout a treatment. In dynamic-jaw delivery, the field length changes during delivery, starting and finishing at its minimum setting, which reduces the dose delivered beyond the ends of the target.
Tomotherapy treatment times vary relative to conventional radiation therapy treatment times; they can be as low as 6.5 minutes for a common prostate treatment, excluding extra time for imaging. Modern tomotherapy and conventional linac systems incorporate megavoltage and/or kilovoltage X-ray imaging systems, enabling image-guided radiation therapy (IGRT). In tomotherapy, images are acquired in a manner very similar to a CT scanner, thanks to their closely related design.
There are few head-to-head comparisons of tomotherapy and other IMRT techniques; however, there is some evidence that a conventional linac using VMAT can provide faster treatment, whereas tomotherapy is better able to spare surrounding healthy tissue while delivering a uniform dose.
=== Helical delivery ===
In helical tomotherapy, the linac rotates on its gantry at a constant speed while the beam is delivered and the couch moves the patient through the bore, so that from the patient's perspective the path traced out by the radiation source is a helix.
While helical tomotherapy can treat very long volumes without a need to abut fields in the longitudinal direction, it does display a distinct artifact due to "thread effect" when treating non-central tumors. Thread effect can be suppressed during planning through good pitch selection.
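The pitch relationship can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not vendor software: the jaw width, gantry period, couch speed, and source-to-axis distance are illustrative assumptions rather than machine specifications.

```python
# Minimal sketch: helical pitch and the source path seen from the patient frame.
# All numerical values (jaw width, rotation period, couch speed, source-axis
# distance) are illustrative assumptions, not machine specifications.
import math

def pitch(couch_travel_per_rotation_cm: float, field_width_cm: float) -> float:
    """Pitch: couch travel per gantry rotation divided by the longitudinal
    field width set by the collimator jaws."""
    return couch_travel_per_rotation_cm / field_width_cm

def source_position(t_s: float, rotation_period_s: float,
                    couch_speed_cm_s: float, radius_cm: float = 85.0):
    """Source location in the patient frame: a helix of the given radius
    advancing along the couch direction as time t progresses."""
    angle = 2.0 * math.pi * t_s / rotation_period_s
    return (radius_cm * math.cos(angle),
            radius_cm * math.sin(angle),
            couch_speed_cm_s * t_s)

# Example: 2.5 cm jaws, 10 s per rotation, 0.1 cm/s couch speed
travel_per_rotation = 0.1 * 10.0                         # 1.0 cm per rotation
print(f"pitch = {pitch(travel_per_rotation, 2.5):.2f}")  # -> 0.40
```

Smaller pitch values increase the overlap between successive rotations; pitch selection during planning is one way the thread-effect artifact described above can be reduced.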
=== Fixed-angle delivery ===
Fixed-angle tomotherapy uses multiple tomotherapy beams, each delivered from a separate fixed gantry angle, in which only the couch moves during beam delivery. This is branded as TomoDirect, but has also been called topotherapy.
The technology enables fixed beam treatments by moving the patient through the machine bore while maintaining specified beam angles.
== Clinical considerations ==
Lung cancer, head and neck tumors, breast cancer, prostate cancer, stereotactic radiosurgery (SRS) and stereotactic body radiotherapy (SBRT) are some examples of treatments commonly performed using tomotherapy.
In general, radiation therapy (or radiotherapy) has developed with a strong reliance on homogeneity of dose throughout the tumor. Tomotherapy involves the sequential delivery of radiation to different parts of the tumor, which raises two important issues. First, joining adjacent slices, a problem known in radiotherapy as "field matching", brings with it the possibility of a less-than-perfect match between two adjacent fields, with a resultant hot and/or cold spot within the tumor. The second issue is that if the patient or tumor moves during this sequential delivery, then again a hot or cold spot will result. The first problem is reduced by the use of a helical motion, as in spiral computed tomography.
Some research has suggested tomotherapy provides more conformal treatment plans and decreased acute toxicity.
Non-helical static beam techniques such as IMRT and TomoDirect are well suited to whole breast radiation therapy. These treatment modes avoid the low-dose integral splay and long treatment times associated with helical approaches by confining dose delivery to tangential angles.
The risk associated with spreading a low dose over large volumes of healthy tissue is accentuated in younger patients with early-stage breast cancer, where cure rates are high and life expectancy is substantial.
Static beam angle approaches aim to maximize the therapeutic ratio by ensuring that the tumor control probability (TCP) significantly outweighs the associated normal tissue complication probability (NTCP).
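As a rough illustration of the TCP/NTCP trade-off, the sketch below uses generic logistic dose-response curves; the D50 and slope parameters are invented for illustration only and do not come from any clinical data or published model cited here.

```python
# Illustration of the therapeutic-ratio idea: tumor control probability (TCP)
# and normal tissue complication probability (NTCP) as generic sigmoid
# dose-response curves. D50 and slope values are illustrative assumptions only.
import math

def sigmoid_response(dose_gy: float, d50_gy: float, gamma50: float) -> float:
    """Probability of effect at a given dose; d50 is the dose giving a 50%
    probability and gamma50 the normalized slope at that point."""
    return 1.0 / (1.0 + math.exp(4.0 * gamma50 * (1.0 - dose_gy / d50_gy)))

tumor_dose, normal_dose = 60.0, 30.0               # Gy, hypothetical plan
tcp = sigmoid_response(tumor_dose, d50_gy=50.0, gamma50=2.0)
ntcp = sigmoid_response(normal_dose, d50_gy=60.0, gamma50=3.0)
print(f"TCP ~ {tcp:.2f}, NTCP ~ {ntcp:.3f}")       # wider gap = better ratio
```

A plan that keeps the normal-tissue dose well below the tumor dose pushes TCP toward 1 while holding NTCP near 0, which is the sense in which beam-angle selection "maximizes the therapeutic ratio".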
== History ==
The tomotherapy technique was developed in the early 1990s at the University of Wisconsin–Madison by Professor Thomas Rockwell Mackie and Paul Reckwerdt. A small megavoltage x-ray source was mounted in a similar fashion to a CT x-ray source, and the geometry provided the opportunity to provide CT images of the body in the treatment setup position. Although original plans were to include kilovoltage CT imaging, current models use megavoltage energies. With this combination, the unit was one of the first devices capable of providing modern image-guided radiation therapy (IGRT).
The first implementation of tomotherapy was the Corvus system developed by Nomos Corporation, with the first patient treated in April 1994. This was the first commercial system for planning and delivering intensity modulated radiation therapy (IMRT). The original system, designed solely for use in the brain, incorporated a rigid skull-based fixation system to prevent patient motion between the delivery of each slice of radiation. But some users eschewed the fixation system and applied the technique to tumors in many different parts of the body.
=== Mobile tomotherapy ===
Due to their internal shielding and small footprint, TomoTherapy Hi-Art and TomoTherapy TomoHD treatment machines were the only high-energy radiotherapy treatment machines used in relocatable radiotherapy treatment suites. Two different types of suites were available: TomoMobile, developed by TomoTherapy Inc., which was a moveable truck; and Pioneer, developed by UK-based Oncology Systems Limited. The latter was developed to meet UK and European transport law requirements and was a contained unit placed on a concrete pad, able to be delivering radiotherapy treatments in less than five weeks.
== See also ==
Radiation therapy
Radiosurgery
== References ==
== External links == | Wikipedia/Tomotherapy |
Nuclear energy policy is a national and international policy concerning some or all aspects of nuclear energy and the nuclear fuel cycle, such as uranium mining, ore concentration, conversion, enrichment for nuclear fuel, generating electricity by nuclear power, storing and reprocessing spent nuclear fuel, and disposal of radioactive waste. Nuclear energy policies often include the regulation of energy use and standards relating to the nuclear fuel cycle. Other measures include efficiency standards, safety regulations, emission standards, fiscal policies, and legislation on energy trading, transport of nuclear waste and contaminated materials, and their storage. Governments might subsidize nuclear energy and arrange international treaties and trade agreements about the import and export of nuclear technology, electricity, nuclear waste, and uranium.
Since about 2001 the term nuclear renaissance has been used to refer to a possible nuclear power industry revival, but nuclear electricity generation in 2012 was at its lowest level since 1999. It has since increased, reaching 2,653 TWh in 2021, a level last seen in 2006. The share of nuclear power in electricity production, however, is at a historic low, now below 10%, down from a maximum of 17.5% in 1996.
Following the March 2011 Fukushima I nuclear accidents, China, Germany, Switzerland, Israel, Malaysia, Thailand, the United Kingdom, and the Philippines are reviewing their nuclear power programs. Indonesia and Vietnam still plan to build nuclear power plants. Thirty-one countries operate nuclear power stations, and a considerable number of new reactors are being built in China, South Korea, India, and Russia. As of June 2011, countries such as Australia, Austria, Denmark, Greece, Ireland, Latvia, Liechtenstein, Luxembourg, Malta, Portugal, Israel, Malaysia, and Norway have no nuclear power stations and remain opposed to nuclear power.
Since nuclear energy and nuclear weapons technologies are closely related, military aspirations can act as a factor in energy policy decisions. The fear of nuclear proliferation influences some international nuclear energy policies.
== The global picture ==
After the 1986 Chernobyl disaster, public fear of nuclear power led to a virtual halt in reactor construction, and several countries decided to phase out nuclear power altogether. However, increasing energy demand was believed to require new sources of electric power, and rising fossil fuel prices, coupled with concerns about greenhouse gas emissions (see Climate change mitigation), sparked heightened interest in nuclear power and predictions of a nuclear renaissance.
In 2004, the largest producer of nuclear energy was the United States with 28% of worldwide capacity, followed by France (18%) and Japan (12%). In 2007, 31 countries operated nuclear power plants. In September 2008 the IAEA projected nuclear power to remain at a 12.4% to 14.4% share of the world's electricity production through 2030.
In 2013, almost two years after Fukushima, there were, according to the IAEA, 390 operating nuclear generating units throughout the world, more than 10% fewer than before Fukushima and exactly the same number as in 1986, the year of the Chernobyl disaster. Asia is expected to be the primary growth market for nuclear energy in the foreseeable future, despite continued uncertainty in the energy outlooks for Japan, South Korea, and others in the region. As of 2014, 63% of all reactors under construction globally were in Asia.
== Policy issues ==
=== Nuclear concerns ===
Nuclear accidents and radioactive waste disposal are major concerns. Other concerns include nuclear proliferation, the high cost of nuclear power plants, and nuclear terrorism.
=== Energy security ===
For some countries, nuclear power affords energy independence. In the words of the French, "We have no coal, we have no oil, we have no gas, we have no choice." Japan, similarly lacking in indigenous natural resources for power supply, relied on nuclear power for about one-third of its electricity prior to the Fukushima nuclear disaster; since March 2011, Japan has sought to offset the loss of nuclear power with increased reliance on imported liquefied natural gas, which has led to the country's first trade deficits in decades. The discussion of a future for nuclear energy is therefore intertwined with discussions of energy security and the energy mix, including the development of renewable energy.
Nuclear power has been relatively unaffected by embargoes, and uranium is mined in "reliable" countries, including Australia and Canada.
Many commentators have criticized Germany's Energiewende policy of shutting down its world-class nuclear fleet after the Fukushima disaster and relying instead on renewable energy sources, which in the interim has made the country heavily dependent on Russian gas. Responding to Russia's attempt to exploit this dependency by shutting off natural gas supplies, Germany is ramping up coal production while maintaining two nuclear plants in reserve.
=== Nuclear energy history and trends ===
Proponents have long made hopeful projections of the expected growth of nuclear power, but major accidents and a well-funded anti-nuclear lobby have kept costs high and growth much lower than forecast. In 1973 and 1974, the International Atomic Energy Agency predicted a worldwide installed nuclear capacity of 3,600 to 5,000 gigawatts by 2000. The IAEA's 1980 projection was for 740 to 1,075 gigawatts of installed capacity by the year 2000. Even after the 1986 Chernobyl disaster, the Nuclear Energy Agency forecast an installed nuclear capacity of 497 to 646 gigawatts for the year 2000. The actual capacity in 2000 was 356 gigawatts. Moreover, construction costs have often been much higher, and construction times much longer, than projected, failing to meet optimistic projections of "unlimited cheap, clean, and safe electricity."
Since about 2001 the term nuclear renaissance has been used to refer to a possible nuclear power industry revival, driven by rising fossil fuel prices and new concerns about meeting greenhouse gas emission limits. However, nuclear electricity generation in 2012 was at its lowest level since 1999, and new reactors under construction in Finland and France, which were meant to lead a nuclear renaissance, have been delayed and are running over-budget. China has 32 new reactors under construction, and there are also a considerable number of new reactors being built in South Korea, India, and Russia. At the same time, at least 100 older and smaller reactors will "most probably be closed over the next 10-15 years". So the expanding nuclear programs in Asia are balanced by retirements of aging plants and nuclear reactor phase-outs.
In March 2011 the nuclear emergencies at Japan's Fukushima I Nuclear Power Plant and shutdowns at other nuclear facilities raised questions among some commentators over the future of the renaissance. Platts has reported that "the crisis at Japan's Fukushima nuclear plants has prompted leading energy-consuming countries to review the safety of their existing reactors and cast doubt on the speed and scale of planned expansions around the world". China, Germany, Switzerland, Israel, Malaysia, Thailand, United Kingdom, Italy and the Philippines have reviewed their nuclear power programs. Indonesia and Vietnam still plan to build nuclear power plants. Countries such as Australia, Austria, Denmark, Greece, Ireland, Latvia, Liechtenstein, Luxembourg, Portugal, Israel, Malaysia, New Zealand, and Norway remain opposed to nuclear power. Following the Fukushima I nuclear accidents, the International Energy Agency halved its estimate of additional nuclear generating capacity built by 2035.
Following the Fukushima nuclear disaster, Germany permanently shut down eight of its reactors and pledged to close the rest by 2022. In 2011 Siemens exited the nuclear power sector following the changes to German energy policy, and supported the German government's planned energy transition to renewable energy technologies. The Italians voted overwhelmingly to keep their country non-nuclear. Switzerland and Spain have banned the construction of new reactors. Japan's prime minister called for a dramatic reduction in Japan's reliance on nuclear power. Taiwan's president did the same. Mexico has sidelined construction of 10 reactors in favor of developing natural-gas-fired plants. Belgium decided to phase out its nuclear plants.
China—nuclear power's largest prospective market—suspended approvals of new reactor construction while conducting a lengthy nuclear-safety review. In 2012 a new safety plan for nuclear power was approved by State Council, and full incorporation of International Atomic Energy Agency (IAEA) safety standards became explicit. In the 13th Five-Year Plan from 2016, six to eight nuclear reactors were to be approved each year. A draft of the 14th Five-Year Plan (2021-2025) released in March 2021 showed government plans to reach 70 GWe gross of nuclear capacity by the end of 2025.
Neighboring India, another potential nuclear boom market, has encountered effective local opposition, growing national wariness about foreign nuclear reactors, and a nuclear liability controversy that threatens to prevent new reactor imports. There have been mass protests against the French-backed 9,900 MW Jaitapur Nuclear Power Project in Maharashtra and the 2,000 MW Koodankulam Nuclear Power Plant in Tamil Nadu. The government of West Bengal state has also refused permission for a proposed 6,000 MW facility near the town of Haripur that was intended to host six Russian reactors. In March 2018, the government stated that nuclear capacity would fall well short of its 63 GWe target and that total nuclear capacity is likely to be about 22.5 GWe by the year 2031.
Following IPCC announcements, climate concerns again started to dominate world opinion. With rising oil and gas prices in 2022, many countries are reconsidering nuclear power.
In October 2021 the Japanese cabinet approved the new Plan for Electricity Generation to 2030 prepared by the Agency for Natural Resources and Energy (ANRE) and an advisory committee, following public consultation. The nuclear target for 2030 of 20-22% is unchanged from that in the 2015 plan, but renewables increase greatly to 36-38%, including geothermal and hydro. Hydrogen and ammonia are included at 1%. The plan would require the restart of another ten reactors. Prime minister Fumio Kishida in July 2022 announced that the country should consider building advanced reactors and extending operating licences beyond 60 years.
In March 2022 Belgium delayed its plans to phase out nuclear energy by a decade. The prime minister said that two reactors (Doel 4 and Tihange 3) would continue operating to 2035 to "strengthen our country's independence from fossil fuels in a turbulent geopolitical environment." In June, Engie said it was seeking financial aid from the government for the continued operation of the two reactors.
=== Climate Change and the Energy Transition ===
Eliminating fossil fuels is essential to solving the climate change crisis. Nuclear power has one of the lowest life-cycle greenhouse gas emissions of any electricity source.
Historically, nuclear power is estimated to have prevented 64 gigatonnes of CO2-equivalent greenhouse-gas emissions between 1971 and 2009.
With a significant amount of renewable energy installed in the 21st century, it has been speculated that tensions between nuclear and renewable national energy development strategies might reduce their effectiveness in terms of climate change mitigation.
However, newer studies have refuted this idea. Both nuclear and renewable energy have been shown to be equally effective in preventing greenhouse-gas emissions, and an effective climate-change mitigation strategy may include both nuclear and renewable energy sources. In 2018 the IPCC provided advice to policymakers giving four illustrative model pathways to limit warming to 1.5 degrees. In each of these pathways, nuclear energy generation increases by between 98% and 501% over 2010 levels by 2050.
In 2021 the European Union Joint Research Centre issued the results of its study on whether nuclear power generation meets the criteria of its Green Taxonomy. The analyses did not reveal any science-based evidence that nuclear energy does more harm to human health or to the environment than other electricity production technologies already included in the EU Green Taxonomy as activities supporting climate change mitigation. As a result of this assessment, the EU Parliament voted to include nuclear energy in its Green Taxonomy.
Moreover, nuclear energy has such a low carbon footprint that it could power carbon dioxide capture and transformation, resulting in a carbon-negative process. Specifically, various organizations across the globe are working on designs for small modular reactors, a type of nuclear fission reactor that is smaller than conventional reactors. Some of these companies and agencies include ARC Nuclear in Canada, CNEA in Argentina, Areva TA in France, Toshiba and JAERI in Japan, OKB Gidropress in Russia, and OPEN100 and X-energy in the United States.
== Policies by territory ==
== See also ==
== References ==
== Further reading ==
Cooke, Stephanie (2009). In Mortal Hands: A Cautionary History of the Nuclear Age, Black Inc.
Diesendorf, Mark (2007). Greenhouse Solutions with Sustainable Energy, University of New South Wales Press.
Elliott, David (2007). Nuclear or Not? Does Nuclear Power Have a Place in a Sustainable Energy Future?, Palgrave.
Falk, Jim (1982). Global Fission: The Battle Over Nuclear Power, Oxford University Press.
Ferguson, Charles D., "Nuclear Energy: Balancing Benefits and Risks", Council on Foreign Relations, 2007
Lovins, Amory B. (1977). Soft Energy Paths: Towards a Durable Peace, Friends of the Earth International, ISBN 0-06-090653-7
Lovins, Amory B. and John H. Price (1975). Non-Nuclear Futures: The Case for an Ethical Energy Strategy, Ballinger Publishing Company, 1975, ISBN 0-88410-602-0
Lowe, Ian (2007). Reaction Time: Climate Change and the Nuclear Option, Quarterly Essay.
Pernick, Ron and Clint Wilder (2007). The Clean Tech Revolution: The Next Big Growth and Investment Opportunity, Collins, ISBN 978-0-06-089623-2
Schneider, Mycle, Steve Thomas, Antony Froggatt, Doug Koplow (August 2009). The World Nuclear Industry Status Report, German Federal Ministry of Environment, Nature Conservation and Reactor Safety.
Sovacool, Benjamin K. (2011). Contesting the Future of Nuclear Power: A Critical Global Assessment of Atomic Energy, World Scientific.
Walker, J. Samuel (2004). Three Mile Island: A Nuclear Crisis in Historical Perspective, University of California Press.
== External links ==
NEI Public Policy Information
Robert J. Duffy. Nuclear Politics in America: A History and Theory of Government Regulation (Studies in Government and Public Policy). Paperback. 1997. ISBN 0-7006-0853-2.
Carlton Stoiber, Alec Baer, Norbert Pelzer, Wolfram Tonhauser, Handbook on Nuclear Law, IAEA (International Atomic Energy Agency), 2003.
Annotated bibliography for nuclear power from the Alsos Digital Library for Nuclear Issues
Fairewinds Energy Education
Schneider, Mycle, Steve Thomas, Antony Froggatt, Doug Koplow (2016). The World Nuclear Industry Status Report: World Nuclear Industry Status as of 1 January 2016. | Wikipedia/Nuclear_energy_policy |
Neutron capture therapy (NCT) is a type of radiotherapy for treating locally invasive malignant tumors such as primary brain tumors, recurrent cancers of the head and neck region, and cutaneous and extracutaneous melanomas. It is a two-step process: first, the patient is injected with a tumor-localizing drug containing the stable isotope boron-10 (10B), which has a high propensity to capture low energy "thermal" neutrons. The neutron capture cross section of 10B (3,837 barns) is roughly 1,000 times greater than that of other elements, such as nitrogen, hydrogen, or oxygen, that occur in tissue. In the second step, the patient is irradiated with epithermal neutrons, the sources of which in the past have been nuclear reactors and now are accelerators that produce higher energy epithermal neutrons. After losing energy as they penetrate tissue, the resultant low energy "thermal" neutrons are captured by the 10B atoms. The resulting decay reaction yields high-energy alpha particles that kill the cancer cells that have taken up enough 10B.
All clinical experience with NCT to date is with boron-10; hence this method is known as boron neutron capture therapy (BNCT). Use of another non-radioactive isotope, such as gadolinium, has been limited to experimental animal studies and has not been done clinically. BNCT has been evaluated as an alternative to conventional radiation therapy for malignant brain tumors such as glioblastomas, which presently are incurable, and more recently, locally advanced recurrent cancers of the head and neck region and, much less often, superficial melanomas mainly involving the skin and genital region.
== Boron neutron capture therapy ==
=== History ===
James Chadwick discovered the neutron in 1932. Shortly thereafter, H. J. Taylor reported that boron-10 nuclei had a high propensity to capture low energy "thermal" neutrons. This reaction causes the disintegration of boron-10 nuclei into helium-4 nuclei (alpha particles) and lithium-7 ions. In 1936, G.L. Locher, a scientist at the Franklin Institute in Philadelphia, Pennsylvania, recognized the therapeutic potential of this discovery and suggested that this specific type of neutron capture reaction could be used to treat cancer. William Sweet, a neurosurgeon at the Massachusetts General Hospital, first suggested in 1951 the possibility of using BNCT to treat the most malignant of all brain tumors, glioblastoma multiforme (GBM), using borax as the boron delivery agent. A clinical trial subsequently was initiated by Lee Farr using a specially constructed nuclear reactor at the Brookhaven National Laboratory in Long Island, New York, U.S.A. Another clinical trial was initiated in 1954 by Sweet at the Massachusetts General Hospital using the Research Reactor at the Massachusetts Institute of Technology (MIT) in Boston.
A number of research groups worldwide have continued the early ground-breaking clinical studies of Sweet and Farr, and subsequently the pioneering clinical studies of Hiroshi Hatanaka (畠中洋) in the 1960s, to treat patients with brain tumors. Since then, clinical trials have been done in a number of countries including Japan, the United States, Sweden, Finland, the Czech Republic, Taiwan, and Argentina. After the nuclear accident at Fukushima (2011), the clinical program there transitioned from a reactor neutron source to accelerators that would produce high energy neutrons that become thermalized as they penetrate tissue.
== Basic principles ==
Neutron capture therapy is a binary system that consists of two separate components to achieve its therapeutic effect. Each component in itself is non-tumoricidal, but when combined they can be highly lethal to cancer cells.
BNCT is based on the nuclear capture and decay reactions that occur when non-radioactive boron-10, which makes up approximately 20% of natural elemental boron, is irradiated with neutrons of the appropriate energy to yield excited boron-11 (11B*). This undergoes radioactive decay to produce high-energy alpha particles (4He nuclei) and high-energy lithium-7 (7Li) nuclei. The nuclear reaction is:
10B + nth → [11B]* → α + 7Li + 2.31 MeV
Both the alpha particles and the lithium nuclei produce closely spaced ionizations in the immediate vicinity of the reaction, with a range of 5–9 μm. This is approximately the diameter of the target cell, and thus the lethality of the capture reaction is limited to boron-containing cells. BNCT, therefore, can be regarded as both a biologically and a physically targeted type of radiation therapy. The success of BNCT is dependent upon the selective delivery of sufficient amounts of 10B to the tumor with only small amounts localized in the surrounding normal tissues. Thus, normal tissues, if they have not taken up sufficient amounts of boron-10, can be spared from the neutron capture and decay reactions. Normal tissue tolerance, however, is determined by the nuclear capture reactions that occur with normal tissue hydrogen and nitrogen.
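The selectivity argument can be illustrated with a back-of-the-envelope capture-rate comparison. In the sketch below, the 10B cross section (3,837 barns) comes from the text; the hydrogen and nitrogen thermal capture cross sections and the soft-tissue composition are approximate literature values assumed here purely for illustration.

```python
# Back-of-the-envelope comparison of thermal-neutron capture in boron-loaded
# tumor versus boron-free tissue. The 10B cross section is from the text; the
# H and N cross sections and tissue composition are approximate assumed values.
N_A = 6.022e23           # atoms per mole
BARN = 1.0e-24           # cm^2

RHO = 1.0                # g/cm^3, soft tissue approximated as water-like
# element: (mass fraction, atomic mass g/mol, thermal capture cross section in barns)
TISSUE = {"H": (0.10, 1.008, 0.33),    # 1H(n,gamma)2H
          "N": (0.03, 14.01, 1.83)}    # 14N(n,p)14C

def macroscopic_capture(b10_ug_per_g: float) -> float:
    """Macroscopic capture cross section (1/cm) of tissue plus dissolved 10B."""
    sigma = sum((w * RHO * N_A / A) * s * BARN for w, A, s in TISSUE.values())
    n_b10 = (b10_ug_per_g * 1e-6) * RHO * N_A / 10.01
    return sigma + n_b10 * 3837.0 * BARN

tumor, normal = macroscopic_capture(30.0), macroscopic_capture(0.0)
print(f"extra capture from 10B in tumor: {(tumor / normal - 1.0) * 100:.0f}%")
print(f"share of tumor captures on 10B:  {(1.0 - normal / tumor) * 100:.0f}%")
```

Under these assumed values, even at roughly 30 μg 10B/g only a minority of capture events occur on boron, which is consistent with the unavoidable background dose from hydrogen and nitrogen captures discussed in the radiobiology section below.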
A wide variety of boron delivery agents have been synthesized. The first, which has mainly been used in Japan, is a polyhedral borane anion, sodium borocaptate or BSH (Na2B12H11SH), and the second is a dihydroxyboryl derivative of phenylalanine, called boronophenylalanine or BPA. The latter has been used in many clinical trials. Following administration of either BPA or BSH by intravenous infusion, the tumor site is irradiated with neutrons, the source of which, until recently, has been specially designed nuclear reactors and now is neutron accelerators. Until 1994, low-energy (< 0.5 eV) thermal neutron beams were used in Japan and the United States, but since they have a limited depth of penetration in tissues, higher energy (> 0.5 eV to < 10 keV) epithermal neutron beams, which have a greater depth of penetration, were used in clinical trials in the United States, Europe, Japan, Argentina, Taiwan, and China until recently, when accelerators replaced the reactors. In theory BNCT is a highly selective type of radiation therapy that can target tumor cells without causing radiation damage to the adjacent normal cells and tissues. Doses up to 60–70 grays (Gy) can be delivered to the tumor cells in one or two applications, compared to 6–7 weeks for conventional fractionated external beam photon irradiation. However, the effectiveness of BNCT is dependent upon a relatively homogeneous cellular distribution of 10B within the tumor, and more specifically within the constituent tumor cells, and this is still one of the main unsolved problems that have limited its success.
== Radiobiological considerations ==
The radiation doses to tumor and normal tissues in BNCT are due to energy deposition from three types of directly ionizing radiation that differ in their linear energy transfer (LET), which is the rate of energy loss along the path of an ionizing particle:
1. Low-LET gamma rays, resulting primarily from the capture of thermal neutrons by normal tissue hydrogen atoms [1H(n,γ)2H];
2. High-LET protons, produced by the scattering of fast neutrons and from the capture of thermal neutrons by nitrogen atoms [14N(n,p)14C]; and
3. High-LET, heavier charged alpha particles (stripped down helium [4He] nuclei) and lithium-7 ions, released as products of the thermal neutron capture and decay reactions with 10B [10B(n,α)7Li].
Since both the tumor and surrounding normal tissues are present in the radiation field, even with an ideal epithermal neutron beam, there will be an unavoidable, non-specific background dose, consisting of both high- and low-LET radiation. However, a higher concentration of 10B in the tumor will result in it getting a higher total dose than that of adjacent normal tissues, which is the basis for the therapeutic gain in BNCT. The total radiation dose in Gy delivered to any tissue can be expressed in photon-equivalent units as the sum of each of the high-LET dose components multiplied by weighting factors (Gyw), which depend on the increased radiobiological effectiveness of each of these components.
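Written out explicitly, this amounts to a weighted sum of the physical dose components. The notation below is chosen here for illustration and does not appear in the source:

```latex
D_{w}\;[\mathrm{Gy_{w}}] \;=\; w_{B}\,D_{^{10}B(n,\alpha)^{7}Li}
\;+\; w_{N}\,D_{^{14}N(n,p)^{14}C}
\;+\; w_{n}\,D_{\mathrm{fast\ neutron}}
\;+\; w_{\gamma}\,D_{\gamma}
```

where each w is the RBE or CBE weighting factor of the corresponding component, and the low-LET photon component is conventionally weighted at approximately 1.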
== Clinical dosimetry ==
Biological weighting factors have been used in all of the more recent clinical trials in patients with high-grade gliomas, using boronophenylalanine (BPA) in combination with an epithermal neutron beam. The 10B(n,α)7Li part of the radiation dose to the scalp has been based on the measured boron concentration in the blood at the time of BNCT, assuming a blood: scalp boron concentration ratio of 1.5:1 and a compound biological effectiveness (CBE) factor for BPA in skin of 2.5. A relative biological effectiveness (RBE) or CBE factor of 3.2 has been used in all tissues for the high-LET components of the beam, such as alpha particles. The RBE factor is used to compare the biologic effectiveness of different types of ionizing radiation. The high-LET components include protons resulting from the capture reaction with normal tissue nitrogen, and recoil protons resulting from the collision of fast neutrons with hydrogen. It must be emphasized that the tissue distribution of the boron delivery agent in humans should be similar to that in the experimental animal model in order to use the experimentally derived values for estimation of the radiation doses for clinical radiations. For more detailed information relating to computational dosimetry and treatment planning, interested readers are referred to a comprehensive review on this subject.
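A minimal sketch of how such a weighted dose might be assembled is given below. The weighting factors (blood:scalp boron ratio of 1.5:1, CBE of 2.5 for BPA in skin, RBE of 3.2 for the high-LET components) are the values quoted in this section; the physical dose components and the boron dose-per-ppm coefficient are made-up numbers used only to show the arithmetic.

```python
# Sketch of a photon-equivalent (weighted) scalp dose calculation for BNCT.
# Weighting factors are those quoted in the text; the physical dose inputs and
# the boron dose-per-ppm coefficient are illustrative assumptions only.

def weighted_scalp_dose(blood_boron_ppm: float,
                        boron_dose_per_ppm_gy: float,
                        nitrogen_and_recoil_proton_dose_gy: float,
                        gamma_dose_gy: float) -> float:
    scalp_boron_ppm = blood_boron_ppm / 1.5        # blood:scalp ratio 1.5:1
    d_boron = scalp_boron_ppm * boron_dose_per_ppm_gy
    return (2.5 * d_boron                                 # CBE for BPA in skin
            + 3.2 * nitrogen_and_recoil_proton_dose_gy    # RBE for high-LET
            + 1.0 * gamma_dose_gy)                        # photons weighted ~1

print(f"{weighted_scalp_dose(15.0, 0.05, 0.5, 1.0):.2f} Gy-w")  # -> 3.85 Gy-w
```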
== Boron delivery agents ==
The development of boron delivery agents for BNCT began in the early 1960s and is an ongoing and difficult task. A number of boron-10 containing delivery agents have been synthesized for potential use in BNCT. The most important requirements for a successful boron delivery agent are:
low systemic toxicity and normal tissue uptake with high tumor uptake and concomitantly high tumor-to-brain (T:Br) and tumor-to-blood (T:Bl) concentration ratios (> 3–4:1);
tumor concentrations in the range of ~20-50 μg 10B/g tumor;
rapid clearance from blood and normal tissues and persistence in tumor during BNCT.
However, as of 2021 no single boron delivery agent fulfills all of these criteria. With the development of new chemical synthetic techniques and increased knowledge of the biological and biochemical requirements needed for an effective agent and their modes of delivery, a wide variety of new boron agents has emerged (see examples in Table 1). However, only one of these compounds has ever been tested in large animals, and only boronophenylalanine (BPA) and sodium borocaptate (BSH), have been used clinically.
(a) The delivery agents are not listed in any order that indicates their potential usefulness for BNCT. None of these agents have been evaluated in any animals larger than mice and rats, except for boronated porphyrin (BOPP), which also has been evaluated in dogs. However, due to the severe toxicity of BOPP in canines, no further studies were carried out.
(b) See Barth, R.F., Mi, P., and Yang, W., "Boron delivery agents for neutron capture therapy of cancer", Cancer Communications, 38:35 (doi:10.1186/s40880-018-0299-7), 2018, for an updated review.
(c) The abbreviations used in this table are defined as follows: BNCT, boron neutron capture therapy; DNA, deoxyribonucleic acid; EGF, epidermal growth factor; EGFR, epidermal growth factor receptor; MoAbs, monoclonal antibodies; VEGF, vascular endothelial growth factor.
The major challenge in the development of boron delivery agents has been the requirement for selective tumor targeting in order to achieve boron concentrations (20-50 μg/g tumor) sufficient to produce therapeutic doses of radiation at the site of the tumor with minimal radiation delivered to normal tissues. The selective destruction of infiltrative tumor (glioma) cells in the presence of normal brain cells represents an even greater challenge compared to malignancies at other sites in the body. Malignant gliomas are highly infiltrative of normal brain, histologically diverse, and heterogeneous in their genomic profiles, and it is therefore very difficult to kill all of the tumor cells.
== Gadolinium neutron capture therapy (Gd NCT) ==
There also has been some interest in the possible use of gadolinium-157 (157Gd) as a capture agent for NCT for the following reasons: First, and foremost, has been its very high neutron capture cross section of 254,000 barns. Second, gadolinium compounds, such as Gd-DTPA (gadopentetate dimeglumine Magnevist), have been used routinely as contrast agents for magnetic resonance imaging (MRI) of brain tumors and have shown high uptake by brain tumor cells in tissue culture (in vitro). Third, gamma rays and internal conversion and Auger electrons are products of the 157Gd(n,γ)158Gd capture reaction (157Gd + nth (0.025eV) → [158Gd] → 158Gd + γ + 7.94 MeV). Though the gamma rays have longer pathlengths, orders of magnitude greater depths of penetration compared with alpha particles, the other radiation products (internal conversion and Auger electrons) have pathlengths of about one cell diameter and can directly damage DNA. Therefore, it would be highly advantageous for the production of DNA damage if the 157Gd were localized within the cell nucleus. However, the possibility of incorporating gadolinium into biologically active molecules is very limited and only a small number of potential delivery agents for Gd NCT have been evaluated. Relatively few studies with Gd have been carried out in experimental animals compared to the large number with boron containing compounds (Table 1), which have been synthesized and evaluated in experimental animals (in vivo). Although in vitro activity has been demonstrated using the Gd-containing MRI contrast agent Magnevist as the Gd delivery agent, there are very few studies demonstrating the efficacy of Gd NCT in experimental animal tumor models, and, as evidenced by a lack of citations in the literature, Gd NCT has not, as of 2019, been used clinically in humans.
== Neutron sources ==
=== Clinical Studies Using Nuclear Reactors as Neutron Sources ===
Until 2014, neutron sources for NCT were limited to nuclear reactors. Reactor-derived neutrons are classified according to their energies as thermal (En < 0.5 eV), epithermal (0.5 eV < En < 10 keV), or fast (En >10 keV). Thermal neutrons are the most important for BNCT since they usually initiate the 10B(n,α)7Li capture reaction. However, because they have a limited depth of penetration, epithermal neutrons, which lose energy and fall into the thermal range as they penetrate tissues, are now preferred for clinical therapy, other than for skin tumors such as melanoma.
A number of nuclear reactors with very good neutron beam quality have been developed and used clinically. These include:
Kyoto University Research Reactor Institute (KURRI) in Kumatori, Japan;
the Massachusetts Institute of Technology Research Reactor (MITR);
the FiR1 (Triga Mk II) research reactor at VTT Technical Research Centre, Espoo, Finland;
the RA-6 CNEA reactor in Bariloche, Argentina;
the High Flux Reactor (HFR) at Petten in the Netherlands; and
Tsing Hua Open-pool Reactor (THOR) at the National Tsing Hua University, Hsinchu, Taiwan.
JRR-4 at the Japan Atomic Energy Agency, Tokai, Japan;
A compact In-Hospital Neutron Irradiator (IHNI) in a free-standing facility in Beijing, China.
As of May 2021, only the reactors in Argentina, China, and Taiwan are still being used clinically. It is anticipated that, beginning some time in 2022, clinical studies in Finland will utilize an accelerator neutron source designed and fabricated in the United States by Neutron Therapeutics, Danvers, Massachusetts.
== Clinical studies of BNCT for brain tumors ==
=== Early studies in the US and Japan ===
It was not until the 1950s that the first clinical trials were initiated by Farr at the Brookhaven National Laboratory (BNL) in New York and by Sweet and Brownell at the Massachusetts General Hospital (MGH) using the Massachusetts Institute of Technology (MIT) nuclear reactor (MITR) and several different low molecular weight boron compounds as the boron delivery agent. However, the results of these studies were disappointing, and no further clinical trials were carried out in the United States until the 1990s.
Following a two-year Fulbright fellowship in Sweet's laboratory at the MGH, clinical studies were initiated by Hiroshi Hatanaka in Japan in 1967. He used a low-energy thermal neutron beam, which had low tissue penetrating properties, and sodium borocaptate (BSH) as the boron delivery agent, which had been evaluated as a boron delivery agent by Albert Soloway at the MGH. In Hatanaka's procedure, as much as possible of the tumor was surgically resected ("debulking"), and at some time thereafter, BSH was administered by a slow infusion, usually intra-arterially, but later intravenously. Twelve to 14 hours later, BNCT was carried out at one or another of several different nuclear reactors using low-energy thermal neutron beams. The poor tissue-penetrating properties of the thermal neutron beams necessitated reflecting the skin and raising a bone flap in order to directly irradiate the exposed brain, a procedure first used by Sweet and his collaborators.
More than 200 patients were treated by Hatanaka, and subsequently by his associate, Nakagawa. Due to the heterogeneity of the patient population, in terms of the microscopic diagnosis of the tumor and its grade, size, and the ability of the patients to carry out normal daily activities (Karnofsky performance status), it was not possible to come up with definitive conclusions about therapeutic efficacy. However, the survival data were no worse than those obtained by standard therapy at the time, and there were several patients who were long-term survivors, and most probably they were cured of their brain tumors.
=== Further clinical studies in the United States and Japan ===
==== USA (2003) ====
BNCT of patients with brain tumors was resumed in the United States in the mid-1990s by Chanana, Diaz, and Coderre and their co-workers at the Brookhaven National Laboratory using the Brookhaven Medical Research Reactor (BMRR) and at Harvard/Massachusetts Institute of Technology (MIT) using the MIT Research Reactor (MITR).
For the first time, BPA was used as the boron delivery agent, and patients were irradiated with a collimated beam of higher energy epithermal neutrons, which had greater tissue-penetrating properties than thermal neutrons. A research group headed up by Zamenhof at the Beth Israel Deaconess Medical Center/Harvard Medical School and MIT was the first to use an epithermal neutron beam for clinical trials.
Initially patients with cutaneous melanomas were treated and this was expanded to include patients with brain tumors, specifically melanoma metastatic to the brain and primary glioblastomas (GBMs). Included in the research team were Otto Harling at MIT and the Radiation Oncologist Paul Busse at the Beth Israel Deaconess Medical Center in Boston. A total of 22 patients were treated by the Harvard-MIT research group. Five patients with cutaneous melanomas were also treated using an epithermal neutron beam at the MIT research reactor (MITR-II) and subsequently patients with brain tumors were treated using a redesigned beam at the MIT reactor that possessed far superior characteristics to the original MITR-II beam and BPA as the capture agent.
The clinical outcome of the cases treated at Harvard-MIT has been summarized by Busse. Although the treatment was well tolerated, there were no significant differences in the mean survival times (MSTs) of patients who had received BNCT compared to those who received conventional external beam X-irradiation.
==== Japan (2009) / Glioblastomas ====
Shin-ichi Miyatake and Shinji Kawabata at Osaka Medical College in Japan have carried out extensive clinical studies employing BPA (500 mg/kg) either alone or in combination with BSH (100 mg/kg), infused intravenously (i.v.) over 2 h, followed by neutron irradiation at Kyoto University Research Reactor Institute (KURRI) on patients with newly diagnosed and recurrent glioblastomas.
The Mean Survival Time (MST) of 10 patients with recurrent high grade gliomas in the first of their trials was 15.6 months, with one long-term survivor (>5 years).
Based on experimental animal data, which showed that BNCT in combination with X-irradiation produced enhanced survival compared to BNCT alone, in another study, Miyatake and Kawabata combined BNCT, as described above, with an X-ray boost. A total dose of 20 to 30 Gy was administered, divided into 2 Gy daily fractions. The MST of this group of patients (with newly diagnosed glioblastomas) was 23.5 months and no significant toxicity was observed, other than hair loss (alopecia). However, a significant subset of these patients, a high proportion of which had small cell variant glioblastomas, developed cerebrospinal fluid dissemination of their tumors.
==== Japan (2011) / Glioblastomas ====
In another Japanese trial with patients with newly diagnosed glioblastomas, carried out by Yamamoto et al., BPA and BSH were infused over 1 h, followed by BNCT at the Japan Research Reactor (JRR)-4 reactor. Patients subsequently received an X-ray boost after completion of BNCT. The overall median survival time (MeST) was 27.1 months, and the 1 year and 2-year survival rates were 87.5 and 62.5%, respectively.
Based on the reports of Miyatake, Kawabata, and Yamamoto, combining BNCT with an X-ray boost can produce a significant therapeutic gain. However, further studies are needed to optimize this combined therapy alone or in combination with other approaches including chemo- and immunotherapy, and to evaluate it using a larger patient population.
==== Japan (2021) / Meningiomas ====
Miyatake and his co-workers also have treated a cohort of 44 patients with recurrent high grade meningiomas (HGM) that were refractory to all other therapeutic approaches. The clinical regimen consisted of intravenous administration of boronophenylalanine two hours before neutron irradiation at the Kyoto University Research Reactor Institute in Kumatori, Japan. Effectiveness was determined using radiographic evidence of tumor shrinkage, overall survival (OS) after initial diagnosis, OS after BNCT, and radiographic patterns associated with treatment failure.
The median OS after BNCT was 29.6 months and 98.4 months after diagnosis. Better responses were seen in patients with lower grade tumors. In 35 of 36 patients, there was tumor shrinkage, and the median progression-free survival (PFS) was 13.7 months. There was good local control of the patients' tumors, as evidenced by the fact that only 22.2% of them experienced local recurrence of their tumors. From these results, it was concluded that BNCT was effective in locally controlling tumor growth, shrinking tumors, and improving survival with acceptable safety in patients with therapeutically refractory HGMs.
=== Clinical studies in Finland ===
The technological and physical aspects of the Finnish BNCT program have been described in considerable detail by Savolainen et al. A team of clinicians led by Heikki Joensuu and Leena Kankaanranta and nuclear engineers led by Iro Auterinen and Hanna Koivunoro at the Helsinki University Central Hospital and VTT Technical Research Center of Finland have treated approximately 200+ patients with recurrent malignant gliomas (glioblastomas) and head and neck cancer who had undergone standard therapy, recurred, and subsequently received BNCT at the time of their recurrence using BPA as the boron delivery agent. The median time to progression in patients with gliomas was 3 months, and the overall MeST was 7 months. It is difficult to compare these results with other reported results in patients with recurrent malignant gliomas, but they are a starting point for future studies using BNCT as salvage therapy in patients with recurrent tumors. Due to a variety of reasons, including financial, no further studies have been carried out at this facility, which has been decommissioned. However, a new facility for BNCT treatment has been installed using an accelerator designed and fabricated by Neutron Therapeutics. This accelerator was specifically designed to be used in a hospital, and the BNCT treatment and clinical studies will be carried out there after dosimetric studies have been completed in 2021. Both Finnish and foreign patients are expected to be treated at the facility.
=== Clinical studies in Sweden ===
To conclude this section on treating brain tumors with BNCT using reactor neutron sources, a clinical trial that was carried out by Stenstam, Sköld, Capala and their co-workers in Studsvik, Sweden, using an epithermal neutron beam produced by the Studsvik nuclear reactor, which had greater tissue penetration properties than the thermal beams originally used in the United States and Japan, will be briefly summarized. This study differed significantly from all previous clinical trials in that the total amount of BPA administered was increased (900 mg/kg), and it was infused i.v. over 6 hours. This was based on experimental animal studies in glioma bearing rats demonstrating enhanced uptake of BPA by infiltrating tumor cells following a 6-hour infusion. The longer infusion time of the BPA was well tolerated by the 30 patients who were enrolled in this study. All were treated with 2 fields, and the average whole brain dose was 3.2–6.1 Gy (weighted), and the minimum dose to the tumor ranged from 15.4 to 54.3 Gy (w). There has been some disagreement among the Swedish investigators regarding the evaluation of the results. Based on incomplete survival data, the MeST was reported as 14.2 months and the time to tumor progression was 5.8 months. However, more careful examination of the complete survival data revealed that the MeST was 17.7 months compared to 15.5 months that has been reported for patients who received standard therapy of surgery, followed by radiotherapy (RT) and the drug temozolomide (TMZ). Furthermore, the frequency of adverse events was lower after BNCT (14%) than after radiation therapy (RT) alone (21%) and both of these were lower than those seen following RT in combination with TMZ. If this improved survival data, obtained using the higher dose of BPA and a 6-hour infusion time, can be confirmed by others, preferably in a randomized clinical trial, it could represent a significant step forward in BNCT of brain tumors, especially if combined with a photon boost.
== Clinical Studies of BNCT for extracranial tumors ==
=== Head and neck cancers ===
The single most important clinical advance over the past 15 years has been the application of BNCT to treat patients with recurrent tumors of the head and neck region who had failed all other therapy. These studies were first initiated by Kato et al. in Japan and subsequently followed by several other Japanese groups and by Kankaanranta, Joensuu, Auterinen, Koivunoro and their co-workers in Finland. All of these studies employed BPA as the boron delivery agent, usually alone but occasionally in combination with BSH. A very heterogeneous group of patients with a variety of histopathologic types of tumors have been treated, the largest number of which had recurrent squamous cell carcinomas. Kato et al. have reported on a series of 26 patients with far-advanced cancer for whom there were no further treatment options. Either BPA + BSH or BPA alone was administered by a 1 or 2 h i.v. infusion, and this was followed by BNCT using an epithermal beam. In this series, there were complete regressions in 12 cases, 10 partial regressions, and progression in 3 cases. The MST was 13.6 months, and the 6-year survival was 24%. Significant treatment-related complications ("adverse" events) included transient mucositis, alopecia and, rarely, brain necrosis and osteomyelitis.
Kankaanranta et al. have reported their results in a prospective Phase I/II study of 30 patients with inoperable, locally recurrent squamous cell carcinomas of the head and neck region. Patients received either two or, in a few instances, one BNCT treatment using BPA (400 mg/kg), administered i.v. over 2 hours, followed by neutron irradiation. Of 29 evaluated patients, there were 13 complete and 9 partial remissions, with an overall response rate of 76%. The most common adverse events were oral mucositis, oral pain, and fatigue. Based on the clinical results, it was concluded that BNCT was effective for the treatment of inoperable, previously irradiated patients with head and neck cancer. Some responses were durable, but progression was common, usually at the site of the previously recurrent tumor. As previously indicated in the section on neutron sources, all clinical studies have ended in Finland, for a variety of reasons including the economic difficulties of the two companies directly involved, VTT and Boneca. However, clinical studies using an accelerator neutron source designed and fabricated by Neutron Therapeutics and installed at the Helsinki University Hospital should be fully functional by 2022. Finally, a group in Taiwan, led by Ling-Wei Wang and his co-workers at the Taipei Veterans General Hospital, have treated 17 patients with locally recurrent head and neck cancers at the Tsing Hua Open-pool Reactor (THOR) of the National Tsing Hua University. Two-year overall survival was 47% and two-year loco-regional control was 28%. Further studies are in progress to further optimize their treatment regimen.
=== Other types of tumor ===
==== Melanoma and extramammary Paget's disease ====
Other extracranial tumors that have been treated with BNCT include malignant melanomas. The original studies were carried out in Japan by the late Yutaka Mishima and his clinical team in the Department of Dermatology at Kobe University using locally injected BPA and a thermal neutron beam. It is important to point out that it was Mishima who first used BPA as a boron delivery agent, and this approach subsequently was extended to other types of tumors based on the experimental animal studies of Coderre et al. at the Brookhaven National Laboratory. Local control was achieved in almost all patients, and some were cured of their melanomas. Patients with melanoma of the head and neck region, vulva, and extramammary Paget's disease of the genital region have been treated by Hiratsuka et al. with promising clinical results. The first clinical trial of BNCT in Argentina for the treatment of melanomas was performed in October 2003 and since then several patients with cutaneous melanomas have been treated as part of a Phase II clinical trial at the RA-6 nuclear reactor in Bariloche. The neutron beam has a mixed thermal-hyperthermal neutron spectrum that can be used to treat superficial tumors. The In-Hospital Neutron Irradiator (IHNI) in Beijing has been used to treat a small number of patients with cutaneous melanomas with a complete response of the primary lesion and no evidence of late radiation injury during a 24+-month follow-up period.
==== Colorectal cancer ====
Two patients with colon cancer, which had spread to the liver, have been treated by Zonta and his co-workers at the University of Pavia in Italy. The first was treated in 2001 and the second in mid-2003. The patients received an i.v. infusion of BPA, followed by removal of the liver (hepatectomy), which was irradiated outside of the body (extracorporeal BNCT) and then re-transplanted into the patient. The first patient did remarkably well and survived for over 4 years after treatment, but the second died within a month of cardiac complications. Clearly, this is a very challenging approach for the treatment of hepatic metastases, and it is unlikely that it will ever be widely used. Nevertheless, the good clinical results in the first patient established proof of principle. Finally, Yanagie and his colleagues at Meiji Pharmaceutical University in Japan have treated several patients with recurrent rectal cancer using BNCT. Although no long-term results have been reported, there was evidence of short-term clinical responses.
== Accelerators as Neutron Sources ==
Accelerators now are the primary source of epithermal neutrons for clinical BNCT. The first papers relating to their possible use were published in the 1980s, and, as summarized by Blue and Yanch, this topic became an active area of research in the early 2000s. However, it was the Fukushima nuclear disaster in Japan in 2011 that gave impetus to their development for clinical use. Today several accelerator-based neutron sources (ABNS) are commercially available or under development. Most existing or planned systems use either the lithium-7 reaction, 7Li(p,n)7Be, or the beryllium-9 reaction, 9Be(p,n)9B, to generate neutrons, though other nuclear reactions also have been considered. The lithium-7 reaction requires a proton accelerator with energies between 1.9 and 3.0 MeV, while the beryllium-9 reaction typically uses accelerators with energies between 5 and 30 MeV. Aside from the lower proton energy that the lithium-7 reaction requires, its main benefit is the lower energy of the neutrons produced. This in turn allows the use of smaller moderators, "cleaner" neutron beams, and reduced neutron activation. Benefits of the beryllium-9 reaction include simplified target design and disposal, long target lifetime, and lower required proton beam current.
Since the proton beams for BNCT are quite powerful (~20-100 kW), the neutron-generating target must incorporate cooling systems capable of removing the heat safely and reliably to protect the target from damage. In the case of lithium-7, this requirement is especially important due to the low melting point and chemical volatility of the target material. Liquid jets, micro-channels and rotating targets have been employed to solve this problem. Several researchers have proposed the use of liquid lithium-7 targets in which the target material doubles as the coolant. In the case of beryllium-9, "thin" targets, in which the protons come to rest and deposit much of their energy in the cooling fluid, can be employed. Target degradation due to beam exposure ("blistering") is another problem to be solved, either by using layers of materials resistant to blistering or by spreading the protons over a large target area. Since the nuclear reactions yield neutrons with energies ranging from < 100 keV to tens of MeV, a Beam Shaping Assembly (BSA) must be used to moderate, filter, reflect and collimate the neutron beam to achieve the desired epithermal energy range, neutron beam size and direction. BSAs are typically composed of a range of materials with desirable nuclear properties for each function. A well-designed BSA should maximize neutron yield per proton while minimizing fast neutron, thermal neutron and gamma contamination. It should also produce a sharply delimited and generally forward-directed beam enabling flexible positioning of the patient relative to the aperture.
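To make the quoted beam powers concrete, the sketch below multiplies an assumed proton energy by an assumed beam current; both values are plausible choices for a lithium target rather than figures from any specific machine. The product of MeV and mA gives directly the beam power in kW that the target cooling system must remove.

```python
# Order-of-magnitude estimate of the heat load on a neutron-generating target.
# Both numbers below are assumptions chosen for illustration.
energy_MeV = 2.5    # proton energy, typical of the 7Li(p,n)7Be reaction
current_mA = 20.0   # assumed proton beam current
power_kW = energy_MeV * current_mA  # 1 MeV x 1 mA of protons = 1 kW of beam power
print(f"Beam power deposited in the target: {power_kW:.0f} kW")
```

With these assumptions the target must shed tens of kilowatts from a small area, which is why liquid jets, micro-channels, and rotating targets are used.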
One key challenge for an ABNS is the duration of treatment time: depending on the neutron beam intensity, treatments can take up to an hour or more. Therefore, it is desirable to reduce the treatment time both for patient comfort during immobilization and to increase the number of patients that could be treated in a 24-hour period. Increasing the neutron beam intensity for the same proton current by adjusting the BSA is often achieved at the cost of reduced beam quality (higher levels of unwanted fast neutrons or gamma rays in the beam or poor beam collimation). Therefore, increasing the proton current delivered by ABNS BNCT systems remains a key goal of technology development programs.
The table below summarizes the existing or planned ABNS installations for clinical use (Updated November, 2024).
== Clinical Studies Using Accelerator Neutron Sources ==
=== Treatment of Recurrent Malignant Gliomas ===
The single greatest advance in moving BNCT forward clinically has been the introduction of cyclotron-based neutron sources (c-BNS) in Japan. Shin-ichi Miyatake and Shinji Kawabata have led the way with the treatment of patients with recurrent glioblastomas (GBMs). In their Phase II clinical trial, they used the Sumitomo Heavy Industries accelerator at the Osaka Medical College, Kansai BNCT Medical Center to treat a total of 24 patients. These patients ranged in age from 20 to 75 years, and all previously had received standard treatment consisting of surgery followed by chemotherapy with temozolomide (TMZ) and conventional radiation therapy. They were candidates for treatment with BNCT because their tumors had recurred and were progressing in size. They received an intravenous infusion of a proprietary formulation of 10B-enriched boronophenylalanine ("Borofalan," StellaPharma Corporation, Osaka, Japan) prior to neutron irradiation. The primary endpoint of this study was the 1-year survival rate after BNCT, which was 79.2%, and the median overall survival was 18.9 months. Based on these results, it was concluded that c-BNS BNCT was safe and resulted in increased survival of patients with recurrent gliomas. Although there was an increased risk of brain edema due to re-irradiation, this was easily controlled. As a result of this trial, the Sumitomo accelerator was approved by the Japanese regulatory authority having jurisdiction over medical devices, and further studies are being carried out with patients who have recurrent, high-grade (malignant) meningiomas. However, further studies for the treatment of patients with GBMs have been put on hold pending additional analysis of the results.
=== Treatment of Recurrent or Locally Advanced Cancers of the Head and Neck ===
Katsumi Hirose and his co-workers at the Southern Tohoku BNCT Research Center in Koriyama, Japan, recently have reported on their results after treating 21 patients with recurrent tumors of the head and neck region. All of these patients had received surgery, chemotherapy, and conventional radiation therapy. Eight of them had recurrent squamous cell carcinomas (R-SCC), and 13 had either recurrent (R) or locally advanced (LA) non-squamous cell carcinomas (nSCC). The overall response rate was 71%, and the complete response and partial response rates were 50% and 25%, respectively, for patients with R-SCC and 80% and 62%, respectively, for those with R or LA nSCC. The overall 2-year survival rates for patients with R-SCC or R/LA nSCC were 58% and 100%, respectively. The treatment was well tolerated, and adverse events were those usually associated with conventional radiation treatment of these tumors. These patients had received a proprietary formulation of 10B-enriched boronophenylalanine (Borofalan), which was administered intravenously. Although the manufacturer of the accelerator was not identified, it presumably was the one manufactured by Sumitomo Heavy Industries, Ltd., which was indicated in the Acknowledgements of their report. Based on this Phase II clinical trial, the authors suggested that BNCT using Borofalan and c-BNS was a promising treatment for recurrent head and neck cancers, although further studies would be required to firmly establish this.
== The Future ==
Clinical BNCT first was used to treat highly malignant brain tumors and subsequently for melanomas of the skin that were difficult to treat by surgery. Later, it was used as a type of "salvage" therapy for patients with recurrent tumors of the head and neck region. The clinical results were sufficiently promising to lead to the development of accelerator neutron sources, which will be used almost exclusively in the future. Challenges for the future clinical success of BNCT that need to be met include the following:
Optimizing the dosing and delivery paradigms and administration of BPA and BSH.
The development of more tumor-selective boron delivery agents for BNCT and their evaluation in large animals and ultimately in humans.
Accurate, real time dosimetry to better estimate the radiation doses delivered to the tumor and normal tissues in patients with brain tumors and head and neck cancer.
Further clinical evaluation of accelerator-based neutron sources for the treatment of brain tumors, head and neck cancer, and other malignancies.
Reducing the cost.
== See also ==
Particle therapy, Neutrons, protons, or heavy ions (e.g. carbon)
Fast neutron therapy
Proton therapy
== References ==
== External links ==
Boron and Gadolinium Neutron Capture Therapy for Cancer Treatment
Destroying Cancer with Boron and Neutrons - Medical Frontiers - NHK February 21, 2022 | Wikipedia/Neutron_capture_therapy_of_cancer |
An autoradiograph is an image on an X-ray film or nuclear emulsion produced by the pattern of decay emissions (e.g., beta particles or gamma rays) from a distribution of a radioactive substance. Alternatively, the autoradiograph is also available as a digital image (digital autoradiography), due to the recent development of scintillation gas detectors or rare-earth phosphorimaging systems. The film or emulsion is apposed to the labeled tissue section to obtain the autoradiograph (also called an autoradiogram). The auto- prefix indicates that the radioactive substance is within the sample, as distinguished from the case of historadiography or microradiography, in which the sample is marked using an external source. Some autoradiographs can be examined microscopically for localization of silver grains (such as on the interiors or exteriors of cells or organelles) in which the process is termed micro-autoradiography. For example, micro-autoradiography was used to examine whether atrazine was being metabolized by the hornwort plant or by epiphytic microorganisms in the biofilm layer surrounding the plant.
== Applications ==
In biology, this technique may be used to determine the tissue (or cell) localization of a radioactive substance, either introduced into a metabolic pathway, bound to a receptor or enzyme, or hybridized to a nucleic acid. Applications for autoradiography are broad, ranging from biomedical to environmental sciences to industry.
=== Receptor autoradiography ===
The use of radiolabeled ligands to determine the tissue distributions of receptors is termed either in vivo or in vitro receptor autoradiography if the ligand is administered into the circulation (with subsequent tissue removal and sectioning) or applied to the tissue sections, respectively. Once the receptor density is known, in vitro autoradiography can also be used to determine the anatomical distribution and affinity of a radiolabeled drug towards the receptor. In in vitro autoradiography, the radioligand is applied directly to frozen tissue sections without administration to the subject, so it cannot fully reflect the distribution, metabolism, and degradation of the ligand in the living body. However, because the target in the cryosections is widely exposed and in direct contact with the radioligand, in vitro autoradiography remains a quick and easy method for screening drug candidates and PET and SPECT ligands. The ligands are generally labeled with 3H (tritium), 18F (fluorine), 11C (carbon) or 125I (radioiodine). Compared with in vitro autoradiography, ex vivo autoradiography is performed after administration of the radioligand to the body, which reduces artifacts and more closely reflects the internal environment.
The distribution of RNA transcripts in tissue sections by the use of radiolabeled, complementary oligonucleotides or ribonucleic acids ("riboprobes") is called in situ hybridization histochemistry. Radioactive precursors of DNA and RNA, [3H]-thymidine and [3H]-uridine respectively, may be introduced to living cells to determine the timing of several phases of the cell cycle. RNA or DNA viral sequences can also be located in this fashion. These probes are usually labeled with 32P, 33P, or 35S. In the realm of behavioral endocrinology, autoradiography can be used to determine hormonal uptake and indicate receptor location; an animal can be injected with a radiolabeled hormone, or the study can be conducted in vitro.
=== Rate of DNA replication ===
The rate of DNA replication in a mouse cell growing in vitro was measured by autoradiography as 33 nucleotides per second. The rate of phage T4 DNA elongation in phage-infected E. coli was also measured by autoradiography as 749 nucleotides per second during the period of exponential DNA increase at 37 °C (99 °F).
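For a sense of scale, the sketch below applies the phage T4 elongation rate quoted above to an approximate T4 genome length; the genome size and the number of simultaneously active replication forks are assumptions made only for illustration.

```python
# Back-of-the-envelope use of the measured T4 elongation rate quoted above.
rate_nt_per_s = 749        # nucleotides per second per fork (from the text)
genome_nt = 169_000        # approximate T4 genome length in base pairs (assumption)
forks = 2                  # assumed number of replication forks working in parallel

seconds = genome_nt / (rate_nt_per_s * forks)
print(f"Roughly {seconds/60:.0f} minutes to copy the genome under these assumptions")
```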
=== Detection of protein phosphorylation ===
Phosphorylation means the posttranslational addition of a phosphate group to specific amino acids of proteins, and such modification can lead to a drastic change in the stability or the function of a protein in the cell. Protein phosphorylation can be detected on an autoradiograph after incubating the protein in vitro with the appropriate kinase and γ-32P-ATP. The radiolabeled phosphate of the latter is incorporated into the protein, which is isolated via SDS-PAGE and visualized on an autoradiograph of the gel. (See figure 3 of a recent study showing that CREB-binding protein is phosphorylated by HIPK2.)
=== Detection of sugar movement in plant tissue ===
In plant physiology, autoradiography can be used to determine sugar accumulation in leaf tissue. Sugar accumulation, as visualized by autoradiography, can indicate the phloem-loading strategy used in a plant. For example, if sugars accumulate in the minor veins of a leaf, it is expected that the leaves have few plasmodesmatal connections, which is indicative of apoplastic movement, or an active phloem-loading strategy. Sugars, such as sucrose, fructose, or mannitol, are radiolabeled with [14C], and then absorbed into leaf tissue by simple diffusion. The leaf tissue is then exposed to autoradiographic film (or emulsion) to produce an image. Images will show distinct vein patterns if sugar accumulation is concentrated in leaf veins (apoplastic movement), or images will show a static-like pattern if sugar accumulation is uniform throughout the leaf (symplastic movement).
=== Other techniques ===
This autoradiographic approach contrasts with techniques such as PET and SPECT, where the exact 3-dimensional localization of the radiation source is provided by careful use of coincidence counting, gamma counters and other devices.
Krypton-85 is used to inspect aircraft components for small defects. Krypton-85 is allowed to penetrate small cracks, and then its presence is detected by autoradiography. The method is called "krypton gas penetrant imaging". The gas penetrates smaller openings than the liquids used in dye penetrant inspection and fluorescent penetrant inspection.
== Historical events ==
The task of radioactive decontamination following the Baker nuclear test at Bikini Atoll during Operation Crossroads in 1946 was far more difficult than the U.S. Navy had prepared for. Though the task's futility became apparent and the danger to cleanup crews mounted, Colonel Stafford Warren, in charge of radiation safety, had difficulty persuading Vice Admiral William H. P. Blandy to abandon the cleanup and with it the surviving target ships. On August 10, Warren showed Blandy an autoradiograph made by a surgeonfish from the lagoon that was left on a photographic plate overnight. The film was exposed by alpha radiation produced from the fish's scales, evidence that plutonium, mimicking calcium, had been distributed throughout the fish. Blandy promptly ordered that all further decontamination work be discontinued. Warren wrote home, "A self X ray of a fish ... did the trick."
== References ==
=== General references ===
Original publication by sole inventor
Askins, Barbara S. (1 November 1976). "Photographic image intensification by autoradiography". Applied Optics. 15 (11): 2860–2865. Bibcode:1976ApOpt..15.2860A. doi:10.1364/ao.15.002860.
=== Inline citations ===
== Further reading ==
Rogers, Andrew W (1979). Techniques of Autoradiography (3rd ed.). New York: Elsevier North Holland. ISBN 978-0-444-80063-3.
"Patent US4101780 Treating silver with a radioactive sulfur compound such as thiourea or derivatives". Google Patents. Retrieved 26 June 2014. | Wikipedia/Autoradiograph |
Single-photon emission computed tomography (SPECT, or less commonly, SPET) is a nuclear medicine tomographic imaging technique using gamma rays. It is very similar to conventional nuclear medicine planar imaging using a gamma camera (that is, scintigraphy), but is able to provide true 3D information. This information is typically presented as cross-sectional slices through the patient, but can be freely reformatted or manipulated as required.
The technique needs delivery of a gamma-emitting radioisotope (a radionuclide) into the patient, normally through injection into the bloodstream. On occasion, the radioisotope is a simple soluble dissolved ion, such as an isotope of gallium(III). Usually, however, a marker radioisotope is attached to a specific ligand to create a radioligand, whose properties bind it to certain types of tissues. This marriage allows the combination of ligand and radiopharmaceutical to be carried and bound to a place of interest in the body, where the ligand concentration is seen by a gamma camera.
== Principles ==
Instead of just "taking a picture of anatomical structures", a SPECT scan monitors level of biological activity at each place in the 3-D region analyzed. Emissions from the radionuclide indicate amounts of blood flow in the capillaries of the imaged regions. In the same way that a plain X-ray is a 2-dimensional (2-D) view of a 3-dimensional structure, the image obtained by a gamma camera is a 2-D view of 3-D distribution of a radionuclide.
SPECT imaging is performed by using a gamma camera to acquire multiple 2-D images (also called projections), from multiple angles. A computer is then used to apply a tomographic reconstruction algorithm to the multiple projections, yielding a 3-D data set. This data set may then be manipulated to show thin slices along any chosen axis of the body, similar to those obtained from other tomographic techniques, such as magnetic resonance imaging (MRI), X-ray computed tomography (X-ray CT), and positron emission tomography (PET).
SPECT is similar to PET in its use of radioactive tracer material and detection of gamma rays. In contrast with PET, the tracers used in SPECT emit gamma radiation that is measured directly, whereas PET tracers emit positrons that annihilate with electrons up to a few millimeters away, causing two gamma photons to be emitted in opposite directions. A PET scanner detects these emissions "coincident" in time, which provides more radiation event localization information and, thus, higher spatial resolution images than SPECT (which has about 1 cm resolution). SPECT scans are significantly less expensive than PET scans, in part because they are able to use longer-lived and more easily obtained radioisotopes than PET.
Because SPECT acquisition is very similar to planar gamma camera imaging, the same radiopharmaceuticals may be used. If a patient is examined in another type of nuclear medicine scan, but the images are non-diagnostic, it may be possible to proceed straight to SPECT by moving the patient to a SPECT instrument, or even by simply reconfiguring the camera for SPECT image acquisition while the patient remains on the table.
To acquire SPECT images, the gamma camera is rotated around the patient. Projections are acquired at defined points during the rotation, typically every 3–6 degrees. In most cases, a full 360-degree rotation is used to obtain an optimal reconstruction. The time taken to obtain each projection is also variable, but 15–20 seconds is typical. This gives a total scan time of 15–20 minutes.
Multi-headed gamma cameras can accelerate acquisition. For example, a dual-headed camera can be used with heads spaced 180 degrees apart, allowing two projections to be acquired simultaneously, with each head requiring 180 degrees of rotation. Triple-head cameras with 120-degree spacing are also used.
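The timing figures above combine in a straightforward way; the sketch below works through one plausible combination, where the angular step, time per projection, and head count are assumed mid-range values rather than fixed protocol parameters.

```python
# Rough SPECT acquisition-time estimate from typical values quoted above.
step_deg = 6                 # assumed angular step (3-6 degrees is typical)
time_per_projection_s = 18   # assumed time per projection (15-20 s is typical)

n_projections = 360 // step_deg
single_head_min = n_projections * time_per_projection_s / 60
dual_head_min = single_head_min / 2  # two heads acquire two projections at once
print(f"{n_projections} projections: about {single_head_min:.0f} min with one head, "
      f"{dual_head_min:.0f} min with a dual-head camera")
```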
Cardiac gated acquisitions are possible with SPECT, just as with planar imaging techniques such as multi gated acquisition scan (MUGA). Triggered by electrocardiogram (EKG) to obtain differential information about the heart in various parts of its cycle, gated myocardial SPECT can be used to obtain quantitative information about myocardial perfusion, thickness, and contractility of the myocardium during various parts of the cardiac cycle, and also to allow calculation of left ventricular ejection fraction, stroke volume, and cardiac output.
== Application ==
SPECT can be used to complement any gamma imaging study, where a true 3D representation can be helpful, such as tumor imaging, infection (leukocyte) imaging, thyroid imaging or bone scintigraphy.
Because SPECT permits accurate localisation in 3D space, it can be used to provide information about localised function in internal organs, such as functional cardiac or brain imaging.
=== Myocardial perfusion imaging ===
Myocardial perfusion imaging (MPI) is a form of functional cardiac imaging, used for the diagnosis of ischemic heart disease. The underlying principle is that under conditions of stress, diseased myocardium receives less blood flow than normal myocardium. MPI is one of several types of cardiac stress test.
A cardiac specific radiopharmaceutical is administered, e.g., 99mTc-tetrofosmin (Myoview, GE healthcare), 99mTc-sestamibi (Cardiolite, Bristol-Myers Squibb) or Thallium-201 chloride. Following this, the heart rate is raised to induce myocardial stress, either by exercise on a treadmill or pharmacologically with adenosine, dobutamine, or dipyridamole (aminophylline can be used to reverse the effects of dipyridamole).
SPECT imaging performed after stress reveals the distribution of the radiopharmaceutical, and therefore the relative blood flow to the different regions of the myocardium. Diagnosis is made by comparing stress images to a further set of images obtained at rest which are normally acquired prior to the stress images.
MPI has been demonstrated to have an overall accuracy of about 83% (sensitivity: 85%; specificity: 72%) (in a review, not exclusively of SPECT MPI), and is comparable with (or better than) other non-invasive tests for ischemic heart disease.
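Overall accuracy is not a fixed property of the test: it follows from sensitivity, specificity, and the prevalence of disease in the tested population. The sketch below uses an assumed prevalence that makes the quoted figures mutually consistent; the prevalence value is an illustrative assumption, not a number from the cited review.

```python
# How overall accuracy follows from sensitivity, specificity, and prevalence.
sensitivity = 0.85
specificity = 0.72
prevalence = 0.85   # assumed fraction of tested patients who truly have disease

accuracy = sensitivity * prevalence + specificity * (1 - prevalence)
print(f"Overall accuracy: {accuracy:.0%}")  # ~83% with these assumptions
```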
=== Functional brain imaging ===
Usually, the gamma-emitting tracer used in functional brain imaging is Technetium (99mTc) exametazime. 99mTc is a metastable nuclear isomer that emits gamma rays detectable by a gamma camera. Attaching it to exametazime allows it to be taken up by brain tissue in a manner proportional to brain blood flow, in turn allowing cerebral blood flow to be assessed with the nuclear gamma camera.
Because blood flow in the brain is tightly coupled to local brain metabolism and energy use, the 99mTc-exametazime tracer (as well as the similar 99mTc-EC tracer) is used to assess brain metabolism regionally, in an attempt to diagnose and differentiate the different causal pathologies of dementia. Meta-analysis of many reported studies suggests that SPECT with this tracer is about 74% sensitive at diagnosing Alzheimer's disease vs. 81% sensitivity for clinical exam (cognitive testing, etc.). More recent studies have shown the accuracy of SPECT in Alzheimer's diagnosis may be as high as 88%. In meta analysis, SPECT was superior to clinical exam and clinical criteria (91% vs. 70%) in being able to differentiate Alzheimer's disease from vascular dementias. This latter ability relates to SPECT's imaging of local metabolism of the brain, in which the patchy loss of cortical metabolism seen in multiple strokes differs clearly from the more even or "smooth" loss of non-occipital cortical brain function typical of Alzheimer's disease. Another recent review article showed that multi-headed SPECT cameras with quantitative analysis result in an overall sensitivity of 84-89% and an overall specificity of 83-89% in cross sectional studies and sensitivity of 82-96% and specificity of 83-89% for longitudinal studies of dementia.
99mTc-exametazime SPECT scanning competes with fludeoxyglucose (FDG) PET scanning of the brain, which works to assess regional brain glucose metabolism, to provide very similar information about local brain damage from many processes. SPECT is more widely available, because the radioisotope used is longer-lasting and far less expensive in SPECT, and the gamma scanning equipment is less expensive as well. While 99mTc is extracted from relatively simple technetium-99m generators, which are delivered to hospitals and scanning centers weekly to supply fresh radioisotope, FDG PET relies on FDG, which is made in an expensive medical cyclotron and "hot-lab" (automated chemistry lab for radiopharmaceutical manufacture), and then delivered immediately to scanning sites because of the natural short 110-minute half-life of Fluorine-18.
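The logistical point about fluorine-18 can be made quantitative with simple exponential decay; the transport time below is an assumed value chosen only to illustrate the effect of the 110-minute half-life.

```python
# Fraction of F-18 (FDG) activity remaining after an assumed delivery delay.
half_life_min = 110.0   # F-18 half-life quoted above
delay_min = 90.0        # assumed transport and handling time

remaining = 0.5 ** (delay_min / half_life_min)
print(f"Activity remaining after {delay_min:.0f} min: {remaining:.0%}")
```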
=== Applications in nuclear technology ===
In the nuclear power sector, the SPECT technique can be applied to image radioisotope distributions in irradiated nuclear fuels. Due to the irradiation of nuclear fuel (e.g. uranium) with neutrons in a nuclear reactor, a wide array of gamma-emitting radionuclides are naturally produced in the fuel, such as fission products (cesium-137, barium-140 and europium-154) and activation products (chromium-51 and cobalt-58). These may be imaged using SPECT in order to verify the presence of fuel rods in a stored fuel assembly for IAEA safeguards purposes, to validate predictions of core simulation codes, or to study the behavior of the nuclear fuel in normal operation or in accident scenarios.
== Reconstruction ==
Reconstructed images typically have resolutions of 64×64 or 128×128 pixels, with pixel sizes ranging from 3–6 mm. The number of projections acquired is chosen to be approximately equal to the width of the resulting images. In general, the resulting reconstructed images will have lower resolution and increased noise compared to planar images, and will be more susceptible to artifacts.
Scanning is time-consuming, and it is essential that there is no patient movement during the scan time. Movement can cause significant degradation of the reconstructed images, although movement compensation reconstruction techniques can help with this. A highly uneven distribution of radiopharmaceutical also has the potential to cause artifacts. A very intense area of activity (e.g., the bladder) can cause extensive streaking of the images and obscure neighboring areas of activity. This is a limitation of the filtered back projection reconstruction algorithm. Iterative reconstruction is an alternative algorithm that is growing in importance, as it is less sensitive to artifacts and can also correct for attenuation and depth dependent blurring. Furthermore, iterative algorithms can be made more efficacious using the Superiorization methodology.
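To illustrate what an iterative reconstruction does, the sketch below runs a toy maximum-likelihood expectation-maximization (MLEM) loop on a made-up system matrix and Poisson-noisy projection data. The problem sizes, the random system matrix, and the iteration count are arbitrary assumptions; a clinical reconstruction would model the real scanner geometry, attenuation, and scatter.

```python
import numpy as np

# Toy MLEM reconstruction: A maps image voxels to detector bins.
rng = np.random.default_rng(0)
n_vox, n_bins = 16, 24
A = rng.random((n_bins, n_vox))        # hypothetical system matrix
x_true = 10.0 * rng.random(n_vox)      # hypothetical activity distribution
y = rng.poisson(A @ x_true)            # noisy projection data (Poisson counts)

x = np.ones(n_vox)                     # uniform initial estimate
sens = A.sum(axis=0)                   # sensitivity image (column sums of A)
for _ in range(50):
    proj = A @ x                                                   # forward projection
    ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
    x *= (A.T @ ratio) / sens                                      # multiplicative MLEM update

print(np.round(x, 2))                  # estimate of the activity in each voxel
```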
Attenuation of the gamma rays within the patient can lead to significant underestimation of activity in deep tissues, compared to superficial tissues. Approximate correction is possible, based on relative position of the activity, and optimal correction is obtained with measured attenuation values. Modern SPECT equipment is available with an integrated X-ray CT scanner. As X-ray CT images are an attenuation map of the tissues, this data can be incorporated into the SPECT reconstruction to correct for attenuation. It also provides a precisely registered CT image, which can provide additional anatomical information.
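A minimal sketch of the attenuation effect: the fraction of photons that survive a path through tissue falls exponentially with depth, and the reconstruction must boost deep counts by the inverse of that fraction. The attenuation coefficient and depth below are assumed round numbers for roughly 140 keV photons in soft tissue.

```python
import math

# Exponential attenuation of a photon travelling through tissue to the detector.
mu_per_cm = 0.15   # assumed linear attenuation coefficient for ~140 keV in soft tissue
depth_cm = 10.0    # assumed depth of the emitting voxel along the ray

surviving = math.exp(-mu_per_cm * depth_cm)
correction = 1.0 / surviving
print(f"{surviving:.0%} of photons escape; correction factor ~{correction:.1f}")
```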
Scatter of the gamma rays as well as the random nature of gamma rays can also lead to the degradation of quality of SPECT images and cause loss of resolution. Scatter correction and resolution recovery are also applied to improve resolution of SPECT images.
== Typical SPECT acquisition protocols ==
== SPECT/CT ==
In some cases a SPECT gamma scanner may be built to operate with a conventional CT scanner, with coregistration of images. As in PET/CT, this allows location of tumors or tissues which may be seen on SPECT scintigraphy, but are difficult to locate precisely with regard to other anatomical structures. Such scans are most useful for tissues outside the brain, where location of tissues may be far more variable. For example, SPECT/CT may be used in sestamibi parathyroid scan applications, where the technique is useful in locating ectopic parathyroid adenomas which may not be in their usual locations in the thyroid gland.
== Quality control ==
The overall performance of SPECT systems can be assessed with quality control tools such as the Jaszczak phantom.
== See also ==
== References ==
Cerqueira M. D., Jacobson A. F. (1989). "Assessment of myocardial viability with SPECT and PET imaging". American Journal of Roentgenology. 153 (3): 477–483. doi:10.2214/ajr.153.3.477. PMID 2669461.
== Further reading ==
Bruyant, P. P. (2002). "Analytic and iterative reconstruction algorithms in SPECT". Journal of Nuclear Medicine 43(10):1343-1358.
Elhendy et al., "Dobutamine Stress Myocardial Perfusion Imaging in Coronary Artery Disease", J Nucl Med 2002 43: 1634–1646.
Frankle W. Gordon (2005). "Neuroreceptor Imaging in Psychiatry: Theory and Applications". International Review of Neurobiology. 67: 385–440. doi:10.1016/S0074-7742(05)67011-0. ISBN 9780123668684. PMID 16291028.
Herman, Gabor T. (2009). Fundamentals of Computerized Tomography: Image Reconstruction from Projections (2nd ed.). Springer. ISBN 978-1-85233-617-2.
Jones / Hogg / Seeram (2013). Practical SPECT/CT in Nuclear Medicine. ISBN 978-1447147022.
Willowson K, Bailey DL, Baldock C, 2008. "Quantitative SPECT reconstruction using CT-derived corrections". Phys. Med. Biol. 53 3099–3112.
== External links ==
Human Health Campus, The official website of the International Atomic Energy Agency dedicated to Professionals in Radiation Medicine. This site is managed by the Division of Human Health, Department of Nuclear Sciences and Applications
National Isotope Development Center Reference information on radioisotopes including those for SPECT; coordination and management of isotope production, availability, and distribution
Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program for isotope production and production research and development | Wikipedia/Single-photon_emission_computed_tomography |
The Organic Moderated Reactor Experiment (OMRE) was a 16 MWt experimental organic nuclear reactor that operated at the National Reactor Testing Station from 1957 to 1963 to explore the use of hydrocarbons as coolant, moderator, and reflector materials in power reactor conditions. Such organic fluids are non-corrosive, do not become highly activated under irradiation, and can operate at low pressure and moderate temperature. These characteristics were considered promising towards the goal of achieving economical commercial nuclear power.
The information provided by OMRE established the credibility of the Organic nuclear reactor concept and led to the commercial demonstration at the Piqua Nuclear Generating Station. More recently, OMRE has been cited as providing key input and motivation for modern designs of such systems, aiming to help improve performance of new and advanced nuclear power plants towards the goals of climate change mitigation.
== Design ==
The OMRE design efforts began in July 1955. It was originally intended to operate for 1 year.
The objectives of the OMRE program were to obtain the following experimental information: 9 :
Rate of radiation and thermal neutron damage to the hydrocarbon in the reactor
Effect of this damage upon the operation of the reactor
Suitable methods for ensuring satisfactory reactor operation in the presence of damaged hydrocarbon
It was neither a pilot plant nor a prototype, but rather a minimum-cost experimental facility designed to investigate the feasibility of the organic concept to power reactors. It did not have an electric power conversion system.
OMRE was designed to provide operational information on the response of diphenyl to high nuclear radiation and thermal neutron flux, with flexibility to test other polyphenyls such as terphenyl.
The design criteria stated included:
Maximum fuel surface temperature between 750 °F (400 °C) and 800 °F (430 °C)
Bulk coolant temperature between 500 °F (260 °C) and 700 °F (370 °C)
Coolant velocity in fuel plates up to 15 ft/s (4.6 m/s)
Heat rejection capacity of 16 MWt
25 fuel elements representing a total of 20.6 kg U235
Fuel burnup of 11.2% U235
Average thermal neutron flux in fuel of 2 × 10^13 n/cm2/s at 500 °F (260 °C)
Reactor system pressure of 300 psi (21 bar)
The fuel element was a stainless steel box in which 16 active fuel plates were held in longitudinal grooves. Each fuel plate consisted of a core of highly enriched uranium particles uniformly dispersed in a stainless-steel matrix, clad with 304 stainless steel and rolled into a 0.030 in (0.76 mm) thick, 2.760 in (70 mm) wide and 37 in (940 mm) fuel plate. The dimensions of the rectangular reactor core were 57 centimeters by 69 centimeters by 91 centimeters.
The reactor vessel was filled with diphenyl to obtain 14 feet of radiation shielding above the reactor core at 250 °F (120 °C). It could be pressurized up to 300 psi with inert nitrogen, nominally 200 psi (14 bar), to prevent boiling of the hydrocarbon. The nitrogen was continuously purged from the system to sweep out any hydrogen and light hydrocarbon gases, like methane or ethane, produced by the decomposition of the coolant-moderator due to pyrolysis and radiolysis, and discharge them out the stack.
Coolant was pumped at 7,200 US gal/min (27 cubic metres per minute) through an air-blast heat exchanger to dump the core heat to the atmosphere. A steam system and power conversion system were not used to simplify the construction and operation of the reactor experiment.
At high temperature and under irradiation, the hydrocarbons decompose and form longer chains with increasing molecular weight. This gradually degrades the heat transfer and flow characteristics of the fluid. To mitigate this, a coolant-moderator purification ran continuously to remove any hydrocarbons that had been damaged by heat or radiation. This was accomplished with a low-pressure distillation system.
All systems were constructed with carbon steel, except the reactor vessel. All systems had heaters (including induction heating, resistance heating, and an oil-fired heater on the air-blast heat exchangers) to bring the system above the melting temperature of the coolant-moderator.
== Construction ==
Construction of OMRE began on June 17, 1956, and completed in May 1957. The reactor containment was partially built underground and consisted of a concrete pad and corrugated steel cylinder surrounded by compacted earth for radiation shielding.: 86
Clearing, grading, roads, walks, drainage, water supply, power substation, sanitary and process waste systems, fencing, security lighting, guard station, communications system, control and processing building, and reactor foundation excavation were performed in Phase I of the construction by the Idaho Operations Office and the Atomic Energy Commission. Some delays were encountered due to appropriations delays and a steel strike.
The biggest setback was unsatisfactory performance of the control-rod drive mechanism. During testing, it became apparent that the original design would not work, and a new approach was needed.
Process piping was constructed of Schedule 40 carbon steel.
The buildings and utilities were constructed by Wadsworth & Arrington.
== Operation ==
The OMRE first achieved criticality on September 17, 1957, and reached full power at the beginning of February, 1958. The reactor operated in two modes: without the purification system, and with the purification system. Seventeen tests were run with the first OMRE core throughout 1958 with reactor power between 0 and 12 MWt.
The first three tests were system check-out tests, covering all major systems. Subsequent tests simulated the conditions expected to be encountered in the Piqua Nuclear Generating Station. Test 4 demonstrated that the pyrolytic decomposition rate in external piping was negligible. Tests 5-11 measured the decomposition rate and the effect of radiation damage on coolant-moderator heat-transfer characteristics. Tests 12 and 13 tested the purification system's ability to reduce the concentration of inorganic particulate matter while also reducing the high-boiler concentration from 40% to 8%.
Three fuel element failures occurred during first core operation. Two occurred in experimental low-enriched assemblies with finned aluminum cladding due to inadequate coolant filtration, and the third was caused by improper element seating.
By the end of the first year, the core had generated 958 MW-day of energy and been in operation for 5,600 hours. An extended shutdown followed to replace the core.
Problems with coolant purification complicated the operation of the OMRE reactor. The polymerization of the terphenyl coolant (Santowax OM, subsequently Santowax R) led to fouling and blockage of coolant channels and to the installation of an on-line coolant purification system. These complications and the progress of water-cooled nuclear reactor technology led to the decision of the US Atomic Energy Commission to reduce the American organic nuclear reactor program on December 10, 1962, and ultimately to shut down OMRE on June 30, 1963. The Experimental Organic Cooled Reactor (EOCR) was built next to OMRE in anticipation of further development of the concept. During the final stages of its construction, EOCR was also placed in standby and never operated.
== Decommissioning ==
Immediately following final OMRE shutdown, the nuclear fuel and reactor vessel internals were removed, and the organic coolant Santowax R (a commercial name of a mixture of terphenyl and diphenyl isomers) was drained from all the systems and remained in this deactivated condition until 1977.
The facility was eventually decontaminated and decommissioned between October 1977 and September 1979. The process was complicated by the existence of some remaining toxic and flammable Santowax-R and xylene, a neutron-activated radioactive vessel emitting 350 R/h, and asbestos insulation. Furthermore, due to insufficient neutron shielding being included in the design, "an extraordinary, unexpected amount of activated rock and soil was removed.": ii
The surface radiation of the excavation and backfill material was brought to 20 R/h or less, and the nuclide content of the backfill soil was brought below 0.5 pCi/g.
The decommissioning effort was initially estimated in 1977 to cost $700,000 (equivalent to $3,600,000 in 2024) and take 2 years, and was completed on time and under budget, for a total cost of $500,000 (equivalent to $2,600,000 in 2024).: 15
== References ==
== External links ==
Organic Moderated Reactor Experiment (1958 documentary film)
Organic cooled reactors: Five Fast Facts (2019 American Nuclear Society article) | Wikipedia/Organic_Moderated_Reactor_Experiment |
Scintigraphy (from Latin scintilla, "spark"), also known as a gamma scan, is a diagnostic test in nuclear medicine, where radioisotopes attached to drugs that travel to a specific organ or tissue (radiopharmaceuticals) are taken internally and the emitted gamma radiation is captured by gamma cameras, which are external detectors that form two-dimensional images in a process similar to the capture of x-ray images. In contrast, SPECT and positron emission tomography (PET) form 3-dimensional images and are therefore classified as separate techniques from scintigraphy, although they also use gamma cameras to detect internal radiation. Scintigraphy is unlike a diagnostic X-ray where external radiation is passed through the body to form an image.
== Process ==
Scintillography is an imaging method of nuclear events provoked by collisions or charged current interactions among nuclear particles or ionizing radiation and atoms which result in a brief, localised pulse of electromagnetic radiation, usually in the visible light range (Cherenkov radiation). This pulse (scintillation) is usually detected and amplified by a photomultiplier or charge-coupled device elements, and its resulting electrical waveform is processed by computers to provide two- and three-dimensional images of a subject or region of interest.
Scintillography is mainly used in scintillation cameras in experimental physics. For example, huge neutrino detection underground tanks filled with tetrachloroethylene are surrounded by arrays of photo detectors in order to capture the extremely rare event of a collision between the fluid's atoms and a neutrino.
Another extensive use of scintillography is in medical imaging techniques which use gamma ray detectors called gamma cameras. Detectors coated with materials which scintillate when subjected to gamma rays are scanned with optical photon detectors and scintillation counters. The subjects are injected with special radionuclides which irradiate in the gamma range inside the region of interest, such as the heart or the brain. A special type of gamma camera is the SPECT (Single Photon Emission Computed Tomography) camera. Another medical scintillography technique is positron emission tomography (PET), which uses the scintillations provoked by electron-positron annihilation phenomena.
== By organ or organ system ==
=== Biliary system (cholescintigraphy) ===
Scintigraphy of the biliary system is called cholescintigraphy and is done to diagnose obstruction of the bile ducts by a gallstone (cholelithiasis), a tumor, or another cause. It can also diagnose gallbladder diseases, e.g. bile leaks of biliary fistulas. In cholescintigraphy, the injected radioactive chemical is taken up by the liver and secreted into the bile. The radiopharmaceutical then goes into the bile ducts, the gallbladder, and the intestines. The gamma camera is placed on the abdomen to picture these perfused organs. Other scintigraphic tests are done similarly.
=== Lung scintigraphy ===
The most common indication for lung scintigraphy is to diagnose pulmonary embolism, e.g. with a ventilation/perfusion scan, and it may be appropriate for excluding PE in pregnancy. Less common indications include evaluation of lung transplantation, preoperative evaluation, and evaluation of right-to-left shunts.
In the ventilation phase of a ventilation/perfusion scan, a gaseous radionuclide xenon or technetium DTPA in an aerosol form (or ideally using Technegas, a radioaerosol invented in Australia by Dr Bill Burch and Dr Richard Fawdry) is inhaled by the patient through a mouthpiece or mask that covers the nose and mouth. The perfusion phase of the test involves the intravenous injection of radioactive technetium macro aggregated albumin (Tc99m-MAA). A gamma camera acquires the images for both phases of the study.
=== Bone ===
For example, the ligand methylene-diphosphonate (MDP) can be preferentially taken up by bone. By chemically attaching technetium-99m to MDP, radioactivity can be transported and attached to bone via the hydroxyapatite for imaging. Any increased physiological function, such as a fracture in the bone, will usually mean increased concentration of the tracer.
=== Heart ===
A thallium stress test is a form of scintigraphy, where the amount of thallium-201 detected in cardiac tissues correlates with tissue blood supply. Viable cardiac cells have normal Na+/K+ ion exchange pumps. Thallium binds the K+ pumps and is transported into the cells. Exercise or dipyridamole induces widening (vasodilation) of normal coronary arteries. This produces coronary steal from areas of ischemia where arteries are already maximally dilated. Areas of infarct or ischemic tissue will remain "cold". Pre- and post-stress thallium may indicate areas that will benefit from myocardial revascularization. Redistribution indicates the existence of coronary steal and the presence of ischemic coronary artery disease.
=== Parathyroid ===
Tc99m-sestamibi is used to detect parathyroid adenomas.
=== Thyroid ===
To detect metastases/function of thyroid, the isotopes technetium-99m or iodine-123 are generally used, and for this purpose the iodide isotope does not need to be attached to another protein or molecule, because thyroid tissue takes up free iodide actively.
=== Renal and urinary systems ===
=== Full body ===
Examples are gallium scans, indium white blood cell scans, iobenguane scan (MIBG) and octreotide scans. The MIBG scan detects adrenergic tissue and thus can be used to identify the location of tumors such as pheochromocytomas and neuroblastomas.
== Function tests ==
Certain tests, such as the Schilling test and urea breath test, use radioisotopes but are not used to produce a specific image.
== History ==
Scintigraphic scanning was invented and proven by the neurologist and radiologist professor Bernard George Ziedses des Plantes. He presented the results in 1950 under the name 'indirect autoradiograph'. In 1970, the Physikalisch-Medizinische Gesellschaft für Neuroradiologie (the Physics and Medical Society for Neuroradiology) instituted the 'Ziedses des Plantes Medal'. It was first awarded to W. Oldendorf and G. Hounsfield in 1974 for computed tomography (CT). Later, in 1985, the medal was awarded to Ziedses des Plantes himself. In 1977 he received the Roentgen Medal.
== See also ==
Gamma camera
Medical imaging
Nuclear medicine
== References ==
== External links == | Wikipedia/Scintigraphy |
Targeted alpha-particle therapy (or TAT) is an in-development method of targeted radionuclide therapy of various cancers. It employs radioactive substances which undergo alpha decay to treat diseased tissue at close proximity. It has the potential to provide highly targeted treatment, especially to microscopic tumour cells. Targets include leukemias, lymphomas, gliomas, melanoma, and peritoneal carcinomatosis. As in diagnostic nuclear medicine, appropriate radionuclides can be chemically bound to a targeting biomolecule which carries the combined radiopharmaceutical to a specific treatment point.
It has been said that "α-emitters are indispensable with regard to optimisation of strategies for tumour therapy".
== Advantages of alpha emitters ==
The primary advantage of alpha particle (α) emitters over other types of radioactive sources is their very high linear energy transfer (LET) and relative biological effectiveness (RBE). Beta particle (β) emitters such as yttrium-90 can travel considerable distances beyond the immediate tissue before depositing their energy, while alpha particles deposit their energy in 70–100 μm long tracks.
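The practical meaning of the 70-100 μm track length is easier to see when expressed in cell diameters; the cell size used below is an assumed typical value, so the result is only an order-of-magnitude illustration.

```python
# How many cell diameters an alpha-particle track spans (order of magnitude only).
track_um_min, track_um_max = 70, 100   # track lengths quoted above
cell_diameter_um = 15                  # assumed typical tumour-cell diameter

print(f"Roughly {track_um_min // cell_diameter_um}-{track_um_max // cell_diameter_um} "
      f"cell diameters per track")
```

This is why alpha emitters can deliver a lethal dose to a targeted cell and its immediate neighbours while largely sparing tissue a few cells away.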
Alpha particles are more likely than other types of radiation to cause double-strand breaks to DNA molecules, which is one of several effective causes of cell death.
== Production ==
Some α emitting isotopes such as 225Ac and 213Bi are only available in limited quantities from 229Th decay, although cyclotron production is feasible. Among alpha-emitting radiometals according to availability, chelation chemistry, and half-life, 212Pb is also a promising candidate for targeted alpha-therapy.
The ARRONAX cyclotron can produce 211At by irradiation of 209Bi.
== Applications ==
Though many α-emitters exist, useful isotopes would have a sufficient energy to cause damage to cancer cells, and a half-life that is long enough to provide a therapeutic dose without remaining long enough to damage healthy tissue.
=== Immunotherapy ===
Several radionuclides have been studied for use in immunotherapy. Though β-emitters are more popular, in part due to their availability, trials have taken place involving 225Ac, 211At, 212Pb and 213Bi.
=== Peritoneal carcinomas ===
Treatment of peritoneal carcinomas has shown promising early results, though these are limited by the availability of α-emitters compared to β-emitters.
=== Bone metastases ===
223Ra was the first α-emitter approved by the FDA in the United States for treatment of bone metastases from prostate cancer, and is a recommended treatment in the UK by NICE. In a phase III trial comparing 223Ra to a placebo, survival was significantly improved.
=== Leukaemia ===
Early trials of 225Ac and 213Bi have shown evidence of anti-tumour activity in Leukaemia patients.
=== Melanomas ===
Phase I trials on melanomas have shown 213Bi is effective in causing tumour regression.
=== Solid tumours ===
The short path length of alpha particles in tissue, which makes them well suited to treatment of the above types of disease, is a negative when it comes to treatment of larger bodies of solid tumour by intravenous injection. Potential methods to solve this problem of delivery exist, such as direct intratumoral injection and anti-angiogenic drugs. Limited treatment experience of low grade malignant gliomas has shown possible efficacy.
== See also ==
Unsealed source radiotherapy
Selective internal radiation therapy
== References == | Wikipedia/Targeted_alpha-particle_therapy |
In fluid dynamics, turbulence modeling is the construction and use of a mathematical model to predict the effects of turbulence. Turbulent flows are commonplace in most real-life scenarios. In spite of decades of research, there is no analytical theory to predict the evolution of these turbulent flows. The equations governing turbulent flows can only be solved directly for simple cases of flow. For most real-life turbulent flows, CFD simulations use turbulence models to predict the evolution of turbulence. These turbulence models are simplified constitutive equations that predict the statistical evolution of turbulent flows.
== Closure problem ==
The Navier–Stokes equations govern the velocity and pressure of a fluid flow. In a turbulent flow, each of these quantities may be decomposed into a mean part and a fluctuating part. Averaging the equations gives the Reynolds-averaged Navier–Stokes (RANS) equations, which govern the mean flow. However, the nonlinearity of the Navier–Stokes equations means that the velocity fluctuations still appear in the RANS equations, in the nonlinear term $-\rho \overline{v_{i}'v_{j}'}$ from the convective acceleration. This term is known as the Reynolds stress, $R_{ij}$. Its effect on the mean flow is like that of a stress term, such as from pressure or viscosity.
To obtain equations containing only the mean velocity and pressure, we need to close the RANS equations by modelling the Reynolds stress term $R_{ij}$ as a function of the mean flow, removing any reference to the fluctuating part of the velocity. This is the closure problem.
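As a concrete illustration of the quantity being modelled, the sketch below estimates one Reynolds stress component from synthetic velocity samples by Reynolds decomposition, that is, by subtracting the mean and averaging the product of the fluctuations. The velocity signal, its correlation, and the density value are assumptions made purely for illustration.

```python
import numpy as np

# Estimate the Reynolds stress component -rho * <u'v'> from synthetic samples.
rng = np.random.default_rng(1)
n = 10_000
u = 10.0 + rng.normal(0.0, 1.0, n)                       # streamwise velocity samples
v = 0.4 * (u - 10.0) + rng.normal(0.0, 0.5, n)           # correlated wall-normal samples
rho = 1.2                                                # assumed fluid density, kg/m^3

u_fluct = u - u.mean()                                   # Reynolds decomposition
v_fluct = v - v.mean()
reynolds_stress = -rho * np.mean(u_fluct * v_fluct)
print(f"-rho*<u'v'> = {reynolds_stress:.3f} Pa")
```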
== Eddy viscosity ==
Joseph Valentin Boussinesq was the first to attack the closure problem, by introducing the concept of eddy viscosity. In 1877 Boussinesq proposed relating the turbulence stresses to the mean flow to close the system of equations. Here the Boussinesq hypothesis is applied to model the Reynolds stress term. Note that a new proportionality constant $\nu_{t}>0$, the (kinematic) turbulence eddy viscosity, has been introduced. Models of this type are known as eddy viscosity models (EVMs).

$$-\overline{v_{i}'v_{j}'}=\nu_{t}\left(\frac{\partial \overline{v_{i}}}{\partial x_{j}}+\frac{\partial \overline{v_{j}}}{\partial x_{i}}\right)-\frac{2}{3}k\delta_{ij}$$

which can be written in shorthand as

$$-\overline{v_{i}'v_{j}'}=2\nu_{t}S_{ij}-\tfrac{2}{3}k\delta_{ij}$$

where
$S_{ij}$ is the mean rate of strain tensor,
$\nu_{t}$ is the (kinematic) turbulence eddy viscosity,
$k=\tfrac{1}{2}\overline{v_{i}'v_{i}'}$ is the turbulence kinetic energy, and
$\delta_{ij}$ is the Kronecker delta.
In this model, the additional turbulence stresses are given by augmenting the molecular viscosity with an eddy viscosity. This can be a simple constant eddy viscosity (which works well for some free shear flows such as axisymmetric jets, 2-D jets, and mixing layers).
The Boussinesq hypothesis – although not explicitly stated by Boussinesq at the time – effectively consists of the assumption that the Reynolds stress tensor is aligned with the strain tensor of the mean flow (i.e.: that the shear stresses due to turbulence act in the same direction as the shear stresses produced by the averaged flow). It has since been found to be significantly less accurate than most practitioners would assume. Still, turbulence models which employ the Boussinesq hypothesis have demonstrated significant practical value. In cases with well-defined shear layers, this is likely due to the dominance of streamwise shear components, so that considerable relative errors in flow-normal components are still negligible in absolute terms. Beyond this, most eddy viscosity turbulence models contain coefficients which are calibrated against measurements, and thus produce reasonably accurate overall outcomes for flow fields of similar type as used for calibration.
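As a minimal numerical sketch of the closure, the code below forms the modelled Reynolds-stress tensor from an assumed mean velocity gradient, eddy viscosity, and turbulence kinetic energy, following the shorthand expression above; all numerical values are illustrative assumptions rather than results for any particular flow.

```python
import numpy as np

# Boussinesq closure: modelled Reynolds stresses from an assumed mean velocity gradient.
grad_u = np.array([[0.0, 2.0, 0.0],    # d<u_i>/dx_j in 1/s (assumed simple shear)
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
nu_t = 0.01   # eddy viscosity, m^2/s (assumed)
k = 0.5       # turbulence kinetic energy, m^2/s^2 (assumed)

S = 0.5 * (grad_u + grad_u.T)                                   # mean rate-of-strain tensor
reynolds_stress = 2.0 * nu_t * S - (2.0 / 3.0) * k * np.eye(3)  # modelled -<v_i'v_j'>
print(np.round(reynolds_stress, 4))
```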
== Prandtl's mixing-length concept ==
Later, Ludwig Prandtl introduced the additional concept of the mixing length, along with the idea of a boundary layer. For wall-bounded turbulent flows, the eddy viscosity must vary with distance from the wall, hence the addition of the concept of a 'mixing length'. In the simplest wall-bounded flow model, the eddy viscosity is given by the equation:
$$\nu_{t}=\left|\frac{\partial u}{\partial y}\right|l_{m}^{2}$$

where
$\frac{\partial u}{\partial y}$ is the partial derivative of the streamwise velocity (u) with respect to the wall normal direction (y), and
$l_{m}$ is the mixing length.
This simple model is the basis for the "law of the wall", which is a surprisingly accurate model for wall-bounded, attached (not separated) flow fields with small pressure gradients.
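A common concrete choice is to let the mixing length grow linearly with wall distance, $l_m = \kappa y$ with the von Kármán constant $\kappa \approx 0.41$. The sketch below evaluates the resulting eddy viscosity for an assumed power-law velocity profile, so both the profile and the numerical constants are illustrative assumptions.

```python
import numpy as np

# Mixing-length eddy viscosity near a wall, with l_m = kappa * y (a common choice).
kappa = 0.41
y = np.linspace(1e-4, 0.05, 200)        # wall-normal distance, m
u = 10.0 * (y / 0.05) ** (1.0 / 7.0)    # assumed 1/7th-power mean velocity profile, m/s

dudy = np.gradient(u, y)                # du/dy
l_m = kappa * y                         # mixing length
nu_t = (l_m ** 2) * np.abs(dudy)        # eddy viscosity from the formula above
print(f"nu_t at y = 1 cm: {np.interp(0.01, y, nu_t):.2e} m^2/s")
```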
More general turbulence models have evolved over time, with most modern turbulence models given by field equations similar to the Navier–Stokes equations.
== Smagorinsky model for the sub-grid scale eddy viscosity ==
Joseph Smagorinsky was the first to propose a formula for the eddy viscosity in Large Eddy Simulation models, based on the local derivatives of the velocity field and the local grid size:
$$\nu_{t}=\Delta x\,\Delta y\,\sqrt{\left(\frac{\partial u}{\partial x}\right)^{2}+\left(\frac{\partial v}{\partial y}\right)^{2}+\frac{1}{2}\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)^{2}}$$
In the context of Large Eddy Simulation, turbulence modeling refers to the need to parameterize the subgrid scale stress in terms of features of the filtered velocity field. This field is called subgrid-scale modeling.
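The sketch below evaluates the two-dimensional expression above on a uniform grid for a synthetic velocity field. The grid, the velocity field, and the omission of the usual Smagorinsky constant follow the formula exactly as written here; it is not meant as a production LES model.

```python
import numpy as np

# Smagorinsky-style subgrid eddy viscosity on a uniform 2-D grid (illustrative only).
nx, ny, dx, dy = 32, 32, 0.1, 0.1
x = np.arange(nx) * dx
y = np.arange(ny) * dy
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.sin(X) * np.cos(Y)       # synthetic resolved velocity components
v = -np.cos(X) * np.sin(Y)

dudx = np.gradient(u, dx, axis=0)
dudy = np.gradient(u, dy, axis=1)
dvdx = np.gradient(v, dx, axis=0)
dvdy = np.gradient(v, dy, axis=1)

strain_mag = np.sqrt(dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx) ** 2)
nu_t = dx * dy * strain_mag     # eddy viscosity field, as in the formula above
print(f"max nu_t: {nu_t.max():.3e}")
```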
== Spalart–Allmaras, k–ε and k–ω models ==
The Boussinesq hypothesis is employed in the Spalart–Allmaras (S–A), k–ε (k–epsilon), and k–ω (k–omega) models and offers a relatively low-cost computation for the turbulence viscosity $\nu_{t}$. The S–A model uses only one additional equation to model turbulence viscosity transport, while the k–ε and k–ω models use two.
== Common models ==
The following is a brief overview of commonly employed models in modern engineering applications.
== References ==
=== Notes ===
=== Other ===
Absi, R. (2019) "Eddy Viscosity and Velocity Profiles in Fully-Developed Turbulent Channel Flows" Fluid Dyn (2019) 54: 137. https://doi.org/10.1134/S0015462819010014
Absi, R. (2021) "Reinvestigating the Parabolic-Shaped Eddy Viscosity Profile for Free Surface Flows" Hydrology 2021, 8(3), 126. https://doi.org/10.3390/hydrology8030126
Townsend, A. A. (1980) "The Structure of Turbulent Shear Flow" 2nd Edition (Cambridge Monographs on Mechanics), ISBN 0521298199
Bradshaw, P. (1971) "An introduction to turbulence and its measurement" (Pergamon Press), ISBN 0080166210
Wilcox, C. D. (1998), "Turbulence Modeling for CFD" 2nd Ed., (DCW Industries, La Cañada), ISBN 0963605100 | Wikipedia/Turbulence_models |
In fluid mechanics, an aerodynamic force is a force exerted on a body by the air (or other gas) in which the body is immersed, and is due to the relative motion between the body and the gas.
== Force ==
There are two causes of aerodynamic force:
: §4.10 : 29
the normal force due to the pressure on the surface of the body
the shear force due to the viscosity of the gas, also known as skin friction.
Pressure acts normal to the surface, and shear force acts parallel to the surface. Both forces act locally. The net aerodynamic force on the body is equal to the pressure and shear forces integrated over the body's total exposed area.
When an airfoil moves relative to the air, it generates an aerodynamic force determined by the velocity of relative motion and the angle of attack. This aerodynamic force is commonly resolved into two components, both acting through the center of pressure:
drag is the force component parallel to the direction of relative motion,
lift is the force component perpendicular to the direction of relative motion.
In addition to these two forces, the body may experience an aerodynamic moment.
The force created by propellers and jet engines is called thrust, and is also an aerodynamic force (since it acts on the surrounding air). The aerodynamic force on a powered airplane is commonly represented by three vectors: thrust, lift and drag.
The other force acting on an aircraft during flight is its weight, which is a body force and not an aerodynamic force.
== See also ==
Fluid dynamics
== References == | Wikipedia/Aerodynamic_force |
In fluid dynamics, the drag equation is a formula used to calculate the force of drag experienced by an object due to movement through a fully enclosing fluid. The equation is:
F_d = \tfrac{1}{2}\,\rho\,u^2\,c_d\,A
where
F_d is the drag force, which is by definition the force component in the direction of the flow velocity,
ρ is the mass density of the fluid,
u is the flow velocity relative to the object,
A is the reference area, and
c_d is the drag coefficient – a dimensionless coefficient related to the object's geometry and taking into account both skin friction and form drag. If the fluid is a liquid, c_d depends on the Reynolds number; if the fluid is a gas, c_d depends on both the Reynolds number and the Mach number.
The equation is attributed to Lord Rayleigh, who originally used L^2 in place of A (with L being some linear dimension).
The reference area A is typically defined as the area of the orthographic projection of the object on a plane perpendicular to the direction of motion. For non-hollow objects with simple shape, such as a sphere, this is exactly the same as the maximal cross sectional area. For other objects (for instance, a rolling tube or the body of a cyclist), A may be significantly larger than the area of any cross section along any plane perpendicular to the direction of motion. Airfoils use the square of the chord length as the reference area; since airfoil chords are usually defined with a length of 1, the reference area is also 1. Aircraft use the wing area (or rotor-blade area) as the reference area, which makes for an easy comparison to lift. Airships and bodies of revolution use the volumetric coefficient of drag, in which the reference area is the square of the cube root of the airship's volume. Sometimes different reference areas are given for the same object in which case a drag coefficient corresponding to each of these different areas must be given.
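As a worked example of the drag equation and the reference area discussed above, the sketch below computes the drag force on a sphere moving through air; the density, speed, diameter, and drag coefficient are assumed illustrative values rather than data from the text.

```python
import math

rho = 1.225     # air density [kg/m^3] (assumed sea-level value)
u = 30.0        # flow velocity relative to the object [m/s] (assumed)
d = 0.2         # sphere diameter [m] (assumed)
c_d = 0.47      # drag coefficient of a smooth sphere at subcritical Re (assumed)

A = math.pi * (d / 2)**2            # reference area: frontal cross section of the sphere
F_d = 0.5 * rho * u**2 * c_d * A    # drag equation

print(f"reference area A = {A:.4f} m^2")
print(f"drag force  F_d = {F_d:.2f} N")
```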
For sharp-cornered bluff bodies, like square cylinders and plates held transverse to the flow direction, this equation is applicable with the drag coefficient as a constant value when the Reynolds number is greater than 1000. For smooth bodies, like a cylinder, the drag coefficient may vary significantly up to Reynolds numbers of 10^7 (ten million).
== Discussion ==
The equation is more easily understood in the idealized situation where all of the fluid impinges on the reference area and comes to a complete stop, building up stagnation pressure over the whole area. No real object exactly corresponds to this behavior.
c_d is the ratio of drag for any real object to that of the ideal object. In practice a rough un-streamlined body (a bluff body) will have a c_d around 1, more or less. Smoother objects can have much lower values of c_d. The equation is precise – it simply provides the definition of c_d (drag coefficient), which varies with the Reynolds number and is found by experiment.
Of particular importance is the u^2 dependence on flow velocity, meaning that fluid drag increases with the square of flow velocity. When flow velocity is doubled, for example, not only does the fluid strike with twice the flow velocity, but twice the mass of fluid strikes per second. Therefore, the change of momentum per time, i.e. the force experienced, is multiplied by four. This is in contrast with solid-on-solid dynamic friction, which generally has very little velocity dependence.
== Relation with dynamic pressure ==
The drag force can also be specified as
F_d \propto P_D A
where P_D is the pressure exerted by the fluid on area A. Here the pressure P_D is referred to as dynamic pressure due to the kinetic energy of the fluid experiencing relative flow velocity u. This is defined in similar form as the kinetic energy equation:
P_D = \tfrac{1}{2}\rho u^2
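A brief sketch, with assumed values, showing that writing the drag force as the drag coefficient times the dynamic pressure times the reference area reproduces the drag equation:

```python
rho = 1.225    # fluid density [kg/m^3] (assumed)
u = 30.0       # relative flow velocity [m/s] (assumed)
c_d = 0.47     # drag coefficient (assumed)
A = 0.0314     # reference area [m^2] (assumed)

P_D = 0.5 * rho * u**2    # dynamic pressure
F_d = c_d * P_D * A       # identical to F_d = 1/2 * rho * u^2 * c_d * A

print(f"dynamic pressure P_D = {P_D:.1f} Pa, drag force F_d = {F_d:.2f} N")
```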
== Derivation ==
The drag equation may be derived to within a multiplicative constant by the method of dimensional analysis. If a moving fluid meets an object, it exerts a force on the object. Suppose that the fluid is a liquid, and the variables involved – under some conditions – are the:
speed u,
fluid density ρ,
kinematic viscosity ν of the fluid,
size of the body, expressed in terms of its wetted area A, and
drag force Fd.
Using the algorithm of the Buckingham π theorem, these five variables can be reduced to two dimensionless groups:
drag coefficient cd and
Reynolds number Re.
That this is so becomes apparent when the drag force Fd is expressed as part of a function of the other variables in the problem:
f_a(F_d, u, A, \rho, \nu) = 0.
This rather odd form of expression is used because it does not assume a one-to-one relationship. Here, fa is some (as-yet-unknown) function that takes five arguments. Now the right-hand side is zero in any system of units; so it should be possible to express the relationship described by fa in terms of only dimensionless groups.
There are many ways of combining the five arguments of fa to form dimensionless groups, but the Buckingham π theorem states that there will be two such groups. The most appropriate are the Reynolds number, given by
\mathrm{Re} = \frac{u\sqrt{A}}{\nu}
and the drag coefficient, given by
c_d = \frac{F_d}{\tfrac{1}{2}\rho A u^2}.
Thus the function of five variables may be replaced by another function of only two variables:
f_b\left(\frac{F_d}{\tfrac{1}{2}\rho A u^2}, \frac{u\sqrt{A}}{\nu}\right) = 0.
where fb is some function of two arguments.
The original law is then reduced to a law involving only these two numbers.
Because the only unknown in the above equation is the drag force Fd, it is possible to express it as
\frac{F_d}{\tfrac{1}{2}\rho A u^2} = f_c\left(\frac{u\sqrt{A}}{\nu}\right)
F_d = \tfrac{1}{2}\rho A u^2\, f_c(\mathrm{Re})
c_d = f_c(\mathrm{Re})
Thus the force is simply 1/2 ρ A u^2 times some (as-yet-unknown) function fc of the Reynolds number Re – a considerably simpler system than the original five-argument function given above.
Dimensional analysis thus makes a very complex problem (trying to determine the behavior of a function of five variables) a much simpler one: the determination of the drag as a function of only one variable, the Reynolds number.
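The sketch below illustrates this collapse numerically: drag values generated for two differently sized, differently fast objects reduce to the same function c_d = f_c(Re) once nondimensionalised. The functional form chosen for f_c and the fluid properties are assumptions for illustration only.

```python
import numpy as np

def f_c(re):
    # Assumed illustrative function of the Reynolds number (low-Re, Stokes-like form)
    return 24.0 / re

nu = 1.0e-6      # kinematic viscosity of water [m^2/s] (assumed)
rho = 1000.0     # fluid density [kg/m^3] (assumed)

# Two "experiments" with different assumed sizes and speeds
for A, u in [(1.0e-6, 0.001), (4.0e-6, 0.0005)]:
    re = u * np.sqrt(A) / nu                    # Reynolds number built on sqrt(A)
    F_d = 0.5 * rho * A * u**2 * f_c(re)        # dimensional drag force
    c_d = F_d / (0.5 * rho * A * u**2)          # nondimensionalised back to c_d
    print(f"Re = {re:8.3f}   F_d = {F_d:.3e} N   c_d = {c_d:.3f}  (= f_c(Re))")
```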
If the fluid is a gas, certain properties of the gas influence the drag and those properties must also be taken into account. Those properties are conventionally considered to be the absolute temperature of the gas, and the ratio of its specific heats. These two properties determine the speed of sound in the gas at its given temperature. The Buckingham pi theorem then leads to a third dimensionless group, the ratio of the relative velocity to the speed of sound, which is known as the Mach number. Consequently when a body is moving relative to a gas, the drag coefficient varies with the Mach number and the Reynolds number.
The analysis also gives other information for free, so to speak. The analysis shows that, other things being equal, the drag force will be proportional to the density of the fluid. This kind of information often proves to be extremely valuable, especially in the early stages of a research project.
== Air viscosity in a rotating sphere ==
The viscous air resistance on a rotating sphere is characterized by a dimensionless coefficient, analogous to the drag coefficient in the drag equation.
== Experimental methods ==
To empirically determine the Reynolds number dependence, instead of experimenting on a large body with fast-flowing fluids (such as real-size airplanes in wind tunnels), one may just as well experiment using a small model in a flow of higher velocity because these two systems deliver similitude by having the same Reynolds number. If the same Reynolds number and Mach number cannot be achieved just by using a flow of higher velocity it may be advantageous to use a fluid of greater density or lower viscosity.
== See also ==
Aerodynamic drag
Angle of attack
Morison equation
Newton's sine-square law of air resistance
Stall (flight)
Terminal velocity
== References ==
== External links ==
Batchelor, G.K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-66396-2.
Huntley, H. E. (1967). Dimensional Analysis. Dover. LOC 67-17978.
Benson, Tom. "Falling Object with Air Resistance". US: NASA. Retrieved June 9, 2022.
Benson, Tom. "The drag equation". US: NASA. Retrieved June 9, 2022. | Wikipedia/Drag_equation |
In fluid dynamics the Morison equation is a semi-empirical equation for the inline force on a body in oscillatory flow. It is sometimes called the MOJS equation after all four authors—Morison, O'Brien, Johnson and Schaaf—of the 1950 paper in which the equation was introduced. The Morison equation is used to estimate the wave loads in the design of oil platforms and other offshore structures.
== Description ==
The Morison equation is the sum of two force components: an inertia force in phase with the local flow acceleration and a drag force proportional to the (signed) square of the instantaneous flow velocity. The inertia force is of the functional form as found in potential flow theory, while the drag force has the form as found for a body placed in a steady flow. In the heuristic approach of Morison, O'Brien, Johnson and Schaaf these two force components, inertia and drag, are simply added to describe the inline force in an oscillatory flow. The transverse force—perpendicular to the flow direction, due to vortex shedding—has to be addressed separately.
The Morison equation contains two empirical hydrodynamic coefficients—an inertia coefficient and a drag coefficient—which are determined from experimental data. As shown by dimensional analysis and in experiments by Sarpkaya, these coefficients depend in general on the Keulegan–Carpenter number, Reynolds number and surface roughness.
The descriptions given below of the Morison equation are for uni-directional onflow conditions as well as body motion.
=== Fixed body in an oscillatory flow ===
In an oscillatory flow with flow velocity u(t), the Morison equation gives the inline force parallel to the flow direction:
F = \underbrace{\rho\,C_m\,V\,\dot{u}}_{F_I} + \underbrace{\tfrac{1}{2}\,\rho\,C_d\,A\,u\,|u|}_{F_D},
where
F(t) is the total inline force on the object,
u̇ ≡ du/dt is the flow acceleration, i.e. the time derivative of the flow velocity u(t),
the inertia force F_I = ρ C_m V u̇ is the sum of the Froude–Krylov force ρ V u̇ and the hydrodynamic mass force ρ C_a V u̇,
the drag force F_D = (1/2) ρ C_d A u|u| according to the drag equation,
C_m = 1 + C_a is the inertia coefficient, and C_a the added mass coefficient,
A is a reference area, e.g. the cross-sectional area of the body perpendicular to the flow direction,
V is volume of the body.
For instance, for a circular cylinder of diameter D in oscillatory flow, the reference area per unit cylinder length is A = D and the cylinder volume per unit cylinder length is V = (1/4) π D^2. As a result, F(t) is the total force per unit cylinder length:
F = C_m\,\rho\,\frac{\pi}{4}D^2\,\dot{u} + C_d\,\tfrac{1}{2}\,\rho\,D\,u\,|u|.
Besides the inline force, there are also oscillatory lift forces perpendicular to the flow direction, due to vortex shedding. These are not covered by the Morison equation, which is only for the inline forces.
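A minimal sketch of the per-unit-length Morison force on a fixed circular cylinder in a sinusoidal oscillatory flow is given below; the seawater density, cylinder diameter, force coefficients, velocity amplitude, and wave period are all assumed illustrative values.

```python
import numpy as np

rho = 1025.0        # seawater density [kg/m^3] (assumed)
D = 0.5             # cylinder diameter [m] (assumed)
Cm, Cd = 2.0, 1.0   # inertia and drag coefficients (assumed; flow-dependent in practice)
U0, T = 1.5, 8.0    # velocity amplitude [m/s] and wave period [s] (assumed)

t = np.linspace(0.0, T, 200)
omega = 2.0 * np.pi / T
u = U0 * np.sin(omega * t)             # oscillatory flow velocity u(t)
dudt = U0 * omega * np.cos(omega * t)  # flow acceleration du/dt

# Morison inline force per unit cylinder length: inertia term plus drag term
F_inertia = Cm * rho * np.pi / 4.0 * D**2 * dudt
F_drag = Cd * 0.5 * rho * D * u * np.abs(u)
F = F_inertia + F_drag

print(f"peak inertia force {np.abs(F_inertia).max():.1f} N/m, "
      f"peak drag force {np.abs(F_drag).max():.1f} N/m, "
      f"peak total force {np.abs(F).max():.1f} N/m")
```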
=== Moving body in an oscillatory flow ===
In case the body moves as well, with velocity v(t), the Morison equation becomes:
F = \underbrace{\rho\,V\,\dot{u}}_{a} + \underbrace{\rho\,C_a\,V\,(\dot{u} - \dot{v})}_{b} + \underbrace{\tfrac{1}{2}\,\rho\,C_d\,A\,(u - v)\,|u - v|}_{c}.
where the total force contributions are:
a: Froude–Krylov force, due to the pressure gradient at the body's location induced by the fluid acceleration u̇,
b: hydrodynamic mass force,
c: drag force.
Note that the added mass coefficient C_a is related to the inertia coefficient C_m as C_m = 1 + C_a.
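The same idea for a moving body can be sketched as below, evaluating the three labelled contributions with the flow and body velocities taken as assumed sinusoids; all numerical values are illustrative assumptions.

```python
import numpy as np

rho, D = 1025.0, 0.5                  # seawater density and cylinder diameter (assumed)
Ca, Cd = 1.0, 1.0                     # added-mass and drag coefficients (assumed)
V = np.pi / 4.0 * D**2                # cylinder volume per unit length
A = D                                 # reference area per unit length

t = np.linspace(0.0, 8.0, 200)
omega = 2.0 * np.pi / 8.0
u = 1.5 * np.sin(omega * t);  dudt = 1.5 * omega * np.cos(omega * t)   # flow (assumed)
v = 0.3 * np.sin(omega * t);  dvdt = 0.3 * omega * np.cos(omega * t)   # body motion (assumed)

F = (rho * V * dudt                                    # a: Froude-Krylov force
     + rho * Ca * V * (dudt - dvdt)                    # b: hydrodynamic mass force
     + 0.5 * rho * Cd * A * (u - v) * np.abs(u - v))   # c: drag force

print(f"peak inline force on the moving body: {np.abs(F).max():.1f} N/m")
```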
== Limitations ==
The Morison equation is a heuristic formulation of the force fluctuations in an oscillatory flow. The first assumption is that the flow acceleration is more-or-less uniform at the location of the body. For instance, for a vertical cylinder in surface gravity waves this requires that the diameter of the cylinder is much smaller than the wavelength. If the diameter of the body is not small compared to the wavelength, diffraction effects have to be taken into account.
Second, it is assumed that the asymptotic forms (the inertia and drag force contributions, valid for very small and very large Keulegan–Carpenter numbers respectively) can just be added to describe the force fluctuations at intermediate Keulegan–Carpenter numbers. However, from experiments it is found that in this intermediate regime—where both drag and inertia are giving significant contributions—the Morison equation is not capable of describing the force history very well, although the inertia and drag coefficients can be tuned to give the correct extreme values of the force.
Third, when extended to orbital flow, a case of non-unidirectional flow encountered for instance by a horizontal cylinder under waves, the Morison equation does not give a good representation of the forces as a function of time.
== References ==
== Further reading == | Wikipedia/Morison_equation |
In fluid dynamics, a stall is a reduction in the lift coefficient generated by a foil as angle of attack exceeds its critical value. The critical angle of attack is typically about 15°, but it may vary significantly depending on the fluid, foil – including its shape, size, and finish – and Reynolds number.
Stalls in fixed-wing aircraft are often experienced as a sudden reduction in lift. It may be caused either by the pilot increasing the wing's angle of attack or by a decrease in the critical angle of attack. The former may be due to slowing down (below stall speed), the latter by accretion of ice on the wings (especially if the ice is rough). A stall does not mean that the engine(s) have stopped working, or that the aircraft has stopped moving—the effect is the same even in an unpowered glider aircraft. Vectored thrust in aircraft is used to maintain altitude or controlled flight with wings stalled by replacing lost wing lift with engine or propeller thrust, thereby giving rise to post-stall technology.
Because stalls are most commonly discussed in connection with aviation, this article discusses stalls as they relate mainly to aircraft, in particular fixed-wing aircraft. The principles of stall discussed here translate to foils in other fluids as well.
== Formal definition ==
A stall is a condition in aerodynamics and aviation such that if the angle of attack on an aircraft increases beyond a certain point, then lift begins to decrease. The angle at which this occurs is called the critical angle of attack. If the angle of attack increases beyond the critical value, the lift decreases and the aircraft descends, further increasing the angle of attack and causing further loss of lift. The critical angle of attack is dependent upon the airfoil section or profile of the wing, its planform, its aspect ratio, and other factors, but is typically in the range of 8 to 20 degrees relative to the incoming wind (relative wind) for most subsonic airfoils. The critical angle of attack is the angle of attack on the lift coefficient versus angle-of-attack (Cl~alpha) curve at which the maximum lift coefficient occurs.
Stalling is caused by flow separation which, in turn, is caused by the air flowing against a rising pressure. Whitford describes three types of stall: trailing-edge, leading-edge and thin-aerofoil, each with distinctive Cl~alpha features. For the trailing-edge stall, separation begins at small angles of attack near the trailing edge of the wing while the rest of the flow over the wing remains attached. As angle of attack increases, the separated regions on the top of the wing increase in size as the flow separation moves forward, and this hinders the ability of the wing to create lift. This is shown by the reduction in lift-slope on a Cl~alpha curve as the lift nears its maximum value. The separated flow usually causes buffeting. Beyond the critical angle of attack, separated flow is so dominant that additional increases in angle of attack cause the lift to fall from its peak value.
Piston-engined and early jet transports had very good stall behaviour with pre-stall buffet warning and, if ignored, a straight nose-drop for a natural recovery. Wing developments that came with the introduction of turbo-prop engines introduced unacceptable stall behaviour. Leading-edge developments on high-lift wings, and the introduction of rear-mounted engines and high-set tailplanes on the next generation of jet transports, also introduced unacceptable stall behaviour. The probability of achieving the stall speed inadvertently, a potentially hazardous event, had been calculated, in 1965, at about once in every 100,000 flights, often enough to justify the cost of development of warning devices, such as stick shakers, and devices to automatically provide an adequate nose-down pitch, such as stick pushers.
When the mean angle of attack of the wings is beyond the stall a spin, which is an autorotation of a stalled wing, may develop. A spin follows departures in roll, yaw and pitch from balanced flight. For example, a roll is naturally damped with an unstalled wing, but with wings stalled the damping moment is replaced with a propelling moment.
== Variation of lift with angle of attack ==
The graph shows that the greatest amount of lift is produced as the critical angle of attack is reached (which in early-20th century aviation was called the "burble point"). This angle is 17.5 degrees in this case, but it varies from airfoil to airfoil. In particular, for aerodynamically thick airfoils (thickness to chord ratios of around 10%), the critical angle is higher than with a thin airfoil of the same camber. Symmetric airfoils have lower critical angles (but also work efficiently in inverted flight). The graph shows that, as the angle of attack exceeds the critical angle, the lift produced by the airfoil decreases.
The information in a graph of this kind is gathered using a model of the airfoil in a wind tunnel. Because aircraft models are normally used, rather than full-size machines, special care is needed to make sure that data is taken in the same Reynolds number regime (or scale speed) as in free flight. The separation of flow from the upper wing surface at high angles of attack is quite different at low Reynolds number from that at the high Reynolds numbers of real aircraft. In particular at high Reynolds numbers the flow tends to stay attached to the airfoil for longer because the inertial forces are dominant with respect to the viscous forces which are responsible for the flow separation ultimately leading to the aerodynamic stall. For this reason wind tunnel results carried out at lower speeds and on smaller scale models of the real life counterparts often tend to overestimate the aerodynamic stall angle of attack. High-pressure wind tunnels are one solution to this problem.
In general, steady operation of an aircraft at an angle of attack above the critical angle is not possible because, after exceeding the critical angle, the loss of lift from the wing causes the nose of the aircraft to fall, reducing the angle of attack again. This nose drop, independent of control inputs, indicates the pilot has actually stalled the aircraft.
This graph shows the stall angle, yet in practice most pilot operating handbooks (POH) or generic flight manuals describe stalling in terms of airspeed. This is because all aircraft are equipped with an airspeed indicator, but fewer aircraft have an angle of attack indicator. An aircraft's stalling speed is published by the manufacturer (and is required for certification by flight testing) for a range of weights and flap positions, but the stalling angle of attack is not published.
As speed reduces, angle of attack has to increase to keep lift constant until the critical angle is reached. The airspeed at which this angle is reached is the (1g, unaccelerated) stalling speed of the aircraft in that particular configuration. Deploying flaps/slats decreases the stall speed to allow the aircraft to take off and land at a lower speed.
== Aerodynamic description ==
=== Fixed-wing aircraft ===
A fixed-wing aircraft can be made to stall in any pitch attitude or bank angle or at any airspeed but deliberate stalling is commonly practiced by reducing the speed to the unaccelerated stall speed, at a safe altitude. Unaccelerated (1g) stall speed varies on different fixed-wing aircraft and is represented by colour codes on the airspeed indicator. As the plane flies at this speed, the angle of attack must be increased to prevent any loss of altitude or gain in airspeed (which corresponds to the stall angle described above). The pilot will notice the flight controls have become less responsive and may also notice some buffeting, a result of the turbulent air separated from the wing hitting the tail of the aircraft.
In most light aircraft, as the stall is reached, the aircraft will start to descend (because the wing is no longer producing enough lift to support the aircraft's weight) and the nose will pitch down. Recovery from the stall involves lowering the aircraft nose, to decrease the angle of attack and increase the air speed, until smooth air-flow over the wing is restored. Normal flight can be resumed once recovery is complete. The maneuver is normally quite safe, and, if correctly handled, leads to only a small loss in altitude (20–30 m/66–98 ft). It is taught and practised in order for pilots to recognize, avoid, and recover from stalling the aircraft. A pilot is required to demonstrate competency in controlling an aircraft during and after a stall for certification in the United States, and it is a routine maneuver for pilots when getting to know the handling of an unfamiliar aircraft type. The only dangerous aspect of a stall is a lack of altitude for recovery.
A special form of asymmetric stall in which the aircraft also rotates about its yaw axis is called a spin. A spin can occur if an aircraft is stalled and there is an asymmetric yawing moment applied to it. This yawing moment can be aerodynamic (sideslip angle, rudder, adverse yaw from the ailerons), thrust related (p-factor, one engine inoperative on a multi-engine non-centreline thrust aircraft), or from less likely sources such as severe turbulence. The net effect is that one wing is stalled before the other and the aircraft descends rapidly while rotating, and some aircraft cannot recover from this condition without correct pilot control inputs (which must stop yaw) and loading. A new solution to the problem of difficult (or impossible) stall-spin recovery is provided by the ballistic parachute recovery system.
The most common stall-spin scenarios occur on takeoff (departure stall) and during landing (base to final turn) because of insufficient airspeed during these maneuvers. Stalls also occur during a go-around manoeuvre if the pilot does not properly respond to the out-of-trim situation resulting from the transition from low power setting to high power setting at low speed. Stall speed is increased when the wing surfaces are contaminated with ice or frost creating a rougher surface, and heavier airframe due to ice accumulation.
Stalls occur not only at slow airspeed, but at any speed when the wings exceed their critical angle of attack. Attempting to increase the angle of attack at 1g by moving the control column back normally causes the aircraft to climb. However, aircraft often experience higher g-forces, such as when turning steeply or pulling out of a dive. In these cases, the wings are already operating at a higher angle of attack to create the necessary force (derived from lift) to accelerate in the desired direction. Increasing the g-loading still further, by pulling back on the controls, can cause the stalling angle to be exceeded, even though the aircraft is flying at a high speed. These "high-speed stalls" produce the same buffeting characteristics as 1g stalls and can also initiate a spin if there is also any yawing.
=== Characteristics ===
Different aircraft types have different stalling characteristics but they only have to be good enough to satisfy their particular Airworthiness authority. For example, the Short Belfast heavy freighter had a marginal nose drop which was acceptable to the Royal Air Force. When the aircraft were sold to a civil operator they had to be fitted with a stick pusher to meet the civil requirements. Some aircraft may naturally have very good behaviour well beyond what is required. For example, first generation jet transports have been described as having an immaculate nose drop at the stall. Loss of lift on one wing is acceptable as long as the roll, including during stall recovery, doesn't exceed about 20 degrees, or in turning flight the roll shall not exceed 90 degrees bank. If pre-stall warning followed by nose drop and limited wing drop are naturally not present or are deemed to be unacceptably marginal by an Airworthiness authority the stalling behaviour has to be made good enough with airframe modifications or devices such as a stick shaker and pusher. These are described in "Warning and safety devices".
== Stall speeds ==
Stalls depend only on angle of attack, not airspeed. However, the slower an aircraft flies, the greater the angle of attack it needs to produce lift equal to the aircraft's weight. As the speed decreases further, at some point this angle will be equal to the critical (stall) angle of attack. This speed is called the "stall speed". An aircraft flying at its stall speed cannot climb, and an aircraft flying below its stall speed cannot stop descending. Any attempt to do so by increasing angle of attack, without first increasing airspeed, will result in a stall.
The actual stall speed will vary depending on the airplane's weight, altitude, configuration, and vertical and lateral acceleration. Propeller slipstream reduces the stall speed by energizing the flow over the wings.
Speed definitions vary and include:
VS: Stall speed: the speed at which the airplane exhibits those qualities accepted as defining the stall.
VS0: The stall speed or minimum steady flight speed in landing configuration. The zero-thrust stall speed at the most extended landing flap setting.
VS1: The stall speed or minimum steady flight speed obtained in a specified configuration. The zero-thrust stall speed at a specified flap setting.
An airspeed indicator, for the purpose of flight-testing, may have the following markings: the bottom of the white arc indicates VS0 at maximum weight, while the bottom of the green arc indicates VS1 at maximum weight. While an aircraft's VS speed is computed by design, its VS0 and VS1 speeds must be demonstrated empirically by flight testing.
== In accelerated and turning flight ==
The normal stall speed, specified by the VS values above, always refers to straight and level flight, where the load factor is equal to 1g. However, if the aircraft is turning or pulling up from a dive, additional lift is required to provide the vertical or lateral acceleration, and so the stall speed is higher. An accelerated stall is a stall that occurs under such conditions.
In a banked turn, the lift required is equal to the weight of the aircraft plus extra lift to provide the centripetal force necessary to perform the turn:
L = nW
where:
L = lift
n = load factor (greater than 1 in a turn)
W = weight of the aircraft
To achieve the extra lift, the lift coefficient, and so the angle of attack, will have to be higher than it would be in straight and level flight at the same speed. Therefore, given that the stall always occurs at the same critical angle of attack, by increasing the load factor (e.g. by tightening the turn) the critical angle will be reached at a higher airspeed:
V_st = V_s √n
where:
V_st = stall speed
V_s = stall speed of the aircraft in straight, level flight
n = load factor
The table that follows gives some examples of the relation between the angle of bank and the square root of the load factor. It derives from the trigonometric relation (secant) between L and W.
For example, in a turn with bank angle of 45°, Vst is 19% higher than Vs.
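A short sketch of this relation, using the secant rule for the load factor in a level banked turn, is given below; the 1 g stall speed is an assumed value chosen only for illustration.

```python
import math

Vs = 50.0   # 1 g stall speed in straight, level flight [knots] (assumed)

for bank_deg in [0, 15, 30, 45, 60]:
    n = 1.0 / math.cos(math.radians(bank_deg))   # load factor in a level banked turn (secant relation)
    Vst = Vs * math.sqrt(n)                      # stall speed at that load factor
    print(f"bank {bank_deg:2d} deg:  n = {n:.2f}   Vst = {Vst:.1f} kt  (+{100 * (Vst / Vs - 1):.0f}%)")
```

At a 45° bank this reproduces the roughly 19% increase quoted above.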
According to Federal Aviation Administration (FAA) terminology, the above example illustrates a so-called turning flight stall, while the term accelerated is used to indicate an accelerated turning stall only, that is, a turning flight stall where the airspeed decreases at a given rate.
The tendency of powerful propeller aircraft to roll in reaction to engine torque creates a risk of accelerated stalls. When an aircraft such as a Mitsubishi MU-2 is flying close to its stall speed, the sudden application of full power may cause it to roll, creating the same aerodynamic conditions that induce an accelerated stall in turning flight even if the pilot did not deliberately initiate a turn. Pilots of such aircraft are trained to avoid sudden and drastic increases in power at low altitude and low airspeed as it may be difficult to recover from an accelerated stall under these conditions.
A notable example of an air accident involving a low-altitude turning flight stall is the 1994 Fairchild Air Force Base B-52 crash.
== Types ==
=== Dynamic stall ===
Dynamic stall is a non-linear unsteady aerodynamic effect that occurs when airfoils rapidly change the angle of attack. The rapid change can cause a strong vortex to be shed from the leading edge of the aerofoil, and travel backwards above the wing. The vortex, containing high-velocity airflows, briefly increases the lift produced by the wing. As soon as it passes behind the trailing edge, however, the lift reduces dramatically, and the wing is in normal stall.
Dynamic stall is an effect most associated with helicopters and flapping wings, though it also occurs in wind turbines and in gusting airflow. During forward flight, some regions of a helicopter blade may incur flow that reverses (compared to the direction of blade movement), and thus includes rapidly changing angles of attack. Oscillating (flapping) wings, such as those of insects like the bumblebee, may rely almost entirely on dynamic stall for lift production, provided the oscillations are fast compared to the speed of flight and the angle of the wing changes rapidly compared to the airflow direction.
Stall delay can occur on airfoils subject to a high angle of attack and a three-dimensional flow. When the angle of attack on an airfoil is increasing rapidly, the flow will remain substantially attached to the airfoil to a significantly higher angle of attack than can be achieved in steady-state conditions. As a result, the stall is delayed momentarily and a lift coefficient significantly higher than the steady-state maximum is achieved. The effect was first noticed on propellers.
=== Deep stall ===
A deep stall (or super-stall) is a dangerous type of stall that affects certain aircraft designs, notably jet aircraft with a T-tail configuration and rear-mounted engines. In these designs, the turbulent wake of a stalled main wing, nacelle-pylon wakes and the wake from the fuselage "blanket" the horizontal stabilizer, rendering the elevators ineffective and preventing the aircraft from recovering from the stall. Aircraft with rear-mounted nacelles may also exhibit a loss of thrust. T-tail propeller aircraft are generally resistant to deep stalls, because the prop wash increases airflow over the wing root, but may be fitted with a precautionary vertical tail booster during flight testing, as happened with the A400M.
Trubshaw gives a broad definition of deep stall as penetrating to such angles of attack α that pitch control effectiveness is reduced by the wing and nacelle wakes. He also gives a definition that relates deep stall to a locked-in condition where recovery is impossible. This is a single value of α, for a given aircraft configuration, where there is no pitching moment, i.e. a trim point.
Typical values both for the range of deep stall, as defined above, and the locked-in trim point are given for the Douglas DC-9 Series 10 by Schaufele. These values are from wind-tunnel tests for an early design. The final design had no locked-in trim point, so recovery from the deep stall region was possible, as required to meet certification rules. Normal stall beginning at the "g break" (sudden decrease of the vertical load factor) was at α = 18°, deep stall started at about 30°, and the locked-in unrecoverable trim point was at 47°.
The very high α for a deep stall locked-in condition occurs well beyond the normal stall but can be attained very rapidly, as the aircraft is unstable beyond the normal stall and requires immediate action to arrest it. The loss of lift causes high sink rates, which, together with the low forward speed at the normal stall, give a high α with little or no rotation of the aircraft. BAC 1-11 G-ASHG, during stall flight tests before the type was modified to prevent a locked-in deep-stall condition, descended at over 10,000 feet per minute (50 m/s) and struck the ground in a flat attitude moving only 70 feet (20 m) forward after initial impact. Sketches showing how the wing wake blankets the tail may be misleading if they imply that deep stall requires a high body angle. Taylor and Ray show how the aircraft attitude in the deep stall is relatively flat, even less than during the normal stall, with very high negative flight-path angles.
Effects similar to deep stall had been known to occur on some aircraft designs before the term was coined. A prototype Gloster Javelin (serial WD808) was lost in a crash on 11 June 1953 to a "locked-in" stall. However, Waterton states that the trimming tailplane was found to be the wrong way for recovery. Low-speed handling tests were being done to assess a new wing. Handley Page Victor XL159 was lost to a "stable stall" on 23 March 1962. It had been clearing the fixed droop leading edge with the test being stall approach, landing configuration, C of G aft. The brake parachute had not been streamed, as it may have hindered rear crew escape.
The name "deep stall" first came into widespread use after the crash of the prototype BAC 1-11 G-ASHG on 22 October 1963, which killed its crew. This led to changes to the aircraft, including the installation of a stick shaker (see below) to clearly warn the pilot of an impending stall. Stick shakers are now a standard part of commercial airliners. Nevertheless, the problem continues to cause accidents; on 3 June 1966, a Hawker Siddeley Trident (G-ARPY), was lost to deep stall; deep stall is suspected to be cause of another Trident (the British European Airways Flight 548 G-ARPI) crash – known as the "Staines Disaster" – on 18 June 1972, when the crew failed to notice the conditions and had disabled the stall-recovery system. On 3 April 1980, a prototype of the Canadair Challenger business jet crashed after initially entering a deep stall from 17,000 ft and having both engines flame-out. It recovered from the deep stall after deploying the anti-spin parachute but crashed after being unable to jettison the chute or relight the engines. One of the test pilots was unable to escape from the aircraft in time and was killed. On 26 July 1993, a Canadair CRJ-100 was lost in flight testing due to a deep stall. It has been reported that a Boeing 727 entered a deep stall in a flight test, but the pilot was able to rock the airplane to increasingly higher bank angles until the nose finally fell through and normal control response was recovered. The crash of West Caribbean Airways Flight 708 in 2005 was also attributed to a deep stall.
Deep stalls can occur at apparently normal pitch attitudes, if the aircraft is descending quickly enough. The airflow is coming from below, so the angle of attack is increased. Early speculation on reasons for the crash of Air France Flight 447 blamed an unrecoverable deep stall, since it descended in an almost flat attitude (15°) at an angle of attack of 35° or more. However, it was held in a stalled glide by the pilots, who held the nose up amid all the confusion of what was actually happening to the aircraft.
Canard-configured aircraft are also at risk of getting into a deep stall. Two Velocity aircraft crashed due to locked-in deep stalls. Testing revealed that the addition of leading-edge cuffs to the outboard wing prevented the aircraft from getting into a deep stall. The Piper Advanced Technologies PAT-1, N15PT, another canard-configured aircraft, also crashed in an accident attributed to a deep stall. Wind-tunnel testing of the design at the NASA Langley Research Center showed that it was vulnerable to a deep stall.
In the early 1980s, a Schweizer SGS 1-36 sailplane was modified for NASA's controlled deep-stall flight program.
=== Tip stall ===
Wing sweep and taper cause stalling at the tip of a wing before the root. The position of a swept wing along the fuselage has to be such that the lift from the wing root, well forward of the aircraft center of gravity (c.g.), must be balanced by the wing tip, well aft of the c.g. If the tip stalls first the balance of the aircraft is upset causing dangerous nose pitch up. Swept wings have to incorporate features which prevent pitch-up caused by premature tip stall.
A swept wing has a higher lift coefficient on its outer panels than on the inner wing, causing them to reach their maximum lift capability first and to stall first. This is caused by the downwash pattern associated with swept/tapered wings. To delay tip stall the outboard wing is given washout to reduce its angle of attack. The root can also be modified with a suitable leading-edge and airfoil section to make sure it stalls before the tip. However, when taken beyond stalling incidence the tips may still become fully stalled before the inner wing despite initial separation occurring inboard. This causes pitch-up after the stall and entry to a super-stall on those aircraft with super-stall characteristics. Span-wise flow of the boundary layer is also present on swept wings and causes tip stall. The amount of boundary layer air flowing outboard can be reduced by generating vortices with a leading-edge device such as a fence, notch, saw tooth or a set of vortex generators behind the leading edge.
== Warning and safety devices ==
Fixed-wing aircraft can be equipped with devices to prevent or postpone a stall or to make it less (or in some cases more) severe, or to make recovery easier.
An aerodynamic twist can be introduced to the wing with the leading edge near the wing tip twisted downward. This is called washout and causes the wing root to stall before the wing tip. This makes the stall gentle and progressive. Since the stall is delayed at the wing tips, where the ailerons are, roll control is maintained when the stall begins.
A stall strip is a small sharp-edged device that, when attached to the leading edge of a wing, encourages the stall to start there in preference to any other location on the wing. If attached close to the wing root, it makes the stall gentle and progressive; if attached near the wing tip, it encourages the aircraft to drop a wing when stalling.
A stall fence is a flat plate aligned with the chord that stops separated flow from progressing outboard along the wing.
Vortex generators are tiny strips of metal or plastic placed on top of the wing near the leading edge that protrude past the boundary layer into the free stream. As the name implies, they energize the boundary layer by mixing free-stream airflow with boundary-layer flow, creating vortices that increase the momentum in the boundary layer. By increasing the momentum of the boundary layer, airflow separation and the resulting stall may be delayed.
An anti-stall strake is a leading edge extension that generates a vortex on the wing upper surface to postpone the stall.
A stick pusher is a mechanical device that prevents the pilot from stalling an aircraft. It pushes the elevator control forward as the stall is approached, causing a reduction in the angle of attack. In generic terms, a stick pusher is known as a stall identification device or stall identification system.
A stick shaker is a mechanical device that shakes the pilot's controls to warn of the onset of stall.
A stall warning is an electronic or mechanical device that sounds an audible warning as the stall speed is approached. The majority of aircraft contain some form of this device that warns the pilot of an impending stall. The simplest such device is a stall warning horn, which consists of either a pressure sensor or a movable metal tab that actuates a switch and produces an audible warning in response.
An angle-of-attack indicator for light aircraft, the "AlphaSystemsAOA" and a nearly identical "lift reserve indicator", are both pressure-differential instruments that display margin above stall and/or angle of attack on an instantaneous, continuous readout. The General Technics CYA-100 displays true angle of attack via a magnetically coupled vane. An AOA indicator provides a visual display of the amount of available lift throughout its slow-speed envelope regardless of the many variables that act upon an aircraft. This indicator is immediately responsive to changes in speed, angle of attack, and wind conditions, and automatically compensates for aircraft weight, altitude, and temperature.
An angle of attack limiter or an "alpha limiter" is a flight computer that automatically prevents pilot input from causing the plane to rise over the stall angle. Some alpha limiters can be disabled by the pilot.
Stall warning systems often involve inputs from a broad range of sensors and systems to include a dedicated angle of attack sensor.
Blockage, damage, or inoperation of stall and angle of attack (AOA) probes can lead to unreliability of the stall warning and cause the stick pusher, overspeed warning, autopilot, and yaw damper to malfunction.
If a forward canard is used for pitch control, rather than an aft tail, the canard is designed to meet the airflow at a slightly greater angle of attack than the wing. Therefore, when the aircraft pitch increases abnormally, the canard will usually stall first, causing the nose to drop and so preventing the wing from reaching its critical AOA. Thus, the risk of main-wing stalling is greatly reduced. However, if the main wing stalls, recovery becomes difficult, as the canard is more deeply stalled, and angle of attack increases rapidly.
If an aft tail is used, the wing is designed to stall before the tail. In this case, the wing can be flown at higher lift coefficient (closer to stall) to produce more overall lift.
Most military combat aircraft have an angle of attack indicator among the pilot's instruments, which lets the pilot know precisely how close to the stall point the aircraft is. Modern airliner instrumentation may also measure angle of attack, although this information may not be directly displayed on the pilot's display, instead driving a stall warning indicator or giving performance information to the flight computer (for fly-by-wire systems).
== Flight beyond the stall ==
As a wing stalls, aileron effectiveness is reduced, rendering the plane difficult to control and increasing the risk of a spin. Post stall, steady flight beyond the stalling angle (where the coefficient of lift is largest) requires engine thrust to replace lift, as well as alternative controls to replace the loss of effectiveness of the ailerons. Short-term stalls at 90–120° (e.g. Pugachev's cobra) are sometimes performed at airshows. The highest angle of attack in sustained flight so far demonstrated was 70° in the X-31 at the Dryden Flight Research Center. Sustained post-stall flight is a type of supermaneuverability.
== Spoilers ==
Except for flight training, airplane testing, and aerobatics, a stall is usually an undesirable event. Spoilers (sometimes called lift dumpers), however, are devices that are intentionally deployed to create a carefully controlled flow separation over part of an aircraft's wing to reduce the lift it generates, increase the drag, and allow the aircraft to descend more rapidly without gaining speed. Spoilers are also deployed asymmetrically (one wing only) to enhance roll control. Spoilers can also be used on aborted take-offs and after main wheel contact on landing to increase the aircraft's weight on its wheels for better braking action.
Unlike powered airplanes, which can control descent by increasing or decreasing thrust, gliders have to increase drag to increase the rate of descent. In high-performance gliders, spoiler deployment is extensively used to control the approach to landing.
Spoilers can also be thought of as "lift reducers" because they reduce the lift of the wing in which the spoiler resides. For example, an uncommanded roll to the left could be reversed by raising the right wing spoiler (or only a few of the spoilers present in large airliner wings). This has the advantage of avoiding the need to increase lift in the wing that is dropping (which may bring that wing closer to stalling).
== History ==
German aviator Otto Lilienthal died while flying in 1896 as the result of a stall. Wilbur Wright encountered stalls for the first time in 1901, while flying his second glider. Awareness of Lilienthal's accident and Wilbur's experience motivated the Wright Brothers to design their plane in "canard" configuration. This purportedly made recoveries from stalls easier and more gentle. The design allegedly saved the brothers' lives more than once. However, without careful design, a canard configuration can actually make a stall unrecoverable.
The aircraft engineer Juan de la Cierva worked on his "Autogiro" project to develop a rotary wing aircraft which, he hoped, would be unable to stall and which therefore would be safer than aeroplanes. In developing the resulting "autogyro" aircraft, he solved many engineering problems which made the helicopter possible.
== See also ==
Articles
Aviation safety
Coffin corner (aerodynamics)
Compressor stall
Lift coefficient
Spin (flight)
Spoiler (aeronautics)
Wing twist
Notable accidents
1963 BAC One-Eleven test crash
1966 Felthorpe Trident crash
British European Airways Flight 548
China Airlines Flight 140
China Airlines Flight 676
Yeti Airlines Flight 691
Air France Flight 447
Colgan Air Flight 3407
Turkish Airlines Flight 1951
Indonesia AirAsia Flight 8501
West Caribbean Airways Flight 708
2017 Teterboro Learjet crash
Northwest Orient Airlines Flight 6231
Voepass Linhas Aéreas Flight 2283
== Notes ==
== References ==
USAF & NATO Report RTO-TR-015 AC/323/(HFM-015)/TP-1 (2001).
Anderson, J.D., A History of Aerodynamics (1997). Cambridge University Press. ISBN 0-521-66955-3
Federal Aviation Administration (2007). "Slow Flight, Stalls, and Spins". Airplane Flying Handbook (2nd ed.). New York: Skyhorse Publishing. pp. 4-1 to 4-16. ISBN 978-1-60239-003-4.
L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London. ISBN 0-273-01120-0
Stengel, R. (2004), Flight Dynamics, Princeton University Press, ISBN 0-691-11407-2 | Wikipedia/Stall_(fluid_mechanics) |
In geophysical fluid dynamics, an approximation whereby the Coriolis parameter, f, is set to vary linearly in space is called a beta plane approximation.
On a rotating sphere such as the Earth, f varies with the sine of latitude; in the so-called f-plane approximation, this variation is ignored, and a value of f appropriate for a particular latitude is used throughout the domain. This approximation can be visualized as a tangent plane touching the surface of the sphere at this latitude.
A more accurate model is a linear Taylor series approximation to this variability about a given latitude φ_0:
f = f_0 + βy,
where f_0 is the Coriolis parameter at φ_0, β = (df/dy)|_{φ_0} = 2Ω cos(φ_0)/a is the Rossby parameter, y is the meridional distance from φ_0, Ω is the angular rotation rate of the Earth, and a is the Earth's radius.
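A minimal sketch of the approximation is given below, evaluating f_0 and β at a chosen reference latitude; the latitude is an assumed example value, while the rotation rate and radius are standard Earth constants.

```python
import math

Omega = 7.2921e-5           # Earth's angular rotation rate [rad/s]
a = 6.371e6                 # Earth's mean radius [m]
phi0 = math.radians(45.0)   # reference latitude (assumed 45 degrees N for illustration)

f0 = 2.0 * Omega * math.sin(phi0)          # Coriolis parameter at phi0
beta = 2.0 * Omega * math.cos(phi0) / a    # Rossby parameter

y = 1.0e5                                  # 100 km north of the reference latitude
f = f0 + beta * y                          # beta-plane approximation of f

print(f"f0   = {f0:.3e} 1/s")
print(f"beta = {beta:.3e} 1/(m s)")
print(f"f at y = 100 km: {f:.3e} 1/s")
```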
In analogy with the f-plane, this approximation is termed the beta plane, even though it no longer describes dynamics on a hypothetical tangent plane. The advantage of the beta plane approximation over more accurate formulations is that it does not contribute nonlinear terms to the dynamical equations; such terms make the equations harder to solve. The name 'beta plane' derives from the convention to denote the linear coefficient of variation with the Greek letter β.
The beta plane approximation is useful for the theoretical analysis of many phenomena in geophysical fluid dynamics since it makes the equations much more tractable, yet retains the important information that the Coriolis parameter varies in space. In particular, Rossby waves, the most important type of waves if one considers large-scale atmospheric and oceanic dynamics, depend on the variation of f as a restoring force; they do not occur if the Coriolis parameter is approximated only as a constant.
== See also ==
Rossby parameter
Coriolis effect
Coriolis frequency
Baroclinic instability
Quasi-geostrophic equations
== References ==
Holton, J. R., An introduction to dynamical meteorology, Academic Press, 2004. ISBN 978-0-12-354015-7.
Pedlosky, J., Geophysical fluid dynamics, Springer-Verlag, 1992. ISBN 978-0-387-96387-7. | Wikipedia/Beta-plane_approximation |
In physics and continuum mechanics, deformation is the change in the shape or size of an object. It has dimension of length with SI unit of metre (m). It is quantified as the residual displacement of particles in a non-rigid body, from an initial configuration to a final configuration, excluding the body's average translation and rotation (its rigid transformation). A configuration is a set containing the positions of all particles of the body.
A deformation can occur because of external loads, intrinsic activity (e.g. muscle contraction), body forces (such as gravity or electromagnetic forces), or changes in temperature, moisture content, or chemical reactions, etc.
In a continuous body, a deformation field results from a stress field due to applied forces or because of some changes in the conditions of the body. The relation between stress and strain (relative deformation) is expressed by constitutive equations, e.g., Hooke's law for linear elastic materials.
Deformations which cease to exist after the stress field is removed are termed as elastic deformation. In this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations may remain, and these exist even after stresses have been removed. One type of irreversible deformation is plastic deformation, which occurs in material bodies after stresses have attained a certain threshold value known as the elastic limit or yield stress, and are the result of slip, or dislocation mechanisms at the atomic level. Another type of irreversible deformation is viscous deformation, which is the irreversible part of viscoelastic deformation.
In the case of elastic deformations, the response function linking strain to the deforming stress is the compliance tensor of the material.
== Definition and formulation ==
Deformation is the change in the metric properties of a continuous body, meaning that a curve drawn in the initial body placement changes its length when displaced to a curve in the final placement. If none of the curves changes length, it is said that a rigid body displacement occurred.
It is convenient to identify a reference configuration or initial geometric state of the continuum body, from which all subsequent configurations are referenced. The reference configuration need not be one that the body ever actually occupies. Often, the configuration at t = 0 is considered the reference configuration, κ_0(B). The configuration at the current time t is the current configuration.
For deformation analysis, the reference configuration is identified as undeformed configuration, and the current configuration as deformed configuration. Additionally, time is not considered when analyzing deformation, thus the sequence of configurations between the undeformed and deformed configurations are of no interest.
The components X_i of the position vector X of a particle in the reference configuration, taken with respect to the reference coordinate system, are called the material or reference coordinates. On the other hand, the components x_i of the position vector x of a particle in the deformed configuration, taken with respect to the spatial coordinate system of reference, are called the spatial coordinates.
There are two methods for analysing the deformation of a continuum. One description is made in terms of the material or referential coordinates, called the material description or Lagrangian description. A second description of deformation is made in terms of the spatial coordinates; it is called the spatial description or Eulerian description.
There is continuity during deformation of a continuum body in the sense that:
The material points forming a closed curve at any instant will always form a closed curve at any subsequent time.
The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within.
=== Affine deformation ===
An affine deformation is a deformation that can be completely described by an affine transformation. Such a transformation is composed of a linear transformation (such as rotation, shear, extension and compression) and a rigid body translation. Affine deformations are also called homogeneous deformations.
Therefore, an affine deformation has the form
{\displaystyle \mathbf {x} (\mathbf {X} ,t)={\boldsymbol {F}}(t)\cdot \mathbf {X} +\mathbf {c} (t)}
where x is the position of a point in the deformed configuration, X is the position in a reference configuration, t is a time-like parameter, F is the linear transformation and c is the translation. In matrix form, where the components are with respect to an orthonormal basis,
{\displaystyle {\begin{bmatrix}x_{1}(X_{1},X_{2},X_{3},t)\\x_{2}(X_{1},X_{2},X_{3},t)\\x_{3}(X_{1},X_{2},X_{3},t)\end{bmatrix}}={\begin{bmatrix}F_{11}(t)&F_{12}(t)&F_{13}(t)\\F_{21}(t)&F_{22}(t)&F_{23}(t)\\F_{31}(t)&F_{32}(t)&F_{33}(t)\end{bmatrix}}{\begin{bmatrix}X_{1}\\X_{2}\\X_{3}\end{bmatrix}}+{\begin{bmatrix}c_{1}(t)\\c_{2}(t)\\c_{3}(t)\end{bmatrix}}}
The above deformation becomes non-affine or inhomogeneous if F = F(X,t) or c = c(X,t).
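As an illustration, the short NumPy sketch below (with assumed, purely illustrative values for F and c) applies the homogeneous map x = F·X + c to a few material points and checks a defining property of affine deformations: midpoints of segments are mapped to midpoints of the image segments.
```python
import numpy as np

# Hypothetical affine deformation: F combines a stretch and a shear, c is a translation.
F = np.array([[1.2, 0.3, 0.0],
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 1.0]])
c = np.array([0.5, -0.2, 0.0])

def deform(X):
    """Map material points X (shape (n, 3)) to spatial points x = F·X + c."""
    return X @ F.T + c

# Two material points and their midpoint.
A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 2.0, 0.0])
M = 0.5 * (A + B)

a, b, m = deform(np.vstack([A, B, M]))

# An affine map sends the midpoint of a segment to the midpoint of the image segment.
assert np.allclose(m, 0.5 * (a + b))
print("deformed midpoint:", m)
```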
=== Rigid body motion ===
A rigid body motion is a special affine deformation that does not involve any shear, extension or compression. The transformation matrix F is proper orthogonal in order to allow rotations but no reflections.
A rigid body motion can be described by
{\displaystyle \mathbf {x} (\mathbf {X} ,t)={\boldsymbol {Q}}(t)\cdot \mathbf {X} +\mathbf {c} (t)}
where
{\displaystyle {\boldsymbol {Q}}\cdot {\boldsymbol {Q}}^{T}={\boldsymbol {Q}}^{T}\cdot {\boldsymbol {Q}}={\boldsymbol {\mathit {1}}}}
In matrix form,
{\displaystyle {\begin{bmatrix}x_{1}(X_{1},X_{2},X_{3},t)\\x_{2}(X_{1},X_{2},X_{3},t)\\x_{3}(X_{1},X_{2},X_{3},t)\end{bmatrix}}={\begin{bmatrix}Q_{11}(t)&Q_{12}(t)&Q_{13}(t)\\Q_{21}(t)&Q_{22}(t)&Q_{23}(t)\\Q_{31}(t)&Q_{32}(t)&Q_{33}(t)\end{bmatrix}}{\begin{bmatrix}X_{1}\\X_{2}\\X_{3}\end{bmatrix}}+{\begin{bmatrix}c_{1}(t)\\c_{2}(t)\\c_{3}(t)\end{bmatrix}}}
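A minimal numerical check of these conditions, with an assumed rotation angle and translation, verifies that Q is proper orthogonal (QQᵀ = I, det Q = +1) and that distances between material points are preserved.
```python
import numpy as np

theta = np.deg2rad(30.0)          # assumed rotation angle about the X3 axis
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
c = np.array([1.0, 2.0, 3.0])     # assumed translation

# Proper orthogonality: Q·Q^T = Q^T·Q = I and det(Q) = +1 (rotation, not reflection).
assert np.allclose(Q @ Q.T, np.eye(3)) and np.isclose(np.linalg.det(Q), 1.0)

X1, X2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])
x1, x2 = Q @ X1 + c, Q @ X2 + c

# A rigid body motion preserves the distance between any two material points.
assert np.isclose(np.linalg.norm(x2 - x1), np.linalg.norm(X2 - X1))
```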
== Background: displacement ==
A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ0(B) to a current or deformed configuration κt(B) (Figure 1).
If after a displacement of the continuum there is a relative displacement between particles, a deformation has occurred. On the other hand, if after displacement of the continuum the relative displacement between particles in the current configuration is zero, then there is no deformation and a rigid-body displacement is said to have occurred.
The vector joining the positions of a particle P in the undeformed configuration and deformed configuration is called the displacement vector u(X,t) = uiei in the Lagrangian description, or U(x,t) = UJEJ in the Eulerian description.
A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {b} (\mathbf {X} ,t)+\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=\alpha _{iJ}b_{J}+x_{i}-\alpha _{iJ}X_{J}}
or in terms of the spatial coordinates as
{\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {b} (\mathbf {x} ,t)+\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=b_{J}+\alpha _{Ji}x_{i}-X_{J}}
where αJi are the direction cosines between the material and spatial coordinate systems with unit vectors EJ and ei, respectively. Thus
{\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\alpha _{Ji}=\alpha _{iJ}}
and the relationship between ui and UJ is then given by
{\displaystyle u_{i}=\alpha _{iJ}U_{J}\qquad {\text{or}}\qquad U_{J}=\alpha _{Ji}u_{i}}
Knowing that
{\displaystyle \mathbf {e} _{i}=\alpha _{iJ}\mathbf {E} _{J}}
then
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=u_{i}\mathbf {e} _{i}=u_{i}(\alpha _{iJ}\mathbf {E} _{J})=U_{J}\mathbf {E} _{J}=\mathbf {U} (\mathbf {x} ,t)}
It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in b = 0, and the direction cosines become Kronecker deltas:
{\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\delta _{Ji}=\delta _{iJ}}
Thus, we have
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=x_{i}-\delta _{iJ}X_{J}=x_{i}-X_{i}}
or in terms of the spatial coordinates as
{\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=\delta _{Ji}x_{i}-X_{J}=x_{J}-X_{J}}
=== Displacement gradient tensor ===
The partial differentiation of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇Xu. Thus we have:
{\displaystyle {\begin{aligned}\mathbf {u} (\mathbf {X} ,t)&=\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \\\nabla _{\mathbf {X} }\mathbf {u} &=\nabla _{\mathbf {X} }\mathbf {x} -\mathbf {I} \\\nabla _{\mathbf {X} }\mathbf {u} &=\mathbf {F} -\mathbf {I} \end{aligned}}}
or
{\displaystyle {\begin{aligned}u_{i}&=x_{i}-\delta _{iJ}X_{J}=x_{i}-X_{i}\\{\frac {\partial u_{i}}{\partial X_{K}}}&={\frac {\partial x_{i}}{\partial X_{K}}}-\delta _{iK}\end{aligned}}}
where F is the deformation gradient tensor.
Similarly, the partial differentiation of the displacement vector with respect to the spatial coordinates yields the spatial displacement gradient tensor ∇xU. Thus we have,
{\displaystyle {\begin{aligned}\mathbf {U} (\mathbf {x} ,t)&=\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\\\nabla _{\mathbf {x} }\mathbf {U} &=\mathbf {I} -\nabla _{\mathbf {x} }\mathbf {X} \\\nabla _{\mathbf {x} }\mathbf {U} &=\mathbf {I} -\mathbf {F} ^{-1}\end{aligned}}}
or
{\displaystyle {\begin{aligned}U_{J}&=\delta _{Ji}x_{i}-X_{J}=x_{J}-X_{J}\\{\frac {\partial U_{J}}{\partial x_{k}}}&=\delta _{Jk}-{\frac {\partial X_{J}}{\partial x_{k}}}\end{aligned}}}
== Examples ==
Homogeneous (or affine) deformations are useful in elucidating the behavior of materials. Some homogeneous deformations of interest are
uniform extension
pure dilation
equibiaxial tension
simple shear
pure shear
Linear or longitudinal deformations of long objects, such as beams and fibers, are called elongation or shortening; derived quantities are the relative elongation and the stretch ratio.
Plane deformations are also of interest, particularly in the experimental context.
Volume deformation is a uniform scaling due to isotropic compression; the relative volume deformation is called volumetric strain.
=== Plane deformation ===
A plane deformation, also called plane strain, is one where the deformation is restricted to one of the planes in the reference configuration. If the deformation is restricted to the plane described by the basis vectors e1, e2, the deformation gradient has the form
{\displaystyle {\boldsymbol {F}}=F_{11}\mathbf {e} _{1}\otimes \mathbf {e} _{1}+F_{12}\mathbf {e} _{1}\otimes \mathbf {e} _{2}+F_{21}\mathbf {e} _{2}\otimes \mathbf {e} _{1}+F_{22}\mathbf {e} _{2}\otimes \mathbf {e} _{2}+\mathbf {e} _{3}\otimes \mathbf {e} _{3}}
In matrix form,
{\displaystyle {\boldsymbol {F}}={\begin{bmatrix}F_{11}&F_{12}&0\\F_{21}&F_{22}&0\\0&0&1\end{bmatrix}}}
From the polar decomposition theorem, the deformation gradient, up to a change of coordinates, can be decomposed into a stretch and a rotation. Since all the deformation is in a plane, we can write
{\displaystyle {\boldsymbol {F}}={\boldsymbol {R}}\cdot {\boldsymbol {U}}={\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}{\begin{bmatrix}\lambda _{1}&0&0\\0&\lambda _{2}&0\\0&0&1\end{bmatrix}}}
where θ is the angle of rotation and λ1, λ2 are the principal stretches.
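A small numerical sketch (with an assumed rotation angle and assumed principal stretches) builds F = R·U for a plane deformation and then recovers the stretch part from C = FᵀF, which illustrates the polar decomposition used above.
```python
import numpy as np

theta = np.deg2rad(20.0)          # assumed rotation angle
lam1, lam2 = 1.5, 0.8             # assumed principal stretches

R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
U = np.diag([lam1, lam2, 1.0])
F = R @ U

# The right stretch tensor is the (symmetric) square root of C = F^T·F.
C = F.T @ F
eigval, eigvec = np.linalg.eigh(C)
U_recovered = eigvec @ np.diag(np.sqrt(eigval)) @ eigvec.T
assert np.allclose(U_recovered, U)

# The rotation then follows as R = F·U^{-1}.
assert np.allclose(F @ np.linalg.inv(U_recovered), R)
```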
==== Isochoric plane deformation ====
If the deformation is isochoric (volume preserving) then det(F) = 1 and we have
{\displaystyle F_{11}F_{22}-F_{12}F_{21}=1}
Alternatively,
{\displaystyle \lambda _{1}\lambda _{2}=1}
==== Simple shear ====
A simple shear deformation is defined as an isochoric plane deformation in which there is a set of line elements with a given reference orientation that do not change length and orientation during the deformation.
If e1 is the fixed reference orientation in which line elements do not deform during the deformation then λ1 = 1 and F·e1 = e1.
Therefore,
{\displaystyle F_{11}\mathbf {e} _{1}+F_{21}\mathbf {e} _{2}=\mathbf {e} _{1}\quad \implies \quad F_{11}=1~;~~F_{21}=0}
Since the deformation is isochoric,
{\displaystyle F_{11}F_{22}-F_{12}F_{21}=1\quad \implies \quad F_{22}=1}
Define
{\displaystyle \gamma :=F_{12}}
Then, the deformation gradient in simple shear can be expressed as
{\displaystyle {\boldsymbol {F}}={\begin{bmatrix}1&\gamma &0\\0&1&0\\0&0&1\end{bmatrix}}}
Now,
{\displaystyle {\boldsymbol {F}}\cdot \mathbf {e} _{2}=F_{12}\mathbf {e} _{1}+F_{22}\mathbf {e} _{2}=\gamma \mathbf {e} _{1}+\mathbf {e} _{2}\quad \implies \quad {\boldsymbol {F}}\cdot (\mathbf {e} _{2}\otimes \mathbf {e} _{2})=\gamma \mathbf {e} _{1}\otimes \mathbf {e} _{2}+\mathbf {e} _{2}\otimes \mathbf {e} _{2}}
Since
{\displaystyle \mathbf {e} _{i}\otimes \mathbf {e} _{i}={\boldsymbol {\mathit {1}}}}
we can also write the deformation gradient as
{\displaystyle {\boldsymbol {F}}={\boldsymbol {\mathit {1}}}+\gamma \mathbf {e} _{1}\otimes \mathbf {e} _{2}}
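The following sketch (with an assumed amount of shear γ) constructs F = 1 + γ e1⊗e2 and verifies the defining properties of simple shear derived above: the deformation is isochoric and line elements along e1 keep their length and orientation.
```python
import numpy as np

gamma = 0.5                                  # assumed amount of shear
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

F = np.eye(3) + gamma * np.outer(e1, e2)     # F = 1 + γ e1 ⊗ e2

assert np.isclose(np.linalg.det(F), 1.0)     # isochoric (volume preserving)
assert np.allclose(F @ e1, e1)               # e1 keeps its length and orientation
print(F @ e2)                                # e2 is mapped to γ e1 + e2
```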
== See also ==
The deformation of long elements such as beams or studs due to bending forces is known as deflection.
Euler–Bernoulli beam theory
Deformation (engineering)
Finite strain theory
Infinitesimal strain theory
Moiré pattern
Shear modulus
Shear stress
Shear strength
Strain (mechanics)
Stress (mechanics)
Stress measures
== References ==
== Further reading ==
Bazant, Zdenek P.; Cedolin, Luigi (2010). Three-Dimensional Continuum Instabilities and Effects of Finite Strain Tensor, chapter 11 in "Stability of Structures", 3rd ed. Singapore, New Jersey, London: World Scientific Publishing. ISBN 978-9814317030.
Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 0-8493-9779-0.
Hutter, Kolumban; Jöhnk, Klaus (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 3-540-20619-1.
Jirasek, M; Bazant, Z.P. (2002). Inelastic Analysis of Structures. London and New York: J. Wiley & Sons. ISBN 0471987166.
Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 0-8493-1138-1.
Macosko, C. W. (1994). Rheology: principles, measurement and applications. VCH Publishers. ISBN 1-56081-579-5.
Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 0-07-040663-4.
Mase, G. Thomas; Mase, George E. (1999). Continuum Mechanics for Engineers (2nd ed.). CRC Press. ISBN 0-8493-1855-6.
Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 0-521-83979-3.
Prager, William (1961). Introduction to Mechanics of Continua. Boston: Ginn and Co. ISBN 0486438090. | Wikipedia/Elongation_(mechanics)
Viscoplasticity is a theory in continuum mechanics that describes the rate-dependent inelastic behavior of solids. Rate-dependence in this context means that the deformation of the material depends on the rate at which loads are applied. The inelastic behavior that is the subject of viscoplasticity is plastic deformation which means that the material undergoes unrecoverable deformations when a load level is reached. Rate-dependent plasticity is important for transient plasticity calculations. The main difference between rate-independent plastic and viscoplastic material models is that the latter exhibit not only permanent deformations after the application of loads but continue to undergo a creep flow as a function of time under the influence of the applied load.
The elastic response of viscoplastic materials can be represented in one-dimension by Hookean spring elements. Rate-dependence can be represented by nonlinear dashpot elements in a manner similar to viscoelasticity. Plasticity can be accounted for by adding sliding frictional elements as shown in Figure 1. In the figure
E is the modulus of elasticity, λ is the viscosity parameter and N is a power-law type parameter that represents the non-linear dashpot [σ(dε/dt) = σ = λ(dε/dt)^(1/N)]. The sliding element can have a yield stress (σy) that is strain rate dependent, or even constant, as shown in Figure 1c.
Viscoplasticity is usually modeled in three-dimensions using overstress models of the Perzyna or Duvaut-Lions types. In these models, the stress is allowed to increase beyond the rate-independent yield surface upon application of a load and then allowed to relax back to the yield surface over time. The yield surface is usually assumed not to be rate-dependent in such models. An alternative approach is to add a strain rate dependence to the yield stress and use the techniques of rate independent plasticity to calculate the response of a material.
For metals and alloys, viscoplasticity is the macroscopic behavior caused by a mechanism linked to the movement of dislocations in grains, with superposed effects of inter-crystalline gliding. The mechanism usually becomes dominant at temperatures greater than approximately one third of the absolute melting temperature. However, certain alloys exhibit viscoplasticity at room temperature (300 K). For polymers, wood, and bitumen, the theory of viscoplasticity is required to describe behavior beyond the limit of elasticity or viscoelasticity.
In general, viscoplasticity theories are useful in areas such as:
the calculation of permanent deformations,
the prediction of the plastic collapse of structures,
the investigation of stability,
crash simulations,
systems exposed to high temperatures such as turbines in engines, e.g. a power plant,
dynamic problems and systems exposed to high strain rates.
== History ==
Research on plasticity theories started in 1864 with the work of Henri Tresca, Saint Venant (1870) and Levy (1871) on the maximum shear criterion. An improved plasticity model was presented in 1913 by von Mises, which is now referred to as the von Mises yield criterion. In viscoplasticity, the development of a mathematical model dates back to 1910 with the representation of primary creep by Andrade's law. In 1929, Norton developed a one-dimensional dashpot model which linked the rate of secondary creep to the stress. In 1934, Odqvist generalized Norton's law to the multi-axial case.
Concepts such as the normality of plastic flow to the yield surface and flow rules for plasticity were introduced by Prandtl (1924) and Reuss (1930). In 1932, Hohenemser and Prager proposed the first model for slow viscoplastic flow. This model provided a relation between the deviatoric stress and the strain rate for an incompressible Bingham solid. However, the application of these theories did not begin before 1950, when limit theorems were discovered.
In 1960, the first IUTAM Symposium "Creep in Structures" organized by Hoff provided a major development in viscoplasticity with the works of Hoff, Rabotnov, Perzyna, Hult, and Lemaitre for the isotropic hardening laws, and those of Kratochvil, Malinini and Khadjinsky, Ponter and Leckie, and Chaboche for the kinematic hardening laws. Perzyna, in 1963, introduced a viscosity coefficient that is temperature and time dependent. The formulated models were supported by the thermodynamics of irreversible processes and the phenomenological standpoint. The ideas presented in these works have been the basis for most subsequent research into rate-dependent plasticity.
== Phenomenology ==
For a qualitative analysis, several characteristic tests are performed to describe the phenomenology of viscoplastic materials. Some examples of these tests are
hardening tests at constant stress or strain rate,
creep tests at constant force, and
stress relaxation at constant elongation.
=== Strain hardening test ===
One consequence of yielding is that as plastic deformation proceeds, an increase in stress is required to produce additional strain. This phenomenon is known as strain (or work) hardening. For a viscoplastic material the hardening curves are not significantly different from those of a rate-independent plastic material. Nevertheless, three essential differences can be observed.
At the same strain, the higher the rate of strain, the higher the stress.
A change in the rate of strain during the test results in an immediate change in the stress–strain curve.
The concept of a plastic yield limit is no longer strictly applicable.
The hypothesis of partitioning the strains by decoupling the elastic and plastic parts is still applicable where the strains are small, i.e.,
{\displaystyle {\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}_{\mathrm {e} }+{\boldsymbol {\varepsilon }}_{\mathrm {vp} }}
where εe is the elastic strain and εvp is the viscoplastic strain. To obtain the stress–strain behavior shown in blue in the figure, the material is initially loaded at a strain rate of 0.1/s. The strain rate is then instantaneously raised to 100/s and held constant at that value for some time. At the end of that time period the strain rate is dropped instantaneously back to 0.1/s and the cycle is continued for increasing values of strain. There is clearly a lag between the strain-rate change and the stress response. This lag is modeled quite accurately by overstress models (such as the Perzyna model) but not by models of rate-independent plasticity that have a rate-dependent yield stress.
=== Creep test ===
Creep is the tendency of a solid material to slowly move or deform permanently under constant stresses. Creep tests measure the strain response due to a constant stress as shown in Figure 3. The classical creep curve represents the evolution of strain as a function of time in a material subjected to uniaxial stress at a constant temperature. The creep test, for instance, is performed by applying a constant force/stress and analyzing the strain response of the system. In general, as shown in Figure 3b this curve usually shows three phases or periods of behavior:
A primary creep stage, also known as transient creep, is the starting stage during which hardening of the material leads to a decrease in the rate of flow, which is initially very high (0 ≤ ε ≤ ε1).
The secondary creep stage, also known as the steady state, is where the strain rate is constant (ε1 ≤ ε ≤ ε2).
A tertiary creep phase in which there is an increase in the strain rate up to the fracture strain (ε2 ≤ ε ≤ εR).
=== Relaxation test ===
As shown in Figure 4, the relaxation test is defined as the stress response due to a constant strain for a period of time. In viscoplastic materials, relaxation tests demonstrate the stress relaxation in uniaxial loading at a constant strain. In fact, these tests characterize the viscosity and can be used to determine the relation which exists between the stress and the rate of viscoplastic strain. The decomposition of strain rate is
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}}{\mathrm {d} t}}={\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {e} }}{\mathrm {d} t}}+{\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}{\mathrm {d} t}}~.}
The elastic part of the strain rate is given by
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {e} }}{\mathrm {d} t}}={\mathsf {E}}^{-1}~{\cfrac {\mathrm {d} {\boldsymbol {\sigma }}}{\mathrm {d} t}}}
For the flat region of the strain–time curve, the total strain rate is zero. Hence we have,
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}{\mathrm {d} t}}=-{\mathsf {E}}^{-1}~{\cfrac {\mathrm {d} {\boldsymbol {\sigma }}}{\mathrm {d} t}}}
Therefore, the relaxation curve can be used to determine rate of viscoplastic strain and hence the viscosity of the dashpot in a one-dimensional viscoplastic material model. The residual value that is reached when the stress has plateaued at the end of a relaxation test corresponds to the upper limit of elasticity. For some materials such as rock salt such an upper limit of elasticity occurs at a very small value of stress and relaxation tests can be continued for more than a year without any observable plateau in the stress.
It is important to note that relaxation tests are extremely difficult to perform because maintaining the condition dε/dt = 0 in a test requires considerable delicacy.
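As a sketch of how a relaxation curve is post-processed, the snippet below differentiates an assumed (synthetic, exponentially relaxing) stress history at constant total strain and uses dεvp/dt = −E⁻¹ dσ/dt to recover the viscoplastic strain rate; the modulus and the stress history are illustrative assumptions, not measured data.
```python
import numpy as np

E = 200e9                                   # assumed Young's modulus (Pa)
t = np.linspace(0.0, 100.0, 1001)           # time (s)
sigma = 150e6 + 100e6 * np.exp(-t / 20.0)   # synthetic relaxing stress history (Pa)

sigma_dot = np.gradient(sigma, t)           # finite-difference estimate of dσ/dt
eps_vp_dot = -sigma_dot / E                 # viscoplastic strain rate at constant total strain

print(eps_vp_dot[:3])                       # largest at the start, decays towards zero
```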
== Rheological models of viscoplasticity ==
One-dimensional constitutive models for viscoplasticity based on spring-dashpot-slider elements include the perfectly viscoplastic solid, the elastic perfectly viscoplastic solid, and the elastoviscoplastic hardening solid. The elements may be connected in series or in parallel. In models where the elements are connected in series the strain is additive while the stress is equal in each element. In parallel connections, the stress is additive while the strain is equal in each element. Many of these one-dimensional models can be generalized to three dimensions for the small strain regime. In the subsequent discussion, the time rates of strain and stress are written as ε̇ and σ̇, respectively.
=== Perfectly viscoplastic solid (Norton-Hoff model) ===
In a perfectly viscoplastic solid, also called the Norton-Hoff model of viscoplasticity, the stress (as for viscous fluids) is a function of the rate of permanent strain. The effect of elasticity is neglected in the model, i.e., εe = 0, and hence there is no initial yield stress, i.e., σy = 0. The viscous dashpot has a response given by
{\displaystyle {\boldsymbol {\sigma }}=\eta ~{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }\implies {\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\cfrac {\boldsymbol {\sigma }}{\eta }}}
where η is the viscosity of the dashpot. In the Norton-Hoff model the viscosity η is a nonlinear function of the applied stress and is given by
{\displaystyle \eta =\lambda \left[{\cfrac {\lambda }{||{\boldsymbol {\sigma }}||}}\right]^{N-1}}
where N is a fitting parameter, λ is the kinematic viscosity of the material, and ||σ|| = √(σ:σ) = √(σijσij). Then the viscoplastic strain rate is given by the relation
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\cfrac {\boldsymbol {\sigma }}{\lambda }}\left[{\cfrac {||{\boldsymbol {\sigma }}||}{\lambda }}\right]^{N-1}}
In one-dimensional form, the Norton-Hoff model can be expressed as
{\displaystyle \sigma =\lambda ~\left({\dot {\varepsilon }}_{\mathrm {vp} }\right)^{1/N}}
When N = 1.0 the solid is viscoelastic.
If we assume that plastic flow is isochoric (volume preserving), then the above relation can be expressed in the more familiar form
{\displaystyle {\boldsymbol {s}}=2K~\left({\sqrt {3}}{\dot {\varepsilon }}_{\mathrm {eq} }\right)^{m-1}~{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }}
where s is the deviatoric stress tensor, ε̇eq is the von Mises equivalent strain rate, and K, m are material parameters. The equivalent strain rate is defined as
{\displaystyle {\dot {\varepsilon }}_{\mathrm {eq} }={\sqrt {{\tfrac {2}{3}}\,{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }:{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }}}}
These models can be applied in metals and alloys at temperatures higher than two thirds of their absolute melting point (in kelvins) and polymers/asphalt at elevated temperature. The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 6.
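A minimal one-dimensional sketch of the Norton-Hoff relation σ = λ (ε̇vp)^(1/N), with assumed (uncalibrated) material parameters, is given below; inverting the relation gives the creep rate produced by a prescribed stress.
```python
lam = 300.0e6   # assumed viscosity-like parameter (Pa·s^(1/N))
N = 5.0         # assumed rate-sensitivity exponent

def norton_hoff_stress(eps_vp_rate):
    """Stress carried by the nonlinear dashpot for a given viscoplastic strain rate."""
    return lam * eps_vp_rate ** (1.0 / N)

def norton_hoff_creep_rate(sigma):
    """Inverse relation: viscoplastic strain rate produced by a constant stress."""
    return (sigma / lam) ** N

sigma = norton_hoff_stress(1.0e-4)
print(sigma, norton_hoff_creep_rate(sigma))   # inverse recovers the rate 1.0e-4
```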
=== Elastic perfectly viscoplastic solid (Bingham–Norton model) ===
Two types of elementary approaches can be used to build up an elastic-perfectly viscoplastic model. In the first situation, the sliding friction element and the dashpot are arranged in parallel and then connected in series to the elastic spring as shown in Figure 7. This model is called the Bingham–Maxwell model (by analogy with the Maxwell model and the Bingham model) or the Bingham–Norton model. In the second situation, all three elements are arranged in parallel. Such a model is called a Bingham–Kelvin model by analogy with the Kelvin model.
For elastic-perfectly viscoplastic materials, the elastic strain is no longer considered negligible but the rate of plastic strain is only a function of the initial yield stress and there is no influence of hardening. The sliding element represents a constant yielding stress when the elastic limit is exceeded irrespective of the strain. The model can be expressed as
{\displaystyle {\begin{aligned}&{\boldsymbol {\sigma }}={\mathsf {E}}~{\boldsymbol {\varepsilon }}&&\mathrm {for} ~\|{\boldsymbol {\sigma }}\|<\sigma _{y}\\&{\dot {\boldsymbol {\varepsilon }}}={\dot {\boldsymbol {\varepsilon }}}_{\mathrm {e} }+{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+{\cfrac {\boldsymbol {\sigma }}{\eta }}\left[1-{\cfrac {\sigma _{y}}{\|{\boldsymbol {\sigma }}\|}}\right]&&\mathrm {for} ~\|{\boldsymbol {\sigma }}\|\geq \sigma _{y}\end{aligned}}}
where η is the viscosity of the dashpot element. If the dashpot element has a response that is of the Norton form
{\displaystyle {\cfrac {\boldsymbol {\sigma }}{\eta }}={\cfrac {\boldsymbol {\sigma }}{\lambda }}\left[{\cfrac {\|{\boldsymbol {\sigma }}\|}{\lambda }}\right]^{N-1}}
we get the Bingham–Norton model
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+{\cfrac {\boldsymbol {\sigma }}{\lambda }}\left[{\cfrac {\|{\boldsymbol {\sigma }}\|}{\lambda }}\right]^{N-1}\left[1-{\cfrac {\sigma _{y}}{\|{\boldsymbol {\sigma }}\|}}\right]\quad \mathrm {for} ~\|{\boldsymbol {\sigma }}\|\geq \sigma _{y}}
Other expressions for the strain rate can also be observed in the literature with the general form
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+f({\boldsymbol {\sigma }},\sigma _{y})~{\boldsymbol {\sigma }}\quad \mathrm {for} ~\|{\boldsymbol {\sigma }}\|\geq \sigma _{y}}
The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 8.
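The sketch below integrates the one-dimensional Bingham–Norton strain rate (elastic rate plus overstress-driven viscoplastic rate) under a constant applied stress with an explicit Euler scheme; all parameter values are assumed for illustration and are not a calibrated material set.
```python
E, lam, N, sigma_y = 200e9, 1.0e9, 3.0, 100e6   # assumed material parameters
sigma = 150e6                                    # constant applied stress above yield (Pa)
dt, n_steps = 0.1, 1000                          # time step (s) and number of steps

def strain_rate(sig, sig_dot):
    """1D Bingham-Norton: elastic rate plus viscoplastic rate when |σ| >= σ_y."""
    rate = sig_dot / E
    if abs(sig) >= sigma_y:
        rate += (sig / lam) * (abs(sig) / lam) ** (N - 1) * (1.0 - sigma_y / abs(sig))
    return rate

eps = 0.0
for _ in range(n_steps):
    eps += strain_rate(sigma, 0.0) * dt          # constant stress, so σ̇ = 0
print(eps)                                       # accumulated creep strain after 100 s
```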
=== Elastoviscoplastic hardening solid ===
An elastic-viscoplastic material with strain hardening is described by equations similar to those for an elastic-viscoplastic material with perfect plasticity. However, in this case the stress depends both on the plastic strain rate and on the plastic strain itself. For an elastoviscoplastic material the stress, after exceeding the yield stress, continues to increase beyond the initial yielding point. This implies that the yield stress in the sliding element increases with strain and the model may be expressed in generic terms as
{\displaystyle {\begin{aligned}&{\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}_{\mathrm {e} }={\mathsf {E}}^{-1}~{\boldsymbol {\sigma }}=~{\boldsymbol {\varepsilon }}&&\mathrm {for} ~||{\boldsymbol {\sigma }}||<\sigma _{y}\\&{\dot {\boldsymbol {\varepsilon }}}={\dot {\boldsymbol {\varepsilon }}}_{\mathrm {e} }+{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+f({\boldsymbol {\sigma }},\sigma _{y},{\boldsymbol {\varepsilon }}_{\mathrm {vp} })~{\boldsymbol {\sigma }}&&\mathrm {for} ~||{\boldsymbol {\sigma }}||\geq \sigma _{y}\end{aligned}}}
This model is adopted when metals and alloys are at medium and higher temperatures and wood under high loads. The responses for strain hardening, creep, and relaxation tests of such a material are shown in Figure 9.
== Strain-rate dependent plasticity models ==
Classical phenomenological viscoplasticity models for small strains are usually categorized into two types:
the Perzyna formulation
the Duvaut–Lions formulation
=== Perzyna formulation ===
In the Perzyna formulation the plastic strain rate is assumed to be given by a constitutive relation of the form
{\displaystyle {\dot {\varepsilon }}_{\mathrm {vp} }={\cfrac {\left\langle f({\boldsymbol {\sigma }},{\boldsymbol {q}})\right\rangle }{\tau }}{\cfrac {\partial f}{\partial {\boldsymbol {\sigma }}}}={\begin{cases}{\cfrac {f({\boldsymbol {\sigma }},{\boldsymbol {q}})}{\tau }}{\cfrac {\partial f}{\partial {\boldsymbol {\sigma }}}}&{\rm {if}}~f({\boldsymbol {\sigma }},{\boldsymbol {q}})>0\\0&{\rm {otherwise}}\\\end{cases}}}
where f(·,·) is a yield function, σ is the Cauchy stress, q is a set of internal variables (such as the plastic strain εvp), and τ is a relaxation time. The notation ⟨…⟩ denotes the Macaulay brackets. The flow rule used in various versions of the Chaboche model is a special case of Perzyna's flow rule and has the form
{\displaystyle {\dot {\varepsilon }}_{\mathrm {vp} }=\left\langle {\frac {f}{f_{0}}}\right\rangle ^{n}sign({\boldsymbol {\sigma }}-{\boldsymbol {\chi }})}
where f0 is the quasistatic value of f and χ is a backstress. Several models for the backstress also go by the name Chaboche model.
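A one-dimensional sketch of the Perzyna flow rule is shown below. It assumes a normalized yield function f = |σ|/σy − 1 and treats ∂f/∂σ as the direction sign(σ); both are common simplifications introduced here for illustration, and the parameter values are placeholders.
```python
import numpy as np

sigma_y, tau = 100.0e6, 10.0    # assumed yield stress (Pa) and relaxation time (s)

def macaulay(x):
    """Macaulay bracket <x>: x if x > 0, else 0."""
    return max(x, 0.0)

def perzyna_rate(sigma):
    """1D Perzyna viscoplastic rate with the normalized yield function f = |σ|/σ_y - 1."""
    f = abs(sigma) / sigma_y - 1.0
    return macaulay(f) / tau * np.sign(sigma)   # flow direction taken as sign(σ)

print(perzyna_rate(80.0e6))     # 0.0: stress inside the elastic domain, no flow
print(perzyna_rate(150.0e6))    # overstress above the yield surface drives flow
```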
=== Duvaut–Lions formulation ===
The Duvaut–Lions formulation is equivalent to the Perzyna formulation and may be expressed as
{\displaystyle {\dot {\varepsilon }}_{\mathrm {vp} }={\begin{cases}{\mathsf {C}}^{-1}:{\cfrac {{\boldsymbol {\sigma }}-{\mathcal {P}}{\boldsymbol {\sigma }}}{\tau }}&{\rm {{if}~f({\boldsymbol {\sigma }},{\boldsymbol {q}})>0}}\\0&{\rm {otherwise}}\end{cases}}}
where C is the elastic stiffness tensor, and Pσ is the closest point projection of the stress state on to the boundary of the region that bounds all possible elastic stress states. The quantity Pσ is typically found from the rate-independent solution to a plasticity problem.
=== Flow stress models ===
The quantity f(σ, q) represents the evolution of the yield surface. The yield function f is often expressed as an equation consisting of some invariant of stress and a model for the yield stress (or plastic flow stress). An example is von Mises or J2 plasticity. In those situations the plastic strain rate is calculated in the same manner as in rate-independent plasticity. In other situations, the yield stress model provides a direct means of computing the plastic strain rate.
Numerous empirical and semi-empirical flow stress models are used in computational plasticity. The following temperature- and strain-rate-dependent models provide a sampling of the models in current use:
the Johnson–Cook model
the Steinberg–Cochran–Guinan–Lund model.
the Zerilli–Armstrong model.
the Mechanical threshold stress model.
the Preston–Tonks–Wallace model.
The Johnson–Cook (JC) model is purely empirical and is the most widely used of the five. However, this model exhibits an unrealistically small strain-rate dependence at high temperatures. The Steinberg–Cochran–Guinan–Lund (SCGL) model is semi-empirical. The model is purely empirical and strain-rate independent at high strain-rates. A dislocation-based extension is used at low strain-rates. The SCGL model is used extensively by the shock physics community. The Zerilli–Armstrong (ZA) model is a simple physically based model that has been used extensively. A more complex model that is based on ideas from dislocation dynamics is the Mechanical Threshold Stress (MTS) model. This model has been used to model the plastic deformation of copper, tantalum, alloys of steel, and aluminum alloys. However, the MTS model is limited to strain-rates less than around 10⁷/s. The Preston–Tonks–Wallace (PTW) model is also physically based and has a form similar to the MTS model. However, the PTW model has components that can model plastic deformation in the overdriven shock regime (strain-rates greater than 10⁷/s). Hence this model is valid for the largest range of strain-rates among the five flow stress models.
==== Johnson–Cook flow stress model ====
The Johnson–Cook (JC) model is purely empirical and gives the following relation for the flow stress (σy)
{\displaystyle {\text{(1)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon _{\rm {p}}}},T)=\left[A+B(\varepsilon _{\rm {p}})^{n}\right]\left[1+C\ln({\dot {\varepsilon _{\rm {p}}}}^{*})\right]\left[1-(T^{*})^{m}\right]}
where εp is the equivalent plastic strain, ε̇p is the plastic strain-rate, and A, B, C, n, m are material constants.
The normalized strain-rate and temperature in equation (1) are defined as
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}^{*}:={\cfrac {\dot {\varepsilon _{\rm {p}}}}{\dot {\varepsilon _{\rm {p0}}}}}\qquad {\text{and}}\qquad T^{*}:={\cfrac {(T-T_{0})}{(T_{m}-T_{0})}}}
where ε̇p0 is the effective plastic strain-rate of the quasi-static test used to determine the yield and hardening parameters A, B and n. This is not, as is often thought, just a parameter to make ε̇p* non-dimensional. T0 is a reference temperature, and Tm is a reference melt temperature. For conditions where T* < 0, we assume that m = 1.
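A direct transcription of equation (1) is sketched below; the parameter values are illustrative placeholders, not a published fit for any particular material.
```python
import numpy as np

# Illustrative Johnson-Cook parameters (placeholders, not a calibrated set).
A, B, C, n, m = 200e6, 500e6, 0.02, 0.3, 1.0
eps_p0_dot = 1.0          # reference (quasi-static) plastic strain rate, 1/s
T0, Tm = 293.0, 1800.0    # reference and melt temperatures, K

def johnson_cook(eps_p, eps_p_dot, T):
    """Flow stress σ_y(ε_p, ε̇_p, T) of equation (1)."""
    T_star = (T - T0) / (Tm - T0)
    m_eff = 1.0 if T_star < 0.0 else m          # the text assumes m = 1 when T* < 0
    return ((A + B * eps_p**n)
            * (1.0 + C * np.log(eps_p_dot / eps_p0_dot))
            * (1.0 - T_star**m_eff))

print(johnson_cook(eps_p=0.1, eps_p_dot=100.0, T=600.0))
```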
==== Steinberg–Cochran–Guinan–Lund flow stress model ====
The Steinberg–Cochran–Guinan–Lund (SCGL) model is a semi-empirical model that was developed by Steinberg et al. for high strain-rate situations and extended to low strain-rates and bcc materials by Steinberg and Lund. The flow stress in this model is given by
{\displaystyle {\text{(2)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon _{\rm {p}}}},T)=\left[\sigma _{a}f(\varepsilon _{\rm {p}})+\sigma _{t}({\dot {\varepsilon _{\rm {p}}}},T)\right]{\frac {\mu (p,T)}{\mu _{0}}};\quad \sigma _{a}f\leq \sigma _{\text{max}}~~{\text{and}}~~\sigma _{t}\leq \sigma _{p}}
where σa is the athermal component of the flow stress, f(εp) is a function that represents strain hardening, σt is the thermally activated component of the flow stress, μ(p,T) is the pressure- and temperature-dependent shear modulus, and μ0 is the shear modulus at standard temperature and pressure. The saturation value of the athermal stress is σmax. The saturation of the thermally activated stress is the Peierls stress (σp). The shear modulus for this model is usually computed with the Steinberg–Cochran–Guinan shear modulus model.
The strain hardening function (f) has the form
{\displaystyle f(\varepsilon _{\rm {p}})=[1+\beta (\varepsilon _{\rm {p}}+\varepsilon _{\rm {p}}i)]^{n}}
where β, n are work hardening parameters, and εpi is the initial equivalent plastic strain.
The thermal component (σt) is computed using a bisection algorithm from the following equation.
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}=\left[{\frac {1}{C_{1}}}\exp \left[{\frac {2U_{k}}{k_{b}~T}}\left(1-{\frac {\sigma _{t}}{\sigma _{p}}}\right)^{2}\right]+{\frac {C_{2}}{\sigma _{t}}}\right]^{-1};\quad \sigma _{t}\leq \sigma _{p}}
where 2Uk is the energy to form a kink-pair in a dislocation segment of length Ld, kb is the Boltzmann constant, and σp is the Peierls stress. The constants C1, C2 are given by the relations
{\displaystyle C_{1}:={\frac {\rho _{d}L_{d}ab^{2}\nu }{2w^{2}}};\quad C_{2}:={\frac {D}{\rho _{d}b^{2}}}}
where ρd is the dislocation density, Ld is the length of a dislocation segment, a is the distance between Peierls valleys, b is the magnitude of the Burgers vector, ν is the Debye frequency, w is the width of a kink loop, and D is the drag coefficient.
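Because σt appears implicitly in the kinetic equation, it has to be solved for numerically; a simple bisection sketch is shown below. All constants are illustrative placeholders chosen only so that the iteration runs, not values for any real material.
```python
import numpy as np

# Illustrative constants for the thermally activated term (not a calibrated set).
C1, C2 = 0.71e6, 0.012e6      # units 1/s and Pa·s, respectively (assumed)
two_Uk = 0.31 * 1.602e-19     # kink-pair energy 2U_k in J (0.31 eV, assumed)
kb = 1.380649e-23             # Boltzmann constant, J/K
sigma_p = 20e6                # Peierls stress, Pa (assumed)

def strain_rate(sigma_t, T):
    """Right-hand side of the SCGL kinetic equation for a trial σ_t."""
    thermal = np.exp(two_Uk / (kb * T) * (1.0 - sigma_t / sigma_p) ** 2) / C1
    return 1.0 / (thermal + C2 / sigma_t)

def solve_sigma_t(eps_p_dot, T, tol=1.0):
    """Bisection for σ_t in (0, σ_p] such that strain_rate(σ_t, T) = ε̇_p."""
    lo, hi = 1.0, sigma_p
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if strain_rate(mid, T) < eps_p_dot:
            lo = mid              # rate too low -> thermal stress must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(solve_sigma_t(eps_p_dot=1.0e3, T=300.0))
```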
==== Zerilli–Armstrong flow stress model ====
The Zerilli–Armstrong (ZA) model is based on simplified dislocation mechanics. The general form of the equation for the flow stress is
{\displaystyle {\text{(3)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon _{\rm {p}}}},T)=\sigma _{a}+B\exp(-\beta T)+B_{0}{\sqrt {\varepsilon _{\rm {p}}}}\exp(-\alpha T)~.}
In this model, σa is the athermal component of the flow stress given by
{\displaystyle \sigma _{a}:=\sigma _{g}+{\frac {k_{h}}{\sqrt {l}}}+K\varepsilon _{\rm {p}}^{n},}
where σg is the contribution due to solutes and initial dislocation density, kh is the microstructural stress intensity, l is the average grain diameter, K is zero for fcc materials, and B, B0 are material constants.
In the thermally activated terms, the functional forms of the exponents α and β are
{\displaystyle \alpha =\alpha _{0}-\alpha _{1}\ln({\dot {\varepsilon _{\rm {p}}}});\quad \beta =\beta _{0}-\beta _{1}\ln({\dot {\varepsilon _{\rm {p}}}});}
where α0, α1, β0, β1 are material parameters that depend on the type of material (fcc, bcc, hcp, alloys). The Zerilli–Armstrong model has been modified for better performance at high temperatures.
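Equation (3), together with the expressions for σa, α and β, translates directly into a short function; the constants below are illustrative placeholders for a bcc-like material, not a published parameter set.
```python
import numpy as np

# Illustrative Zerilli-Armstrong constants (placeholders, not a published fit).
sigma_g, k_h, grain_d = 50e6, 0.2e6, 50e-6        # Pa, Pa·m^0.5, m
K_fcc, n = 0.0, 0.5                               # K = 0 for fcc materials
B, B0 = 1.0e9, 0.8e9                              # Pa
beta0, beta1 = 0.008, 0.0004                      # 1/K
alpha0, alpha1 = 0.004, 0.0003                    # 1/K

def zerilli_armstrong(eps_p, eps_p_dot, T):
    """Flow stress σ_y of equation (3)."""
    sigma_a = sigma_g + k_h / np.sqrt(grain_d) + K_fcc * eps_p**n
    alpha = alpha0 - alpha1 * np.log(eps_p_dot)
    beta = beta0 - beta1 * np.log(eps_p_dot)
    return sigma_a + B * np.exp(-beta * T) + B0 * np.sqrt(eps_p) * np.exp(-alpha * T)

print(zerilli_armstrong(eps_p=0.05, eps_p_dot=1.0e3, T=300.0))
```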
==== Mechanical threshold stress flow stress model ====
The Mechanical Threshold Stress (MTS) model has the form
{\displaystyle {\text{(4)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon }},T)=\sigma _{a}+(S_{i}\sigma _{i}+S_{e}\sigma _{e}){\frac {\mu (p,T)}{\mu _{0}}}}
where σa is the athermal component of mechanical threshold stress, σi is the component of the flow stress due to intrinsic barriers to thermally activated dislocation motion and dislocation-dislocation interactions, σe is the component of the flow stress due to microstructural evolution with increasing deformation (strain hardening), (Si, Se) are temperature and strain-rate dependent scaling factors, and μ0 is the shear modulus at 0 K and ambient pressure.
The scaling factors take the Arrhenius form
{\displaystyle {\begin{aligned}S_{i}&=\left[1-\left({\frac {k_{b}~T}{g_{0i}b^{3}\mu (p,T)}}\ln {\frac {\dot {\varepsilon _{\rm {0}}}}{\dot {\varepsilon }}}\right)^{1/q_{i}}\right]^{1/p_{i}}\\S_{e}&=\left[1-\left({\frac {k_{b}~T}{g_{0e}b^{3}\mu (p,T)}}\ln {\frac {\dot {\varepsilon _{\rm {0}}}}{\dot {\varepsilon }}}\right)^{1/q_{e}}\right]^{1/p_{e}}\end{aligned}}}
where kb is the Boltzmann constant, b is the magnitude of the Burgers' vector, (g0i, g0e) are normalized activation energies, (ε̇, ε̇0) are the strain-rate and reference strain-rate, and (qi, pi, qe, pe) are constants.
The strain hardening component of the mechanical threshold stress (σe) is given by an empirical modified Voce law
{\displaystyle {\text{(5)}}\qquad {\frac {d\sigma _{e}}{d\varepsilon _{\rm {p}}}}=\theta (\sigma _{e})}
where
{\displaystyle {\begin{aligned}\theta (\sigma _{e})&=\theta _{0}[1-F(\sigma _{e})]+\theta _{IV}F(\sigma _{e})\\\theta _{0}&=a_{0}+a_{1}\ln {\dot {\varepsilon _{\rm {p}}}}+a_{2}{\sqrt {\dot {\varepsilon _{\rm {p}}}}}-a_{3}T\\F(\sigma _{e})&={\cfrac {\tanh \left(\alpha {\cfrac {\sigma _{e}}{\sigma _{es}}}\right)}{\tanh(\alpha )}}\\\ln({\cfrac {\sigma _{es}}{\sigma _{0es}}})&=\left({\frac {kT}{g_{0es}b^{3}\mu (p,T)}}\right)\ln \left({\cfrac {\dot {\varepsilon _{\rm {p}}}}{\dot {\varepsilon _{\rm {p}}}}}\right)\end{aligned}}}
and θ0 is the hardening due to dislocation accumulation, θIV is the contribution due to stage-IV hardening, (a0, a1, a2, a3, α) are constants, σes is the stress at zero strain hardening rate, σ0es is the saturation threshold stress for deformation at 0 K, g0es is a constant, and ε̇p is the maximum strain-rate. Note that the maximum strain-rate is usually limited to about 10⁷/s.
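The Arrhenius-type scaling factors Si and Se can be evaluated directly. The sketch below uses placeholder normalized activation energies and exponents, and replaces the full μ(p,T) shear modulus model with an assumed constant value, so it only illustrates the functional form.
```python
import numpy as np

kb = 1.380649e-23                    # Boltzmann constant, J/K
b = 2.5e-10                          # Burgers vector magnitude, m (assumed)
mu = 45.0e9                          # shear modulus, Pa (assumed constant here)
eps0_dot = 1.0e7                     # reference strain rate, 1/s (assumed)

def mts_scaling(T, eps_dot, g0, q, p):
    """Arrhenius scaling factor S(T, ε̇) used for both S_i and S_e."""
    arg = (kb * T / (g0 * b**3 * mu)) * np.log(eps0_dot / eps_dot)
    return (1.0 - arg ** (1.0 / q)) ** (1.0 / p)

# Placeholder normalized activation energies and exponents.
S_i = mts_scaling(T=300.0, eps_dot=1.0e3, g0=1.6, q=1.5, p=0.5)
S_e = mts_scaling(T=300.0, eps_dot=1.0e3, g0=1.6, q=1.0, p=0.667)
print(S_i, S_e)
```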
==== Preston–Tonks–Wallace flow stress model ====
The Preston–Tonks–Wallace (PTW) model attempts to provide a model for the flow stress for extreme strain-rates (up to 10¹¹/s) and temperatures up to melt. A linear Voce hardening law is used in the model. The PTW flow stress is given by
{\displaystyle {\text{(6)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon _{\rm {p}}}},T)={\begin{cases}2\left[\tau _{s}+\alpha \ln \left[1-\varphi \exp \left(-\beta -{\cfrac {\theta \varepsilon _{\rm {p}}}{\alpha \varphi }}\right)\right]\right]\mu (p,T)&{\text{thermal regime}}\\2\tau _{s}\mu (p,T)&{\text{shock regime}}\end{cases}}}
with
{\displaystyle \alpha :={\frac {s_{0}-\tau _{y}}{d}};\quad \beta :={\frac {\tau _{s}-\tau _{y}}{\alpha }};\quad \varphi :=\exp(\beta )-1}
where τs is a normalized work-hardening saturation stress, s0 is the value of τs at 0 K, τy is a normalized yield stress, θ is the hardening constant in the Voce hardening law, and d is a dimensionless material parameter that modifies the Voce hardening law.
The saturation stress and the yield stress are given by
{\displaystyle {\begin{aligned}\tau _{s}&=\max \left\{s_{0}-(s_{0}-s_{\infty }){\rm {{erf}\left[\kappa {\hat {T}}\ln \left({\cfrac {\gamma {\dot {\xi }}}{\dot {\varepsilon _{\rm {p}}}}}\right)\right],s_{0}\left({\cfrac {\dot {\varepsilon _{\rm {p}}}}{\gamma {\dot {\xi }}}}\right)^{s_{1}}}}\right\}\\\tau _{y}&=\max \left\{y_{0}-(y_{0}-y_{\infty }){\rm {{erf}\left[\kappa {\hat {T}}\ln \left({\cfrac {\gamma {\dot {\xi }}}{\dot {\varepsilon _{\rm {p}}}}}\right)\right],\min \left\{y_{1}\left({\cfrac {\dot {\varepsilon _{\rm {p}}}}{\gamma {\dot {\xi }}}}\right)^{y_{2}},s_{0}\left({\cfrac {\dot {\varepsilon _{\rm {p}}}}{\gamma {\dot {\xi }}}}\right)^{s_{1}}\right\}}}\right\}\end{aligned}}}
where s∞ is the value of τs close to the melt temperature, (y0, y∞) are the values of τy at 0 K and close to melt, respectively, (κ, γ) are material constants, T̂ = T/Tm, (s1, y1, y2) are material parameters for the high strain-rate regime, and
{\displaystyle {\dot {\xi }}={\frac {1}{2}}\left({\cfrac {4\pi \rho }{3M}}\right)^{1/3}\left({\cfrac {\mu (p,T)}{\rho }}\right)^{1/2}}
where ρ is the density and M is the atomic mass.
== See also ==
Viscoelasticity
Bingham plastic
Dashpot
Creep (deformation)
Plasticity (physics)
Continuum mechanics
Quasi-solid
== References == | Wikipedia/Zerilli-Armstrong_plasticity_model |
Viscoplasticity is a theory in continuum mechanics that describes the rate-dependent inelastic behavior of solids. Rate-dependence in this context means that the deformation of the material depends on the rate at which loads are applied. The inelastic behavior that is the subject of viscoplasticity is plastic deformation which means that the material undergoes unrecoverable deformations when a load level is reached. Rate-dependent plasticity is important for transient plasticity calculations. The main difference between rate-independent plastic and viscoplastic material models is that the latter exhibit not only permanent deformations after the application of loads but continue to undergo a creep flow as a function of time under the influence of the applied load.
The elastic response of viscoplastic materials can be represented in one-dimension by Hookean spring elements. Rate-dependence can be represented by nonlinear dashpot elements in a manner similar to viscoelasticity. Plasticity can be accounted for by adding sliding frictional elements as shown in Figure 1. In the figure
E
{\displaystyle E}
is the modulus of elasticity,
λ
{\displaystyle \lambda }
is the viscosity parameter and
N
{\displaystyle N}
is a power-law type parameter that represents non-linear dashpot
[
σ
(
d
ε
/
d
t
)
=
σ
=
λ
(
d
ε
/
d
t
)
1
/
N
]
{\displaystyle [\sigma (\mathrm {d} \varepsilon /\mathrm {d} t)=\sigma =\lambda (\mathrm {d} \varepsilon /\mathrm {d} t)^{1/N}]}
. The sliding element can have a yield stress (
σ
y
{\displaystyle \sigma _{y}}
) that is strain rate dependent, or even constant, as shown in Figure 1c.
Viscoplasticity is usually modeled in three-dimensions using overstress models of the Perzyna or Duvaut-Lions types. In these models, the stress is allowed to increase beyond the rate-independent yield surface upon application of a load and then allowed to relax back to the yield surface over time. The yield surface is usually assumed not to be rate-dependent in such models. An alternative approach is to add a strain rate dependence to the yield stress and use the techniques of rate independent plasticity to calculate the response of a material.
For metals and alloys, viscoplasticity is the macroscopic behavior caused by a mechanism linked to the movement of dislocations in grains, with superposed effects of inter-crystalline gliding. The mechanism usually becomes dominant at temperatures greater than approximately one third of the absolute melting temperature. However, certain alloys exhibit viscoplasticity at room temperature (300 K). For polymers, wood, and bitumen, the theory of viscoplasticity is required to describe behavior beyond the limit of elasticity or viscoelasticity.
In general, viscoplasticity theories are useful in areas such as:
the calculation of permanent deformations,
the prediction of the plastic collapse of structures,
the investigation of stability,
crash simulations,
systems exposed to high temperatures such as turbines in engines, e.g. a power plant,
dynamic problems and systems exposed to high strain rates.
== History ==
Research on plasticity theories started in 1864 with the work of Henri Tresca, Saint Venant (1870) and Levy (1871) on the maximum shear criterion. An improved plasticity model was presented in 1913 by Von Mises which is now referred to as the von Mises yield criterion. In viscoplasticity, the development of a mathematical model heads back to 1910 with the representation of primary creep by Andrade's law. In 1929, Norton developed a one-dimensional dashpot model which linked the rate of secondary creep to the stress. In 1934, Odqvist generalized Norton's law to the multi-axial case.
Concepts such as the normality of plastic flow to the yield surface and flow rules for plasticity were introduced by Prandtl (1924) and Reuss (1930). In 1932, Hohenemser and Prager proposed the first model for slow viscoplastic flow. This model provided a relation between the deviatoric stress and the strain rate for an incompressible Bingham solid However, the application of these theories did not begin before 1950, where limit theorems were discovered.
In 1960, the first IUTAM Symposium "Creep in Structures" organized by Hoff provided a major development in viscoplasticity with the works of Hoff, Rabotnov, Perzyna, Hult, and Lemaitre for the isotropic hardening laws, and those of Kratochvil, Malinini and Khadjinsky, Ponter and Leckie, and Chaboche for the kinematic hardening laws. Perzyna, in 1963, introduced a viscosity coefficient that is temperature and time dependent. The formulated models were supported by the thermodynamics of irreversible processes and the phenomenological standpoint. The ideas presented in these works have been the basis for most subsequent research into rate-dependent plasticity.
== Phenomenology ==
For a qualitative analysis, several characteristic tests are performed to describe the phenomenology of viscoplastic materials. Some examples of these tests are
hardening tests at constant stress or strain rate,
creep tests at constant force, and
stress relaxation at constant elongation.
=== Strain hardening test ===
One consequence of yielding is that as plastic deformation proceeds, an increase in stress is required to produce additional strain. This phenomenon is known as Strain/Work hardening. For a viscoplastic material the hardening curves are not significantly different from those of rate-independent plastic material. Nevertheless, three essential differences can be observed.
At the same strain, the higher the rate of strain the higher the stress
A change in the rate of strain during the test results in an immediate change in the stress–strain curve.
The concept of a plastic yield limit is no longer strictly applicable.
The hypothesis of partitioning the strains by decoupling the elastic and plastic parts is still applicable where the strains are small, i.e.,
ε
=
ε
e
+
ε
v
p
{\displaystyle {\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}_{\mathrm {e} }+{\boldsymbol {\varepsilon }}_{\mathrm {vp} }}
where
ε
e
{\displaystyle {\boldsymbol {\varepsilon }}_{\mathrm {e} }}
is the elastic strain and
ε
v
p
{\displaystyle {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}
is the viscoplastic strain. To obtain the stress–strain behavior shown in blue in the figure, the material is initially loaded at a strain rate of 0.1/s. The strain rate is then instantaneously raised to 100/s and held constant at that value for some time. At the end of that time period the strain rate is dropped instantaneously back to 0.1/s and the cycle is continued for increasing values of strain. There is clearly a lag between the strain-rate change and the stress response. This lag is modeled quite accurately by overstress models (such as the Perzyna model) but not by models of rate-independent plasticity that have a rate-dependent yield stress.
=== Creep test ===
Creep is the tendency of a solid material to slowly move or deform permanently under constant stresses. Creep tests measure the strain response due to a constant stress as shown in Figure 3. The classical creep curve represents the evolution of strain as a function of time in a material subjected to uniaxial stress at a constant temperature. The creep test, for instance, is performed by applying a constant force/stress and analyzing the strain response of the system. In general, as shown in Figure 3b this curve usually shows three phases or periods of behavior:
A primary creep stage, also known as transient creep, is the starting stage during which hardening of the material leads to a decrease in the rate of flow which is initially very high.
{\displaystyle (0\leq {\boldsymbol {\varepsilon }}\leq {\boldsymbol {\varepsilon }}_{1})}
.
The secondary creep stage, also known as the steady state, is where the strain rate is constant.
{\displaystyle ({\boldsymbol {\varepsilon }}_{1}\leq {\boldsymbol {\varepsilon }}\leq {\boldsymbol {\varepsilon }}_{2})}
.
A tertiary creep phase in which there is an increase in the strain rate up to the fracture strain.
{\displaystyle ({\boldsymbol {\varepsilon }}_{2}\leq {\boldsymbol {\varepsilon }}\leq {\boldsymbol {\varepsilon }}_{R})}
.
=== Relaxation test ===
As shown in Figure 4, the relaxation test is defined as the stress response due to a constant strain for a period of time. In viscoplastic materials, relaxation tests demonstrate the stress relaxation in uniaxial loading at a constant strain. In fact, these tests characterize the viscosity and can be used to determine the relation which exists between the stress and the rate of viscoplastic strain. The decomposition of strain rate is
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}}{\mathrm {d} t}}={\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {e} }}{\mathrm {d} t}}+{\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}{\mathrm {d} t}}~.}
The elastic part of the strain rate is given by
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {e} }}{\mathrm {d} t}}={\mathsf {E}}^{-1}~{\cfrac {\mathrm {d} {\boldsymbol {\sigma }}}{\mathrm {d} t}}}
For the flat region of the strain–time curve, the total strain rate is zero. Hence we have,
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}{\mathrm {d} t}}=-{\mathsf {E}}^{-1}~{\cfrac {\mathrm {d} {\boldsymbol {\sigma }}}{\mathrm {d} t}}}
Therefore, the relaxation curve can be used to determine the rate of viscoplastic strain and hence the viscosity of the dashpot in a one-dimensional viscoplastic material model. The residual value that is reached when the stress has plateaued at the end of a relaxation test corresponds to the upper limit of elasticity. For some materials, such as rock salt, this upper limit occurs at a very small value of stress, and relaxation tests can be continued for more than a year without any observable plateau in the stress.
It is important to note that relaxation tests are extremely difficult to perform because maintaining the condition
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}}{\mathrm {d} t}}=0}
in a test requires considerable delicacy.
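As a rough numerical illustration of the relation above, the following Python sketch (hypothetical relaxation data, scalar one-dimensional form, constant modulus E) differentiates a stress relaxation history to estimate the viscoplastic strain rate; the values are placeholders rather than measurements.

```python
import numpy as np

# Hypothetical relaxation data: stress (MPa) decaying in time (s) at constant total strain.
t = np.linspace(0.0, 100.0, 201)             # time, s
sigma = 40.0 + 60.0 * np.exp(-t / 20.0)      # illustrative stress history, MPa
E = 200.0e3                                  # Young's modulus, MPa

# During relaxation d(eps_total)/dt = 0, so d(eps_vp)/dt = -(1/E) d(sigma)/dt.
dsigma_dt = np.gradient(sigma, t)            # numerical time derivative of the stress
deps_vp_dt = -dsigma_dt / E                  # viscoplastic strain rate, 1/s

print(f"viscoplastic strain rate at t = 0 s : {deps_vp_dt[0]:.3e} 1/s")
print(f"viscoplastic strain rate at t = 50 s: {deps_vp_dt[100]:.3e} 1/s")
```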
== Rheological models of viscoplasticity ==
One-dimensional constitutive models for viscoplasticity based on spring-dashpot-slider elements include the perfectly viscoplastic solid, the elastic perfectly viscoplastic solid, and the elastoviscoplastic hardening solid. The elements may be connected in series or in parallel. In models where the elements are connected in series the strain is additive while the stress is equal in each element. In parallel connections, the stress is additive while the strain is equal in each element. Many of these one-dimensional models can be generalized to three dimensions for the small strain regime. In the subsequent discussion, the time rates of strain and stress are written as
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}}
and
{\displaystyle {\dot {\boldsymbol {\sigma }}}}
, respectively.
=== Perfectly viscoplastic solid (Norton-Hoff model) ===
In a perfectly viscoplastic solid, also called the Norton-Hoff model of viscoplasticity, the stress (as for viscous fluids) is a function of the rate of permanent strain. The effect of elasticity is neglected in the model, i.e.,
{\displaystyle {\boldsymbol {\varepsilon }}_{e}=0}
and hence there is no initial yield stress, i.e.,
{\displaystyle \sigma _{y}=0}
. The viscous dashpot has a response given by
{\displaystyle {\boldsymbol {\sigma }}=\eta ~{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }\implies {\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\cfrac {\boldsymbol {\sigma }}{\eta }}}
where
{\displaystyle \eta }
is the viscosity of the dashpot. In the Norton-Hoff model the viscosity
{\displaystyle \eta }
is a nonlinear function of the applied stress and is given by
{\displaystyle \eta =\lambda \left[{\cfrac {\lambda }{||{\boldsymbol {\sigma }}||}}\right]^{N-1}}
where
{\displaystyle N}
is a fitting parameter, λ is the kinematic viscosity of the material and
{\displaystyle ||{\boldsymbol {\sigma }}||={\sqrt {{\boldsymbol {\sigma }}:{\boldsymbol {\sigma }}}}={\sqrt {\sigma _{ij}\sigma _{ij}}}}
. Then the viscoplastic strain rate is given by the relation
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\cfrac {\boldsymbol {\sigma }}{\lambda }}\left[{\cfrac {||{\boldsymbol {\sigma }}||}{\lambda }}\right]^{N-1}}
In one-dimensional form, the Norton-Hoff model can be expressed as
{\displaystyle \sigma =\lambda ~\left({\dot {\varepsilon }}_{\mathrm {vp} }\right)^{1/N}}
When
{\displaystyle N=1.0}
the solid is viscoelastic.
If we assume that plastic flow is isochoric (volume preserving), then the above relation can be expressed in the more familiar form
{\displaystyle {\boldsymbol {s}}=2K~\left({\sqrt {3}}{\dot {\varepsilon }}_{\mathrm {eq} }\right)^{m-1}~{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }}
where
{\displaystyle {\boldsymbol {s}}}
is the deviatoric stress tensor,
{\displaystyle {\dot {\varepsilon }}_{\mathrm {eq} }}
is the von Mises equivalent strain rate, and
{\displaystyle K,m}
are material parameters. The equivalent strain rate is defined as
{\displaystyle {\dot {\bar {\epsilon }}}={\sqrt {{\frac {2}{3}}{\dot {\bar {\bar {\epsilon }}}}:{\dot {\bar {\bar {\epsilon }}}}}}}
These models can be applied in metals and alloys at temperatures higher than two thirds of their absolute melting point (in kelvins) and polymers/asphalt at elevated temperature. The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 6.
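A minimal one-dimensional sketch of the Norton-Hoff relation is given below; the parameter values are illustrative only and would need to be fitted for a real material.

```python
import numpy as np

def norton_hoff_strain_rate(sigma, lam, N):
    """1-D Norton-Hoff viscoplastic strain rate: eps_dot = (sigma/lam)*(|sigma|/lam)**(N-1)."""
    return (sigma / lam) * (abs(sigma) / lam) ** (N - 1)

def norton_hoff_stress(eps_dot, lam, N):
    """Inverse 1-D relation: sigma = lam * eps_dot**(1/N)."""
    return lam * np.sign(eps_dot) * abs(eps_dot) ** (1.0 / N)

# Illustrative parameters (lam in MPa*s**(1/N)); not calibrated to any real material.
lam, N = 150.0, 5.0
for rate in (1e-4, 1e-2, 1e0):
    print(f"eps_dot = {rate:7.1e} 1/s  ->  sigma = {norton_hoff_stress(rate, lam, N):7.2f} MPa")
```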
=== Elastic perfectly viscoplastic solid (Bingham–Norton model) ===
Two types of elementary approaches can be used to build up an elastic-perfectly viscoplastic model. In the first situation, the sliding friction element and the dashpot are arranged in parallel and then connected in series to the elastic spring as shown in Figure 7. This model is called the Bingham–Maxwell model (by analogy with the Maxwell model and the Bingham model) or the Bingham–Norton model. In the second situation, all three elements are arranged in parallel. Such a model is called a Bingham–Kelvin model by analogy with the Kelvin model.
For elastic-perfectly viscoplastic materials, the elastic strain is no longer considered negligible but the rate of plastic strain is only a function of the initial yield stress and there is no influence of hardening. The sliding element represents a constant yielding stress when the elastic limit is exceeded irrespective of the strain. The model can be expressed as
{\displaystyle {\begin{aligned}&{\boldsymbol {\sigma }}={\mathsf {E}}~{\boldsymbol {\varepsilon }}&&\mathrm {for} ~\|{\boldsymbol {\sigma }}\|<\sigma _{y}\\&{\dot {\boldsymbol {\varepsilon }}}={\dot {\boldsymbol {\varepsilon }}}_{\mathrm {e} }+{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+{\cfrac {\boldsymbol {\sigma }}{\eta }}\left[1-{\cfrac {\sigma _{y}}{\|{\boldsymbol {\sigma }}\|}}\right]&&\mathrm {for} ~\|{\boldsymbol {\sigma }}\|\geq \sigma _{y}\end{aligned}}}
where
{\displaystyle \eta }
is the viscosity of the dashpot element. If the dashpot element has a response that is of the Norton form
{\displaystyle {\cfrac {\boldsymbol {\sigma }}{\eta }}={\cfrac {\boldsymbol {\sigma }}{\lambda }}\left[{\cfrac {\|{\boldsymbol {\sigma }}\|}{\lambda }}\right]^{N-1}}
we get the Bingham–Norton model
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+{\cfrac {\boldsymbol {\sigma }}{\lambda }}\left[{\cfrac {\|{\boldsymbol {\sigma }}\|}{\lambda }}\right]^{N-1}\left[1-{\cfrac {\sigma _{y}}{\|{\boldsymbol {\sigma }}\|}}\right]\quad \mathrm {for} ~\|{\boldsymbol {\sigma }}\|\geq \sigma _{y}}
Other expressions for the strain rate can also be observed in the literature with the general form
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+f({\boldsymbol {\sigma }},\sigma _{y})~{\boldsymbol {\sigma }}\quad \mathrm {for} ~\|{\boldsymbol {\sigma }}\|\geq \sigma _{y}}
The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 8.
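For a single stress component, the Bingham–Norton strain-rate decomposition above can be sketched as follows; the numbers are placeholders chosen only to show the elastic/overstress split.

```python
def bingham_norton_strain_rate(sigma, sigma_dot, E, lam, N, sigma_y):
    """Total strain rate of the 1-D Bingham-Norton model: elastic part plus an
    overstress-driven viscoplastic part that is active only above the yield stress."""
    elastic = sigma_dot / E
    if abs(sigma) < sigma_y:
        return elastic
    viscoplastic = (sigma / lam) * (abs(sigma) / lam) ** (N - 1) * (1.0 - sigma_y / abs(sigma))
    return elastic + viscoplastic

# Illustrative values (MPa, 1/s): held stress of 250 MPa above a 200 MPa yield stress.
print(bingham_norton_strain_rate(sigma=250.0, sigma_dot=0.0, E=70.0e3,
                                 lam=400.0, N=4.0, sigma_y=200.0))
```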
=== Elastoviscoplastic hardening solid ===
An elastic-viscoplastic material with strain hardening is described by equations similar to those for an elastic-viscoplastic material with perfect plasticity. However, in this case the stress depends both on the plastic strain rate and on the plastic strain itself. For an elastoviscoplastic material the stress, after exceeding the yield stress, continues to increase beyond the initial yielding point. This implies that the yield stress in the sliding element increases with strain and the model may be expressed in generic terms as
{\displaystyle {\begin{aligned}&{\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}_{\mathrm {e} }={\mathsf {E}}^{-1}~{\boldsymbol {\sigma }}=~{\boldsymbol {\varepsilon }}&&\mathrm {for} ~||{\boldsymbol {\sigma }}||<\sigma _{y}\\&{\dot {\boldsymbol {\varepsilon }}}={\dot {\boldsymbol {\varepsilon }}}_{\mathrm {e} }+{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+f({\boldsymbol {\sigma }},\sigma _{y},{\boldsymbol {\varepsilon }}_{\mathrm {vp} })~{\boldsymbol {\sigma }}&&\mathrm {for} ~||{\boldsymbol {\sigma }}||\geq \sigma _{y}\end{aligned}}}
This model is adopted for metals and alloys at medium and higher temperatures and for wood under high loads. The responses for strain hardening, creep, and relaxation tests of such a material are shown in Figure 9.
== Strain-rate dependent plasticity models ==
Classical phenomenological viscoplasticity models for small strains are usually categorized into two types:
the Perzyna formulation
the Duvaut–Lions formulation
=== Perzyna formulation ===
In the Perzyna formulation the plastic strain rate is assumed to be given by a constitutive relation of the form
{\displaystyle {\dot {\varepsilon }}_{\mathrm {vp} }={\cfrac {\left\langle f({\boldsymbol {\sigma }},{\boldsymbol {q}})\right\rangle }{\tau }}{\cfrac {\partial f}{\partial {\boldsymbol {\sigma }}}}={\begin{cases}{\cfrac {f({\boldsymbol {\sigma }},{\boldsymbol {q}})}{\tau }}{\cfrac {\partial f}{\partial {\boldsymbol {\sigma }}}}&{\rm {if}}~f({\boldsymbol {\sigma }},{\boldsymbol {q}})>0\\0&{\rm {otherwise}}\\\end{cases}}}
where
{\displaystyle f(.,.)}
is a yield function,
{\displaystyle {\boldsymbol {\sigma }}}
is the Cauchy stress,
{\displaystyle {\boldsymbol {q}}}
is a set of internal variables (such as the plastic strain
{\displaystyle {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}
),
{\displaystyle \tau }
is a relaxation time. The notation
{\displaystyle \langle \dots \rangle }
denotes the Macaulay brackets. The flow rule used in various versions of the Chaboche model is a special case of Perzyna's flow rule and has the form
{\displaystyle {\dot {\varepsilon }}_{\mathrm {vp} }=\left\langle {\frac {f}{f_{0}}}\right\rangle ^{n}sign({\boldsymbol {\sigma }}-{\boldsymbol {\chi }})}
where
{\displaystyle f_{0}}
is the quasistatic value of
{\displaystyle f}
and
{\displaystyle {\boldsymbol {\chi }}}
is a backstress. Several models for the backstress also go by the name Chaboche model.
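A scalar sketch of the Perzyna flow rule, assuming the simple yield function f = |σ| − σ_y, is shown below; because f then carries stress units, the constant τ in this raw form absorbs the stress scale, and all numbers are illustrative.

```python
import numpy as np

def macaulay(x):
    """Macaulay bracket <x> = max(x, 0)."""
    return max(x, 0.0)

def perzyna_strain_rate(sigma, sigma_y, tau):
    """1-D Perzyna overstress flow rule with f = |sigma| - sigma_y, so df/dsigma = sign(sigma).
    With f carrying stress units, tau here has units of stress*time (e.g. MPa*s)."""
    f = abs(sigma) - sigma_y
    return macaulay(f) / tau * np.sign(sigma)

# Illustrative parameters: yield stress 200 MPa, tau = 2000 MPa*s.
for s in (150.0, 200.0, 260.0, -260.0):
    print(f"sigma = {s:+7.1f} MPa  ->  eps_dot_vp = {perzyna_strain_rate(s, 200.0, 2000.0):+.4f} 1/s")
```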
=== Duvaut–Lions formulation ===
The Duvaut–Lions formulation is equivalent to the Perzyna formulation and may be expressed as
{\displaystyle {\dot {\varepsilon }}_{\mathrm {vp} }={\begin{cases}{\mathsf {C}}^{-1}:{\cfrac {{\boldsymbol {\sigma }}-{\mathcal {P}}{\boldsymbol {\sigma }}}{\tau }}&{\rm {{if}~f({\boldsymbol {\sigma }},{\boldsymbol {q}})>0}}\\0&{\rm {otherwise}}\end{cases}}}
where
{\displaystyle {\mathsf {C}}}
is the elastic stiffness tensor,
{\displaystyle {\mathcal {P}}{\boldsymbol {\sigma }}}
is the closest point projection of the stress state on to the boundary of the region that bounds all possible elastic stress states. The quantity
{\displaystyle {\mathcal {P}}{\boldsymbol {\sigma }}}
is typically found from the rate-independent solution to a plasticity problem.
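In one dimension the closest-point projection is simply a clip of the stress onto the elastic interval, so a minimal Duvaut–Lions sketch (illustrative numbers, scalar form with the stiffness tensor reduced to E) looks like this:

```python
def duvaut_lions_strain_rate(sigma, sigma_y, E, tau):
    """1-D Duvaut-Lions viscoplastic strain rate. For a single stress component the
    closest-point projection onto the elastic domain [-sigma_y, sigma_y] is a clip,
    and the elastic stiffness tensor C reduces to the Young's modulus E."""
    sigma_proj = max(-sigma_y, min(sigma_y, sigma))   # closest admissible stress state
    return (sigma - sigma_proj) / (E * tau)

# Illustrative numbers: 50 MPa of overstress relaxed with tau = 0.5 s.
print(duvaut_lions_strain_rate(sigma=250.0, sigma_y=200.0, E=70.0e3, tau=0.5))
```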
=== Flow stress models ===
The quantity
{\displaystyle f({\boldsymbol {\sigma }},{\boldsymbol {q}})}
represents the evolution of the yield surface. The yield function
{\displaystyle f}
is often expressed as an equation consisting of some invariant of stress and a model for the yield stress (or plastic flow stress). An example is von Mises or
{\displaystyle J_{2}}
plasticity. In those situations the plastic strain rate is calculated in the same manner as in rate-independent plasticity. In other situations, the yield stress model provides a direct means of computing the plastic strain rate.
Numerous empirical and semi-empirical flow stress models are used in computational plasticity. The following temperature and strain-rate dependent models provide a sampling of the models in current use:
the Johnson–Cook model
the Steinberg–Cochran–Guinan–Lund model.
the Zerilli–Armstrong model.
the Mechanical threshold stress model.
the Preston–Tonks–Wallace model.
The Johnson–Cook (JC) model is purely empirical and is the most widely used of the five. However, this model exhibits an unrealistically small strain-rate dependence at high temperatures. The Steinberg–Cochran–Guinan–Lund (SCGL) model is semi-empirical: it is purely empirical and strain-rate independent at high strain-rates, while a dislocation-based extension is used at low strain-rates. The SCGL model is used extensively by the shock physics community. The Zerilli–Armstrong (ZA) model is a simple physically based model that has been used extensively. A more complex model that is based on ideas from dislocation dynamics is the Mechanical Threshold Stress (MTS) model. This model has been used to model the plastic deformation of copper, tantalum, alloys of steel, and aluminum alloys. However, the MTS model is limited to strain-rates less than around 10^7/s. The Preston–Tonks–Wallace (PTW) model is also physically based and has a form similar to the MTS model. However, the PTW model has components that can model plastic deformation in the overdriven shock regime (strain-rates greater than 10^7/s). Hence this model is valid for the largest range of strain-rates among the five flow stress models.
==== Johnson–Cook flow stress model ====
The Johnson–Cook (JC) model is purely empirical and gives the following relation for the flow stress (
{\displaystyle \sigma _{y}}
)
{\displaystyle {\text{(1)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon _{\rm {p}}}},T)=\left[A+B(\varepsilon _{\rm {p}})^{n}\right]\left[1+C\ln({\dot {\varepsilon _{\rm {p}}}}^{*})\right]\left[1-(T^{*})^{m}\right]}
where
{\displaystyle \varepsilon _{\rm {p}}}
is the equivalent plastic strain,
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}}
is the
plastic strain-rate, and
{\displaystyle A,B,C,n,m}
are material constants.
The normalized strain-rate and temperature in equation (1) are defined as
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}^{*}:={\cfrac {\dot {\varepsilon _{\rm {p}}}}{\dot {\varepsilon _{\rm {p0}}}}}\qquad {\text{and}}\qquad T^{*}:={\cfrac {(T-T_{0})}{(T_{m}-T_{0})}}}
where
{\displaystyle {\dot {\varepsilon _{\rm {p0}}}}}
is the effective plastic strain-rate of the quasi-static test used to determine the yield and hardening parameters A, B and n. This is not, as is often thought, merely a parameter to make
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}^{*}}
non-dimensional.
{\displaystyle T_{0}}
is a reference temperature, and
{\displaystyle T_{m}}
is a reference melt temperature. For conditions where
{\displaystyle T^{*}<0}
, we assume that
{\displaystyle m=1}
.
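Equation (1) is straightforward to evaluate; the sketch below implements it directly, using the m = 1 convention for T* < 0 noted above. The constants in the example are placeholders in a plausible range for a metal, not a published calibration.

```python
import math

def johnson_cook_flow_stress(eps_p, eps_p_dot, T, A, B, C, n, m, eps_p0_dot, T0, Tm):
    """Johnson-Cook flow stress, equation (1)."""
    rate_star = eps_p_dot / eps_p0_dot
    T_star = (T - T0) / (Tm - T0)
    thermal = 1.0 - T_star if T_star < 0.0 else 1.0 - T_star ** m   # m = 1 when T* < 0
    return (A + B * eps_p ** n) * (1.0 + C * math.log(rate_star)) * thermal

# Placeholder constants (MPa, K); a real application needs fitted values.
sigma_y = johnson_cook_flow_stress(eps_p=0.1, eps_p_dot=1.0e3, T=600.0,
                                   A=350.0, B=275.0, C=0.022, n=0.36, m=1.0,
                                   eps_p0_dot=1.0, T0=293.0, Tm=1800.0)
print(f"flow stress ≈ {sigma_y:.1f} MPa")
```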
==== Steinberg–Cochran–Guinan–Lund flow stress model ====
The Steinberg–Cochran–Guinan–Lund (SCGL) model is a semi-empirical model that was developed by Steinberg et al. for high strain-rate situations and extended to low strain-rates and bcc materials by Steinberg and Lund. The flow stress in this model is given by
{\displaystyle {\text{(2)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon _{\rm {p}}}},T)=\left[\sigma _{a}f(\varepsilon _{\rm {p}})+\sigma _{t}({\dot {\varepsilon _{\rm {p}}}},T)\right]{\frac {\mu (p,T)}{\mu _{0}}};\quad \sigma _{a}f\leq \sigma _{\text{max}}~~{\text{and}}~~\sigma _{t}\leq \sigma _{p}}
where
{\displaystyle \sigma _{a}}
is the athermal component of the flow stress,
{\displaystyle f(\varepsilon _{\rm {p}})}
is a function that represents strain hardening,
{\displaystyle \sigma _{t}}
is the thermally activated component of the flow stress,
{\displaystyle \mu (p,T)}
is the pressure- and temperature-dependent shear modulus, and
{\displaystyle \mu _{0}}
is the shear modulus at standard temperature and pressure. The saturation value of the athermal stress is
{\displaystyle \sigma _{\text{max}}}
. The saturation of the thermally activated stress is the Peierls stress (
{\displaystyle \sigma _{p}}
). The shear modulus for this model is usually computed with the Steinberg–Cochran–Guinan shear modulus model.
The strain hardening function (
{\displaystyle f}
) has the form
{\displaystyle f(\varepsilon _{\rm {p}})=[1+\beta (\varepsilon _{\rm {p}}+\varepsilon _{\rm {p}}i)]^{n}}
where
{\displaystyle \beta ,n}
are work hardening parameters, and
{\displaystyle \varepsilon _{\rm {p}}i}
is the initial equivalent plastic strain.
The thermal component (
{\displaystyle \sigma _{t}}
) is computed using a bisection algorithm from the following equation.
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}=\left[{\frac {1}{C_{1}}}\exp \left[{\frac {2U_{k}}{k_{b}~T}}\left(1-{\frac {\sigma _{t}}{\sigma _{p}}}\right)^{2}\right]+{\frac {C_{2}}{\sigma _{t}}}\right]^{-1};\quad \sigma _{t}\leq \sigma _{p}}
where
{\displaystyle 2U_{k}}
is the energy to form a kink-pair in a dislocation segment of length
{\displaystyle L_{d}}
,
{\displaystyle k_{b}}
is the Boltzmann constant,
{\displaystyle \sigma _{p}}
is the Peierls stress. The constants
{\displaystyle C_{1},C_{2}}
are given by the relations
{\displaystyle C_{1}:={\frac {\rho _{d}L_{d}ab^{2}\nu }{2w^{2}}};\quad C_{2}:={\frac {D}{\rho _{d}b^{2}}}}
where
{\displaystyle \rho _{d}}
is the dislocation density,
{\displaystyle L_{d}}
is the length of a dislocation segment,
{\displaystyle a}
is the distance between Peierls valleys,
{\displaystyle b}
is the magnitude of the Burgers vector,
{\displaystyle \nu }
is the Debye frequency,
{\displaystyle w}
is the width of a kink loop, and
{\displaystyle D}
is the drag coefficient.
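The kinetic equation above is monotonic in the thermal stress, so it can be bracketed and bisected as sketched below; every numerical value in the example is a placeholder, not a fitted SCGL parameter set.

```python
import math

def scgl_rate(sigma_t, C1, C2, two_Uk_over_kT, sigma_p):
    """Plastic strain rate predicted by the SCGL kinetic equation for a trial thermal stress."""
    return 1.0 / ((1.0 / C1) * math.exp(two_Uk_over_kT * (1.0 - sigma_t / sigma_p) ** 2)
                  + C2 / sigma_t)

def solve_thermal_stress(eps_p_dot, C1, C2, two_Uk_over_kT, sigma_p, tol=1e-10):
    """Bisection for sigma_t in (0, sigma_p]: the predicted rate increases with sigma_t,
    so the bracket is narrowed by comparing the predicted rate with the target rate."""
    lo, hi = 1e-9 * sigma_p, sigma_p
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if scgl_rate(mid, C1, C2, two_Uk_over_kT, sigma_p) < eps_p_dot:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Purely illustrative numbers (MPa, 1/s).
sigma_t = solve_thermal_stress(eps_p_dot=1.0, C1=0.71e6, C2=0.012,
                               two_Uk_over_kT=12.0, sigma_p=1000.0)
print(f"thermally activated stress ≈ {sigma_t:.1f} MPa")
```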
==== Zerilli–Armstrong flow stress model ====
The Zerilli–Armstrong (ZA) model is based on simplified dislocation mechanics. The general form of the equation for the flow stress is
{\displaystyle {\text{(3)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon _{\rm {p}}}},T)=\sigma _{a}+B\exp(-\beta T)+B_{0}{\sqrt {\varepsilon _{\rm {p}}}}\exp(-\alpha T)~.}
In this model,
{\displaystyle \sigma _{a}}
is the athermal component of the flow stress given by
{\displaystyle \sigma _{a}:=\sigma _{g}+{\frac {k_{h}}{\sqrt {l}}}+K\varepsilon _{\rm {p}}^{n},}
where
{\displaystyle \sigma _{g}}
is the contribution due to solutes and initial dislocation density,
{\displaystyle k_{h}}
is the microstructural stress intensity,
{\displaystyle l}
is the average grain diameter,
{\displaystyle K}
is zero for fcc materials,
{\displaystyle B,B_{0}}
are material constants.
In the thermally activated terms, the functional forms of the exponents
{\displaystyle \alpha }
and
{\displaystyle \beta }
are
{\displaystyle \alpha =\alpha _{0}-\alpha _{1}\ln({\dot {\varepsilon _{\rm {p}}}});\quad \beta =\beta _{0}-\beta _{1}\ln({\dot {\varepsilon _{\rm {p}}}});}
where
{\displaystyle \alpha _{0},\alpha _{1},\beta _{0},\beta _{1}}
are material parameters that depend on the type of material (fcc, bcc, hcp, alloys). The Zerilli–Armstrong model has been modified for better performance at high temperatures.
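Equation (3) and the definitions above translate directly into code; the sketch below uses keyword arguments so that the fcc/bcc special cases (for example K = 0 for fcc) can be expressed by zeroing the relevant constants. The example values are bcc-like placeholders, not a fitted parameter set.

```python
import math

def zerilli_armstrong_flow_stress(eps_p, eps_p_dot, T,
                                  sigma_g, k_h, l, K, n,
                                  B, beta0, beta1, B0, alpha0, alpha1):
    """Zerilli-Armstrong flow stress, equation (3), with the athermal part and the
    strain-rate-dependent exponents alpha and beta defined as in the text."""
    sigma_a = sigma_g + k_h / math.sqrt(l) + K * eps_p ** n
    beta = beta0 - beta1 * math.log(eps_p_dot)
    alpha = alpha0 - alpha1 * math.log(eps_p_dot)
    return sigma_a + B * math.exp(-beta * T) + B0 * math.sqrt(eps_p) * math.exp(-alpha * T)

# bcc-like placeholder constants (MPa, K, mm); not a published calibration.
print(zerilli_armstrong_flow_stress(eps_p=0.05, eps_p_dot=1.0e3, T=300.0,
                                    sigma_g=50.0, k_h=5.0, l=0.07, K=300.0, n=0.5,
                                    B=1000.0, beta0=0.006, beta1=0.0004,
                                    B0=0.0, alpha0=0.0, alpha1=0.0))
```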
==== Mechanical threshold stress flow stress model ====
The Mechanical Threshold Stress (MTS) model has the form
{\displaystyle {\text{(4)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon }},T)=\sigma _{a}+(S_{i}\sigma _{i}+S_{e}\sigma _{e}){\frac {\mu (p,T)}{\mu _{0}}}}
where
{\displaystyle \sigma _{a}}
is the athermal component of mechanical threshold stress,
{\displaystyle \sigma _{i}}
is the component of the flow stress due to intrinsic barriers to thermally activated dislocation motion and dislocation-dislocation interactions,
{\displaystyle \sigma _{e}}
is the component of the flow stress due to microstructural evolution with increasing deformation (strain hardening), (
{\displaystyle S_{i},S_{e}}
) are temperature and strain-rate dependent scaling factors, and
{\displaystyle \mu _{0}}
is the shear modulus at 0 K and ambient pressure.
The scaling factors take the Arrhenius form
{\displaystyle {\begin{aligned}S_{i}&=\left[1-\left({\frac {k_{b}~T}{g_{0i}b^{3}\mu (p,T)}}\ln {\frac {\dot {\varepsilon _{\rm {0}}}}{\dot {\varepsilon }}}\right)^{1/q_{i}}\right]^{1/p_{i}}\\S_{e}&=\left[1-\left({\frac {k_{b}~T}{g_{0e}b^{3}\mu (p,T)}}\ln {\frac {\dot {\varepsilon _{\rm {0}}}}{\dot {\varepsilon }}}\right)^{1/q_{e}}\right]^{1/p_{e}}\end{aligned}}}
where
{\displaystyle k_{b}}
is the Boltzmann constant,
{\displaystyle b}
is the magnitude of the Burgers' vector, (
{\displaystyle g_{0i},g_{0e}}
) are normalized activation energies, (
{\displaystyle {\dot {\varepsilon }},{\dot {\varepsilon _{\rm {0}}}}}
) are the strain-rate and reference strain-rate, and (
{\displaystyle q_{i},p_{i},q_{e},p_{e}}
) are constants.
The strain hardening component of the mechanical threshold stress (
{\displaystyle \sigma _{e}}
) is given by an empirical modified Voce law
{\displaystyle {\text{(5)}}\qquad {\frac {d\sigma _{e}}{d\varepsilon _{\rm {p}}}}=\theta (\sigma _{e})}
where
{\displaystyle {\begin{aligned}\theta (\sigma _{e})&=\theta _{0}[1-F(\sigma _{e})]+\theta _{IV}F(\sigma _{e})\\\theta _{0}&=a_{0}+a_{1}\ln {\dot {\varepsilon _{\rm {p}}}}+a_{2}{\sqrt {\dot {\varepsilon _{\rm {p}}}}}-a_{3}T\\F(\sigma _{e})&={\cfrac {\tanh \left(\alpha {\cfrac {\sigma _{e}}{\sigma _{es}}}\right)}{\tanh(\alpha )}}\\\ln({\cfrac {\sigma _{es}}{\sigma _{0es}}})&=\left({\frac {kT}{g_{0es}b^{3}\mu (p,T)}}\right)\ln \left({\cfrac {\dot {\varepsilon _{\rm {p}}}}{\dot {\varepsilon _{\rm {p}}}}}\right)\end{aligned}}}
and
{\displaystyle \theta _{0}}
is the hardening due to dislocation accumulation,
{\displaystyle \theta _{IV}}
is the contribution due to stage-IV hardening, (
{\displaystyle a_{0},a_{1},a_{2},a_{3},\alpha }
) are constants,
{\displaystyle \sigma _{es}}
is the stress at zero strain hardening rate,
{\displaystyle \sigma _{0es}}
is the saturation threshold stress for deformation at 0 K,
{\displaystyle g_{0es}}
is a constant, and
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}}
is the maximum strain-rate. Note that the maximum strain-rate is usually limited to about
{\displaystyle 10^{7}}
/s.
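The Arrhenius-type scaling factors are the part of the MTS model most often implemented first; a minimal sketch for one factor is given below, with copper-like but uncalibrated numbers.

```python
import math

def mts_scaling_factor(T, mu, g0, b, eps_dot, eps0_dot, p, q, k_b=1.380649e-23):
    """Scaling factor S_i or S_e of the MTS model: mu is the shear modulus (Pa),
    b the Burgers vector magnitude (m), g0 a normalized activation energy."""
    x = (k_b * T) / (g0 * b ** 3 * mu) * math.log(eps0_dot / eps_dot)
    return (1.0 - x ** (1.0 / q)) ** (1.0 / p)

# Copper-like placeholder values; not a calibrated MTS parameter set.
S_i = mts_scaling_factor(T=300.0, mu=4.5e10, g0=1.6, b=2.56e-10,
                         eps_dot=1.0e3, eps0_dot=1.0e7, p=0.5, q=1.5)
print(f"S_i ≈ {S_i:.3f}")
```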
==== Preston–Tonks–Wallace flow stress model ====
The Preston–Tonks–Wallace (PTW) model attempts to provide a model for the flow stress for extreme strain-rates (up to 10^11/s) and temperatures up to melt. A linear Voce hardening law is used in the model. The PTW flow stress is given by
{\displaystyle {\text{(6)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon _{\rm {p}}}},T)={\begin{cases}2\left[\tau _{s}+\alpha \ln \left[1-\varphi \exp \left(-\beta -{\cfrac {\theta \varepsilon _{\rm {p}}}{\alpha \varphi }}\right)\right]\right]\mu (p,T)&{\text{thermal regime}}\\2\tau _{s}\mu (p,T)&{\text{shock regime}}\end{cases}}}
with
{\displaystyle \alpha :={\frac {s_{0}-\tau _{y}}{d}};\quad \beta :={\frac {\tau _{s}-\tau _{y}}{\alpha }};\quad \varphi :=\exp(\beta )-1}
where
{\displaystyle \tau _{s}}
is a normalized work-hardening saturation stress,
{\displaystyle s_{0}}
is the value of
{\displaystyle \tau _{s}}
at 0K,
{\displaystyle \tau _{y}}
is a normalized yield stress,
{\displaystyle \theta }
is the hardening constant in the Voce hardening law, and
{\displaystyle d}
is a dimensionless material parameter that modifies the Voce hardening law.
The saturation stress and the yield stress are given by
{\displaystyle {\begin{aligned}\tau _{s}&=\max \left\{s_{0}-(s_{0}-s_{\infty }){\rm {{erf}\left[\kappa {\hat {T}}\ln \left({\cfrac {\gamma {\dot {\xi }}}{\dot {\varepsilon _{\rm {p}}}}}\right)\right],s_{0}\left({\cfrac {\dot {\varepsilon _{\rm {p}}}}{\gamma {\dot {\xi }}}}\right)^{s_{1}}}}\right\}\\\tau _{y}&=\max \left\{y_{0}-(y_{0}-y_{\infty }){\rm {{erf}\left[\kappa {\hat {T}}\ln \left({\cfrac {\gamma {\dot {\xi }}}{\dot {\varepsilon _{\rm {p}}}}}\right)\right],\min \left\{y_{1}\left({\cfrac {\dot {\varepsilon _{\rm {p}}}}{\gamma {\dot {\xi }}}}\right)^{y_{2}},s_{0}\left({\cfrac {\dot {\varepsilon _{\rm {p}}}}{\gamma {\dot {\xi }}}}\right)^{s_{1}}\right\}}}\right\}\end{aligned}}}
where
{\displaystyle s_{\infty }}
is the value of
{\displaystyle \tau _{s}}
close to the melt temperature, (
{\displaystyle y_{0},y_{\infty }}
) are the values of
{\displaystyle \tau _{y}}
at 0 K and close to melt, respectively,
{\displaystyle (\kappa ,\gamma )}
are material constants,
{\displaystyle {\hat {T}}=T/T_{m}}
, (
{\displaystyle s_{1},y_{1},y_{2}}
) are material parameters for the high strain-rate regime, and
{\displaystyle {\dot {\xi }}={\frac {1}{2}}\left({\cfrac {4\pi \rho }{3M}}\right)^{1/3}\left({\cfrac {\mu (p,T)}{\rho }}\right)^{1/2}}
where
{\displaystyle \rho }
is the density, and
{\displaystyle M}
is the atomic mass.
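As a small sketch of how the two branches of the PTW saturation stress compete, the function below evaluates τ_s for a given normalized temperature and strain rate; the parameter values are placeholders, and published PTW parameter sets should be used in practice.

```python
import math

def ptw_saturation_stress(T_hat, eps_p_dot, s0, s_inf, kappa, gamma, xi_dot, s1):
    """Normalized work-hardening saturation stress tau_s of the PTW model: the first
    branch covers the thermally activated regime, the second the overdriven shock
    regime, and the max selects whichever governs."""
    thermal = s0 - (s0 - s_inf) * math.erf(kappa * T_hat * math.log(gamma * xi_dot / eps_p_dot))
    shock = s0 * (eps_p_dot / (gamma * xi_dot)) ** s1
    return max(thermal, shock)

# Placeholder values loosely in the range used for metals.
print(ptw_saturation_stress(T_hat=0.3, eps_p_dot=1.0e4, s0=0.0085, s_inf=0.00055,
                            kappa=0.11, gamma=1.0e-5, xi_dot=1.0e12, s1=0.25))
```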
== See also ==
Viscoelasticity
Bingham plastic
Dashpot
Creep (deformation)
Plasticity (physics)
Continuum mechanics
Quasi-solid
== References == | Wikipedia/Steinberg-Guinan_plasticity_model |
Hankinson's equation (also called Hankinson's formula or Hankinson's criterion) is a mathematical relationship for predicting the off-axis uniaxial compressive strength of wood. The formula can also be used to compute the fiber stress or the stress wave velocity at the elastic limit as a function of grain angle in wood. For a wood that has uniaxial compressive strengths of
{\displaystyle \sigma _{0}}
parallel to the grain and
{\displaystyle \sigma _{90}}
perpendicular to the grain, Hankinson's equation predicts that the uniaxial compressive strength of the wood in a direction at an angle
{\displaystyle \alpha }
to the grain is given by
{\displaystyle \sigma _{\alpha }={\cfrac {\sigma _{0}~\sigma _{90}}{\sigma _{0}~\sin ^{2}\alpha +\sigma _{90}~\cos ^{2}\alpha }}}
Even though the original relation was based on studies of spruce, Hankinson's equation has been found to be remarkably accurate for many other types of wood. A generalized form of the Hankinson formula has also been used for predicting the uniaxial tensile strength of wood at an angle to the grain. This formula has the form
{\displaystyle \sigma _{\alpha }={\cfrac {\sigma _{0}~\sigma _{90}}{\sigma _{0}~\sin ^{n}\alpha +\sigma _{90}~\cos ^{n}\alpha }}}
where the exponent
{\displaystyle n}
can take values between 1.5 and 2.
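Both forms of the formula are easy to evaluate; the short sketch below implements the generalized version (n = 2 recovers the original compressive-strength relation) with illustrative strength values.

```python
import math

def hankinson_strength(sigma_0, sigma_90, alpha_deg, n=2.0):
    """Off-axis strength from the (generalized) Hankinson formula; n is typically
    between 1.5 and 2, and n = 2 gives the original compressive-strength form."""
    a = math.radians(alpha_deg)
    return (sigma_0 * sigma_90) / (sigma_0 * math.sin(a) ** n + sigma_90 * math.cos(a) ** n)

# Illustrative values: 40 MPa parallel and 4 MPa perpendicular to the grain.
for angle in (0, 15, 30, 45, 90):
    print(f"{angle:3d} deg -> {hankinson_strength(40.0, 4.0, angle):5.1f} MPa")
```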
The stress wave velocity at angle
{\displaystyle \alpha }
to the grain at the elastic limit can similarly be obtained from the Hankinson formula
{\displaystyle V(\alpha )={\frac {V_{0}V_{90}}{V_{0}\sin ^{2}\alpha +V_{90}\cos ^{2}\alpha }}}
where
{\displaystyle V_{0}}
is the velocity parallel to the grain,
{\displaystyle V_{90}}
is the velocity perpendicular to the grain and
{\displaystyle \alpha }
is the grain angle.
== See also ==
Material failure theory
Linear elasticity
Hooke's law
Orthotropic material
Transverse isotropy
== References == | Wikipedia/Hankinson's_equation |
Viscoplasticity is a theory in continuum mechanics that describes the rate-dependent inelastic behavior of solids. Rate-dependence in this context means that the deformation of the material depends on the rate at which loads are applied. The inelastic behavior that is the subject of viscoplasticity is plastic deformation which means that the material undergoes unrecoverable deformations when a load level is reached. Rate-dependent plasticity is important for transient plasticity calculations. The main difference between rate-independent plastic and viscoplastic material models is that the latter exhibit not only permanent deformations after the application of loads but continue to undergo a creep flow as a function of time under the influence of the applied load.
The elastic response of viscoplastic materials can be represented in one-dimension by Hookean spring elements. Rate-dependence can be represented by nonlinear dashpot elements in a manner similar to viscoelasticity. Plasticity can be accounted for by adding sliding frictional elements as shown in Figure 1. In the figure
{\displaystyle E}
is the modulus of elasticity,
{\displaystyle \lambda }
is the viscosity parameter and
{\displaystyle N}
is a power-law-type parameter that represents the non-linear dashpot
{\displaystyle [\sigma (\mathrm {d} \varepsilon /\mathrm {d} t)=\sigma =\lambda (\mathrm {d} \varepsilon /\mathrm {d} t)^{1/N}]}
. The sliding element can have a yield stress (
{\displaystyle \sigma _{y}}
) that is strain rate dependent, or even constant, as shown in Figure 1c.
Viscoplasticity is usually modeled in three-dimensions using overstress models of the Perzyna or Duvaut-Lions types. In these models, the stress is allowed to increase beyond the rate-independent yield surface upon application of a load and then allowed to relax back to the yield surface over time. The yield surface is usually assumed not to be rate-dependent in such models. An alternative approach is to add a strain rate dependence to the yield stress and use the techniques of rate independent plasticity to calculate the response of a material.
For metals and alloys, viscoplasticity is the macroscopic behavior caused by a mechanism linked to the movement of dislocations in grains, with superposed effects of inter-crystalline gliding. The mechanism usually becomes dominant at temperatures greater than approximately one third of the absolute melting temperature. However, certain alloys exhibit viscoplasticity at room temperature (300 K). For polymers, wood, and bitumen, the theory of viscoplasticity is required to describe behavior beyond the limit of elasticity or viscoelasticity.
In general, viscoplasticity theories are useful in areas such as:
the calculation of permanent deformations,
the prediction of the plastic collapse of structures,
the investigation of stability,
crash simulations,
systems exposed to high temperatures such as turbines in engines, e.g. a power plant,
dynamic problems and systems exposed to high strain rates.
== History ==
Research on plasticity theories started in 1864 with the work of Henri Tresca, Saint Venant (1870) and Levy (1871) on the maximum shear criterion. An improved plasticity model was presented in 1913 by von Mises, which is now referred to as the von Mises yield criterion. In viscoplasticity, the development of a mathematical model dates back to 1910 with the representation of primary creep by Andrade's law. In 1929, Norton developed a one-dimensional dashpot model which linked the rate of secondary creep to the stress. In 1934, Odqvist generalized Norton's law to the multi-axial case.
Concepts such as the normality of plastic flow to the yield surface and flow rules for plasticity were introduced by Prandtl (1924) and Reuss (1930). In 1932, Hohenemser and Prager proposed the first model for slow viscoplastic flow. This model provided a relation between the deviatoric stress and the strain rate for an incompressible Bingham solid However, the application of these theories did not begin before 1950, where limit theorems were discovered.
In 1960, the first IUTAM Symposium "Creep in Structures" organized by Hoff provided a major development in viscoplasticity with the works of Hoff, Rabotnov, Perzyna, Hult, and Lemaitre for the isotropic hardening laws, and those of Kratochvil, Malinini and Khadjinsky, Ponter and Leckie, and Chaboche for the kinematic hardening laws. Perzyna, in 1963, introduced a viscosity coefficient that is temperature and time dependent. The formulated models were supported by the thermodynamics of irreversible processes and the phenomenological standpoint. The ideas presented in these works have been the basis for most subsequent research into rate-dependent plasticity.
== Phenomenology ==
For a qualitative analysis, several characteristic tests are performed to describe the phenomenology of viscoplastic materials. Some examples of these tests are
hardening tests at constant stress or strain rate,
creep tests at constant force, and
stress relaxation at constant elongation.
=== Strain hardening test ===
One consequence of yielding is that as plastic deformation proceeds, an increase in stress is required to produce additional strain. This phenomenon is known as Strain/Work hardening. For a viscoplastic material the hardening curves are not significantly different from those of rate-independent plastic material. Nevertheless, three essential differences can be observed.
At the same strain, the higher the rate of strain the higher the stress
A change in the rate of strain during the test results in an immediate change in the stress–strain curve.
The concept of a plastic yield limit is no longer strictly applicable.
The hypothesis of partitioning the strains by decoupling the elastic and plastic parts is still applicable where the strains are small, i.e.,
ε
=
ε
e
+
ε
v
p
{\displaystyle {\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}_{\mathrm {e} }+{\boldsymbol {\varepsilon }}_{\mathrm {vp} }}
where
ε
e
{\displaystyle {\boldsymbol {\varepsilon }}_{\mathrm {e} }}
is the elastic strain and
ε
v
p
{\displaystyle {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}
is the viscoplastic strain. To obtain the stress–strain behavior shown in blue in the figure, the material is initially loaded at a strain rate of 0.1/s. The strain rate is then instantaneously raised to 100/s and held constant at that value for some time. At the end of that time period the strain rate is dropped instantaneously back to 0.1/s and the cycle is continued for increasing values of strain. There is clearly a lag between the strain-rate change and the stress response. This lag is modeled quite accurately by overstress models (such as the Perzyna model) but not by models of rate-independent plasticity that have a rate-dependent yield stress.
=== Creep test ===
Creep is the tendency of a solid material to slowly move or deform permanently under constant stresses. Creep tests measure the strain response due to a constant stress as shown in Figure 3. The classical creep curve represents the evolution of strain as a function of time in a material subjected to uniaxial stress at a constant temperature. The creep test, for instance, is performed by applying a constant force/stress and analyzing the strain response of the system. In general, as shown in Figure 3b this curve usually shows three phases or periods of behavior:
A primary creep stage, also known as transient creep, is the starting stage during which hardening of the material leads to a decrease in the rate of flow which is initially very high.
(
0
≤
ε
≤
ε
1
)
{\displaystyle (0\leq {\boldsymbol {\varepsilon }}\leq {\boldsymbol {\varepsilon }}_{1})}
.
The secondary creep stage, also known as the steady state, is where the strain rate is constant.
(
ε
1
≤
ε
≤
ε
2
)
{\displaystyle ({\boldsymbol {\varepsilon }}_{1}\leq {\boldsymbol {\varepsilon }}\leq {\boldsymbol {\varepsilon }}_{2})}
.
A tertiary creep phase in which there is an increase in the strain rate up to the fracture strain.
(
ε
2
≤
ε
≤
ε
R
)
{\displaystyle ({\boldsymbol {\varepsilon }}_{2}\leq {\boldsymbol {\varepsilon }}\leq {\boldsymbol {\varepsilon }}_{R})}
.
=== Relaxation test ===
As shown in Figure 4, the relaxation test is defined as the stress response due to a constant strain for a period of time. In viscoplastic materials, relaxation tests demonstrate the stress relaxation in uniaxial loading at a constant strain. In fact, these tests characterize the viscosity and can be used to determine the relation which exists between the stress and the rate of viscoplastic strain. The decomposition of strain rate is
d
ε
d
t
=
d
ε
e
d
t
+
d
ε
v
p
d
t
.
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}}{\mathrm {d} t}}={\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {e} }}{\mathrm {d} t}}+{\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}{\mathrm {d} t}}~.}
The elastic part of the strain rate is given by
d
ε
e
d
t
=
E
−
1
d
σ
d
t
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {e} }}{\mathrm {d} t}}={\mathsf {E}}^{-1}~{\cfrac {\mathrm {d} {\boldsymbol {\sigma }}}{\mathrm {d} t}}}
For the flat region of the strain–time curve, the total strain rate is zero. Hence we have,
d
ε
v
p
d
t
=
−
E
−
1
d
σ
d
t
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}{\mathrm {d} t}}=-{\mathsf {E}}^{-1}~{\cfrac {\mathrm {d} {\boldsymbol {\sigma }}}{\mathrm {d} t}}}
Therefore, the relaxation curve can be used to determine rate of viscoplastic strain and hence the viscosity of the dashpot in a one-dimensional viscoplastic material model. The residual value that is reached when the stress has plateaued at the end of a relaxation test corresponds to the upper limit of elasticity. For some materials such as rock salt such an upper limit of elasticity occurs at a very small value of stress and relaxation tests can be continued for more than a year without any observable plateau in the stress.
It is important to note that relaxation tests are extremely difficult to perform because maintaining the condition
d
ε
d
t
=
0
{\displaystyle {\cfrac {\mathrm {d} {\boldsymbol {\varepsilon }}}{\mathrm {d} t}}=0}
in a test requires considerable delicacy.
== Rheological models of viscoplasticity ==
One-dimensional constitutive models for viscoplasticity based on spring-dashpot-slider elements include the perfectly viscoplastic solid, the elastic perfectly viscoplastic solid, and the elastoviscoplastic hardening solid. The elements may be connected in series or in parallel. In models where the elements are connected in series the strain is additive while the stress is equal in each element. In parallel connections, the stress is additive while the strain is equal in each element. Many of these one-dimensional models can be generalized to three dimensions for the small strain regime. In the subsequent discussion, time rates strain and stress are written as
ε
˙
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}}
and
σ
˙
{\displaystyle {\dot {\boldsymbol {\sigma }}}}
, respectively.
=== Perfectly viscoplastic solid (Norton-Hoff model) ===
In a perfectly viscoplastic solid, also called the Norton-Hoff model of viscoplasticity, the stress (as for viscous fluids) is a function of the rate of permanent strain. The effect of elasticity is neglected in the model, i.e.,
ε
e
=
0
{\displaystyle {\boldsymbol {\varepsilon }}_{e}=0}
and hence there is no initial yield stress, i.e.,
σ
y
=
0
{\displaystyle \sigma _{y}=0}
. The viscous dashpot has a response given by
σ
=
η
ε
˙
v
p
⟹
ε
˙
v
p
=
σ
η
{\displaystyle {\boldsymbol {\sigma }}=\eta ~{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }\implies {\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\cfrac {\boldsymbol {\sigma }}{\eta }}}
where
η
{\displaystyle \eta }
is the viscosity of the dashpot. In the Norton-Hoff model the viscosity
η
{\displaystyle \eta }
is a nonlinear function of the applied stress and is given by
η
=
λ
[
λ
|
|
σ
|
|
]
N
−
1
{\displaystyle \eta =\lambda \left[{\cfrac {\lambda }{||{\boldsymbol {\sigma }}||}}\right]^{N-1}}
where
N
{\displaystyle N}
is a fitting parameter, λ is the kinematic viscosity of the material and
|
|
σ
|
|
=
σ
:
σ
=
σ
i
j
σ
i
j
{\displaystyle ||{\boldsymbol {\sigma }}||={\sqrt {{\boldsymbol {\sigma }}:{\boldsymbol {\sigma }}}}={\sqrt {\sigma _{ij}\sigma _{ij}}}}
. Then the viscoplastic strain rate is given by the relation
ε
˙
v
p
=
σ
λ
[
|
|
σ
|
|
λ
]
N
−
1
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\cfrac {\boldsymbol {\sigma }}{\lambda }}\left[{\cfrac {||{\boldsymbol {\sigma }}||}{\lambda }}\right]^{N-1}}
In one-dimensional form, the Norton-Hoff model can be expressed as
σ
=
λ
(
ε
˙
v
p
)
1
/
N
{\displaystyle \sigma =\lambda ~\left({\dot {\varepsilon }}_{\mathrm {vp} }\right)^{1/N}}
When
N
=
1.0
{\displaystyle N=1.0}
the solid is viscoelastic.
If we assume that plastic flow is isochoric (volume preserving), then the above relation can be expressed in the more familiar form
s
=
2
K
(
3
ε
˙
e
q
)
m
−
1
ε
˙
v
p
{\displaystyle {\boldsymbol {s}}=2K~\left({\sqrt {3}}{\dot {\varepsilon }}_{\mathrm {eq} }\right)^{m-1}~{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }}
where
s
{\displaystyle {\boldsymbol {s}}}
is the deviatoric stress tensor,
ε
˙
e
q
{\displaystyle {\dot {\varepsilon }}_{\mathrm {eq} }}
is the von Mises equivalent strain rate, and
K
,
m
{\displaystyle K,m}
are material parameters. The equivalent strain rate is defined as
ϵ
¯
˙
=
2
3
ϵ
¯
¯
˙
:
ϵ
¯
¯
˙
{\displaystyle {\dot {\bar {\epsilon }}}={\sqrt {{\frac {2}{3}}{\dot {\bar {\bar {\epsilon }}}}:{\dot {\bar {\bar {\epsilon }}}}}}}
These models can be applied in metals and alloys at temperatures higher than two thirds of their absolute melting point (in kelvins) and polymers/asphalt at elevated temperature. The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 6.
=== Elastic perfectly viscoplastic solid (Bingham–Norton model) ===
Two types of elementary approaches can be used to build up an elastic-perfectly viscoplastic mode. In the first situation, the sliding friction element and the dashpot are arranged in parallel and then connected in series to the elastic spring as shown in Figure 7. This model is called the Bingham–Maxwell model (by analogy with the Maxwell model and the Bingham model) or the Bingham–Norton model. In the second situation, all three elements are arranged in parallel. Such a model is called a Bingham–Kelvin model by analogy with the Kelvin model.
For elastic-perfectly viscoplastic materials, the elastic strain is no longer considered negligible but the rate of plastic strain is only a function of the initial yield stress and there is no influence of hardening. The sliding element represents a constant yielding stress when the elastic limit is exceeded irrespective of the strain. The model can be expressed as
σ
=
E
ε
f
o
r
‖
σ
‖
<
σ
y
ε
˙
=
ε
˙
e
+
ε
˙
v
p
=
E
−
1
σ
˙
+
σ
η
[
1
−
σ
y
‖
σ
‖
]
f
o
r
‖
σ
‖
≥
σ
y
{\displaystyle {\begin{aligned}&{\boldsymbol {\sigma }}={\mathsf {E}}~{\boldsymbol {\varepsilon }}&&\mathrm {for} ~\|{\boldsymbol {\sigma }}\|<\sigma _{y}\\&{\dot {\boldsymbol {\varepsilon }}}={\dot {\boldsymbol {\varepsilon }}}_{\mathrm {e} }+{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+{\cfrac {\boldsymbol {\sigma }}{\eta }}\left[1-{\cfrac {\sigma _{y}}{\|{\boldsymbol {\sigma }}\|}}\right]&&\mathrm {for} ~\|{\boldsymbol {\sigma }}\|\geq \sigma _{y}\end{aligned}}}
where
η
{\displaystyle \eta }
is the viscosity of the dashpot element. If the dashpot element has a response that is of the Norton form
σ
η
=
σ
λ
[
‖
σ
‖
λ
]
N
−
1
{\displaystyle {\cfrac {\boldsymbol {\sigma }}{\eta }}={\cfrac {\boldsymbol {\sigma }}{\lambda }}\left[{\cfrac {\|{\boldsymbol {\sigma }}\|}{\lambda }}\right]^{N-1}}
we get the Bingham–Norton model
ε
˙
=
E
−
1
σ
˙
+
σ
λ
[
‖
σ
‖
λ
]
N
−
1
[
1
−
σ
y
‖
σ
‖
]
f
o
r
‖
σ
‖
≥
σ
y
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+{\cfrac {\boldsymbol {\sigma }}{\lambda }}\left[{\cfrac {\|{\boldsymbol {\sigma }}\|}{\lambda }}\right]^{N-1}\left[1-{\cfrac {\sigma _{y}}{\|{\boldsymbol {\sigma }}\|}}\right]\quad \mathrm {for} ~\|{\boldsymbol {\sigma }}\|\geq \sigma _{y}}
Other expressions for the strain rate can also be observed in the literature with the general form
ε
˙
=
E
−
1
σ
˙
+
f
(
σ
,
σ
y
)
σ
f
o
r
‖
σ
‖
≥
σ
y
{\displaystyle {\dot {\boldsymbol {\varepsilon }}}={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+f({\boldsymbol {\sigma }},\sigma _{y})~{\boldsymbol {\sigma }}\quad \mathrm {for} ~\|{\boldsymbol {\sigma }}\|\geq \sigma _{y}}
The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 8.
=== Elastoviscoplastic hardening solid ===
An elastic-viscoplastic material with strain hardening is described by equations similar to those for an elastic-viscoplastic material with perfect plasticity. However, in this case the stress depends both on the plastic strain rate and on the plastic strain itself. For an elastoviscoplastic material the stress, after exceeding the yield stress, continues to increase beyond the initial yielding point. This implies that the yield stress in the sliding element increases with strain and the model may be expressed in generic terms as
ε
=
ε
e
=
E
−
1
σ
=
ε
f
o
r
|
|
σ
|
|
<
σ
y
ε
˙
=
ε
˙
e
+
ε
˙
v
p
=
E
−
1
σ
˙
+
f
(
σ
,
σ
y
,
ε
v
p
)
σ
f
o
r
|
|
σ
|
|
≥
σ
y
{\displaystyle {\begin{aligned}&{\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}_{\mathrm {e} }={\mathsf {E}}^{-1}~{\boldsymbol {\sigma }}=~{\boldsymbol {\varepsilon }}&&\mathrm {for} ~||{\boldsymbol {\sigma }}||<\sigma _{y}\\&{\dot {\boldsymbol {\varepsilon }}}={\dot {\boldsymbol {\varepsilon }}}_{\mathrm {e} }+{\dot {\boldsymbol {\varepsilon }}}_{\mathrm {vp} }={\mathsf {E}}^{-1}~{\dot {\boldsymbol {\sigma }}}+f({\boldsymbol {\sigma }},\sigma _{y},{\boldsymbol {\varepsilon }}_{\mathrm {vp} })~{\boldsymbol {\sigma }}&&\mathrm {for} ~||{\boldsymbol {\sigma }}||\geq \sigma _{y}\end{aligned}}}
This model is adopted when metals and alloys are at medium and higher temperatures and wood under high loads. The responses for strain hardening, creep, and relaxation tests of such a material are shown in Figure 9.
== Strain-rate dependent plasticity models ==
Classical phenomenological viscoplasticity models for small strains are usually categorized into two types:
the Perzyna formulation
the Duvaut–Lions formulation
=== Perzyna formulation ===
In the Perzyna formulation the plastic strain rate is assumed to be given by a constitutive relation of the form
ε
˙
v
p
=
⟨
f
(
σ
,
q
)
⟩
τ
∂
f
∂
σ
=
{
f
(
σ
,
q
)
τ
∂
f
∂
σ
i
f
f
(
σ
,
q
)
>
0
0
o
t
h
e
r
w
i
s
e
{\displaystyle {\dot {\varepsilon }}_{\mathrm {vp} }={\cfrac {\left\langle f({\boldsymbol {\sigma }},{\boldsymbol {q}})\right\rangle }{\tau }}{\cfrac {\partial f}{\partial {\boldsymbol {\sigma }}}}={\begin{cases}{\cfrac {f({\boldsymbol {\sigma }},{\boldsymbol {q}})}{\tau }}{\cfrac {\partial f}{\partial {\boldsymbol {\sigma }}}}&{\rm {if}}~f({\boldsymbol {\sigma }},{\boldsymbol {q}})>0\\0&{\rm {otherwise}}\\\end{cases}}}
where
f
(
.
,
.
)
{\displaystyle f(.,.)}
is a yield function,
σ
{\displaystyle {\boldsymbol {\sigma }}}
is the Cauchy stress,
q
{\displaystyle {\boldsymbol {q}}}
is a set of internal variables (such as the plastic strain
ε
v
p
{\displaystyle {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}
),
τ
{\displaystyle \tau }
is a relaxation time. The notation
⟨
…
⟩
{\displaystyle \langle \dots \rangle }
denotes the Macaulay brackets. The flow rule used in various versions of the Chaboche model is a special case of Perzyna's flow rule and has the form
ε
˙
v
p
=
⟨
f
f
0
⟩
n
s
i
g
n
(
σ
−
χ
)
{\displaystyle {\dot {\varepsilon }}_{\mathrm {vp} }=\left\langle {\frac {f}{f_{0}}}\right\rangle ^{n}sign({\boldsymbol {\sigma }}-{\boldsymbol {\chi }})}
where
f
0
{\displaystyle f_{0}}
is the quasistatic value of
f
{\displaystyle f}
and
χ
{\displaystyle {\boldsymbol {\chi }}}
is a backstress. Several models for the backstress also go by the name Chaboche model.
=== Duvaut–Lions formulation ===
The Duvaut–Lions formulation is equivalent to the Perzyna formulation and may be expressed as
ε
˙
v
p
=
{
C
−
1
:
σ
−
P
σ
τ
i
f
f
(
σ
,
q
)
>
0
0
o
t
h
e
r
w
i
s
e
{\displaystyle {\dot {\varepsilon }}_{\mathrm {vp} }={\begin{cases}{\mathsf {C}}^{-1}:{\cfrac {{\boldsymbol {\sigma }}-{\mathcal {P}}{\boldsymbol {\sigma }}}{\tau }}&{\rm {{if}~f({\boldsymbol {\sigma }},{\boldsymbol {q}})>0}}\\0&{\rm {otherwise}}\end{cases}}}
where
C
{\displaystyle {\mathsf {C}}}
is the elastic stiffness tensor,
P
σ
{\displaystyle {\mathcal {P}}{\boldsymbol {\sigma }}}
is the closest point projection of the stress state on to the boundary of the region that bounds all possible elastic stress states. The quantity
P
σ
{\displaystyle {\mathcal {P}}{\boldsymbol {\sigma }}}
is typically found from the rate-independent solution to a plasticity problem.
=== Flow stress models ===
The quantity
f
(
σ
,
q
)
{\displaystyle f({\boldsymbol {\sigma }},{\boldsymbol {q}})}
represents the evolution of the yield surface. The yield function
f
{\displaystyle f}
is often expressed as an equation consisting of some invariant of stress and a model for the yield stress (or plastic flow stress). An example is von Mises or
J
2
{\displaystyle J_{2}}
plasticity. In those situations the plastic strain rate is calculated in the same manner as in rate-independent plasticity. In other situations, the yield stress model provides a direct means of computing the plastic strain rate.
Numerous empirical and semi-empirical flow stress models are used the computational plasticity. The following temperature and strain-rate dependent models provide a sampling of the models in current use:
the Johnson–Cook model
the Steinberg–Cochran–Guinan–Lund model.
the Zerilli–Armstrong model.
the Mechanical threshold stress model.
the Preston–Tonks–Wallace model.
The Johnson–Cook (JC) model is purely empirical and is the most widely used of the five. However, this model exhibits an unrealistically small strain-rate dependence at high temperatures. The Steinberg–Cochran–Guinan–Lund (SCGL) model is semi-empirical. The model is purely empirical and strain-rate independent at high strain-rates. A dislocation-based extension based on is used at low strain-rates. The SCGL model is used extensively by the shock physics community. The Zerilli–Armstrong (ZA) model is a simple physically based model that has been used extensively. A more complex model that is based on ideas from dislocation dynamics is the Mechanical Threshold Stress (MTS) model. This model has been used to model the plastic deformation of copper, tantalum, alloys of steel, and aluminum alloys. However, the MTS model is limited to strain-rates less than around 107/s. The Preston–Tonks–Wallace (PTW) model is also physically based and has a form similar to the MTS model. However, the PTW model has components that can model plastic deformation in the overdriven shock regime (strain-rates greater that 107/s). Hence this model is valid for the largest range of strain-rates among the five flow stress models.
==== Johnson–Cook flow stress model ====
The Johnson–Cook (JC) model is purely empirical and gives the following relation for the flow stress (
σ
y
{\displaystyle \sigma _{y}}
)
(1)
σ
y
(
ε
p
,
ε
p
˙
,
T
)
=
[
A
+
B
(
ε
p
)
n
]
[
1
+
C
ln
(
ε
p
˙
∗
)
]
[
1
−
(
T
∗
)
m
]
{\displaystyle {\text{(1)}}\qquad \sigma _{y}(\varepsilon _{\rm {p}},{\dot {\varepsilon _{\rm {p}}}},T)=\left[A+B(\varepsilon _{\rm {p}})^{n}\right]\left[1+C\ln({\dot {\varepsilon _{\rm {p}}}}^{*})\right]\left[1-(T^{*})^{m}\right]}
where
ε
p
{\displaystyle \varepsilon _{\rm {p}}}
is the equivalent plastic strain,
ε
p
˙
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}}
is the
plastic strain-rate, and
A
,
B
,
C
,
n
,
m
{\displaystyle A,B,C,n,m}
are material constants.
The normalized strain-rate and temperature in equation (1) are defined as
ε
p
˙
∗
:=
ε
p
˙
ε
p
0
˙
and
T
∗
:=
(
T
−
T
0
)
(
T
m
−
T
0
)
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}^{*}:={\cfrac {\dot {\varepsilon _{\rm {p}}}}{\dot {\varepsilon _{\rm {p0}}}}}\qquad {\text{and}}\qquad T^{*}:={\cfrac {(T-T_{0})}{(T_{m}-T_{0})}}}
where
ε
p
0
˙
{\displaystyle {\dot {\varepsilon _{\rm {p0}}}}}
is the effective plastic strain-rate of the quasi-static test used to determine the yield and hardening parameters A,B and n. This is not as it is often thought just a parameter to make
ε
p
˙
∗
{\displaystyle {\dot {\varepsilon _{\rm {p}}}}^{*}}
non-dimensional.
T
0
{\displaystyle T_{0}}
is a reference temperature, and
T
m
{\displaystyle T_{m}}
is a reference melt temperature. For conditions where
T
∗
<
0
{\displaystyle T^{*}<0}
, we assume that
m
=
1
{\displaystyle m=1}
.
==== Steinberg–Cochran–Guinan–Lund flow stress model ====
The Steinberg–Cochran–Guinan–Lund (SCGL) model is a semi-empirical model that was developed by Steinberg et al. for high strain-rate situations and extended to low strain-rates and bcc materials by Steinberg and Lund. The flow stress in this model is given by
(2)    \sigma_y(\varepsilon_{\rm p}, \dot{\varepsilon}_{\rm p}, T) = \left[\sigma_a f(\varepsilon_{\rm p}) + \sigma_t(\dot{\varepsilon}_{\rm p}, T)\right]\frac{\mu(p,T)}{\mu_0}; \quad \sigma_a f \le \sigma_{\text{max}} ~~\text{and}~~ \sigma_t \le \sigma_{\rm p}
where \sigma_a is the athermal component of the flow stress, f(\varepsilon_{\rm p}) is a function that represents strain hardening, \sigma_t is the thermally activated component of the flow stress, \mu(p,T) is the pressure- and temperature-dependent shear modulus, and \mu_0 is the shear modulus at standard temperature and pressure. The saturation value of the athermal stress is \sigma_{\text{max}}. The saturation of the thermally activated stress is the Peierls stress (\sigma_{\rm p}). The shear modulus for this model is usually computed with the Steinberg–Cochran–Guinan shear modulus model.
The strain hardening function f has the form
f(\varepsilon_{\rm p}) = [1 + \beta(\varepsilon_{\rm p} + \varepsilon_{\rm pi})]^{n}
where \beta, n are work hardening parameters, and \varepsilon_{\rm pi} is the initial equivalent plastic strain.
The thermal component \sigma_t is computed using a bisection algorithm from the following equation:
\dot{\varepsilon}_{\rm p} = \left[\frac{1}{C_1}\exp\left[\frac{2U_k}{k_b\,T}\left(1 - \frac{\sigma_t}{\sigma_{\rm p}}\right)^{2}\right] + \frac{C_2}{\sigma_t}\right]^{-1}; \quad \sigma_t \le \sigma_{\rm p}
where 2U_k is the energy to form a kink-pair in a dislocation segment of length L_d, k_b is the Boltzmann constant, and \sigma_{\rm p} is the Peierls stress. The constants C_1, C_2 are given by the relations
C_1 := \frac{\rho_d L_d a b^2 \nu}{2 w^2}; \quad C_2 := \frac{D}{\rho_d b^2}
where \rho_d is the dislocation density, L_d is the length of a dislocation segment, a is the distance between Peierls valleys, b is the magnitude of the Burgers vector, \nu is the Debye frequency, w is the width of a kink loop, and D is the drag coefficient.
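Because the rate equation above is monotonic in \sigma_t on (0, \sigma_{\rm p}], the bisection solve is short to write down. The sketch below is a minimal illustration; the residual function and all numerical values are assumptions chosen only to exercise the formula, not calibrated constants.

```python
import math

def scgl_thermal_stress(eps_dot, sigma_p, two_Uk_over_kbT, C1, C2, tol=1e-8):
    """Solve the SCGL rate equation for the thermal stress sigma_t by bisection."""
    def rate(sigma_t):
        # predicted plastic strain-rate for a trial thermal stress
        barrier = math.exp(two_Uk_over_kbT * (1.0 - sigma_t / sigma_p) ** 2)
        return 1.0 / (barrier / C1 + C2 / sigma_t)

    lo, hi = 1e-12 * sigma_p, sigma_p
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rate(mid) < eps_dot:
            lo = mid          # need a larger thermal stress to reach this rate
        else:
            hi = mid
        if hi - lo < tol * sigma_p:
            break
    return 0.5 * (lo + hi)

# Illustrative (non-physical) numbers only:
print(scgl_thermal_stress(eps_dot=1e3, sigma_p=1.0e9,
                          two_Uk_over_kbT=20.0, C1=1e6, C2=1e3))
```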
==== Zerilli–Armstrong flow stress model ====
The Zerilli–Armstrong (ZA) model is based on simplified dislocation mechanics. The general form of the equation for the flow stress is
(3)    \sigma_y(\varepsilon_{\rm p}, \dot{\varepsilon}_{\rm p}, T) = \sigma_a + B\exp(-\beta T) + B_0\sqrt{\varepsilon_{\rm p}}\exp(-\alpha T)~.
In this model, \sigma_a is the athermal component of the flow stress, given by
\sigma_a := \sigma_g + \frac{k_h}{\sqrt{l}} + K\varepsilon_{\rm p}^{n},
where \sigma_g is the contribution due to solutes and initial dislocation density, k_h is the microstructural stress intensity, l is the average grain diameter, K is zero for fcc materials, and B, B_0 are material constants.
In the thermally activated terms, the functional forms of the exponents \alpha and \beta are
\alpha = \alpha_0 - \alpha_1\ln(\dot{\varepsilon}_{\rm p}); \quad \beta = \beta_0 - \beta_1\ln(\dot{\varepsilon}_{\rm p});
where \alpha_0, \alpha_1, \beta_0, \beta_1 are material parameters that depend on the type of material (fcc, bcc, hcp, alloys). The Zerilli–Armstrong model has been modified for better performance at high temperatures.
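Equation (3) can be evaluated directly once the coefficients are chosen. The following sketch is only illustrative, with placeholder coefficients that are not fitted to any material (the fcc-like choice K = 0 and B = 0 is an assumption).

```python
import math

def zerilli_armstrong_flow_stress(eps_p, eps_p_dot, T,
                                  sigma_g, k_h, l, K, n,
                                  B, beta0, beta1, B0, alpha0, alpha1):
    """Evaluate the Zerilli-Armstrong flow stress of equation (3)."""
    sigma_a = sigma_g + k_h / math.sqrt(l) + K * eps_p ** n   # athermal part
    alpha = alpha0 - alpha1 * math.log(eps_p_dot)
    beta = beta0 - beta1 * math.log(eps_p_dot)
    return (sigma_a
            + B * math.exp(-beta * T)
            + B0 * math.sqrt(eps_p) * math.exp(-alpha * T))

# Placeholder values, for illustration only:
print(zerilli_armstrong_flow_stress(eps_p=0.05, eps_p_dot=1.0e3, T=300.0,
                                    sigma_g=50e6, k_h=0.7e6, l=30e-6, K=0.0, n=0.5,
                                    B=0.0, beta0=0.0, beta1=0.0,
                                    B0=900e6, alpha0=0.0028, alpha1=0.000115))
```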
==== Mechanical threshold stress flow stress model ====
The Mechanical Threshold Stress (MTS) model has the form
(4)    \sigma_y(\varepsilon_{\rm p}, \dot{\varepsilon}, T) = \sigma_a + (S_i\sigma_i + S_e\sigma_e)\frac{\mu(p,T)}{\mu_0}
where \sigma_a is the athermal component of mechanical threshold stress, \sigma_i is the component of the flow stress due to intrinsic barriers to thermally activated dislocation motion and dislocation–dislocation interactions, \sigma_e is the component of the flow stress due to microstructural evolution with increasing deformation (strain hardening), (S_i, S_e) are temperature- and strain-rate-dependent scaling factors, and \mu_0 is the shear modulus at 0 K and ambient pressure.
The scaling factors take the Arrhenius form
S_i = \left[1 - \left(\frac{k_b\,T}{g_{0i}b^3\mu(p,T)}\ln\frac{\dot{\varepsilon}_0}{\dot{\varepsilon}}\right)^{1/q_i}\right]^{1/p_i}
S_e = \left[1 - \left(\frac{k_b\,T}{g_{0e}b^3\mu(p,T)}\ln\frac{\dot{\varepsilon}_0}{\dot{\varepsilon}}\right)^{1/q_e}\right]^{1/p_e}
where k_b is the Boltzmann constant, b is the magnitude of the Burgers vector, (g_{0i}, g_{0e}) are normalized activation energies, (\dot{\varepsilon}, \dot{\varepsilon}_0) are the strain-rate and reference strain-rate, and (q_i, p_i, q_e, p_e) are constants.
The strain hardening component of the mechanical threshold stress (\sigma_e) is given by an empirical modified Voce law
(5)    \frac{d\sigma_e}{d\varepsilon_{\rm p}} = \theta(\sigma_e)
where
\theta(\sigma_e) = \theta_0[1 - F(\sigma_e)] + \theta_{IV}F(\sigma_e)
\theta_0 = a_0 + a_1\ln\dot{\varepsilon}_{\rm p} + a_2\sqrt{\dot{\varepsilon}_{\rm p}} - a_3 T
F(\sigma_e) = \cfrac{\tanh\left(\alpha\cfrac{\sigma_e}{\sigma_{es}}\right)}{\tanh(\alpha)}
\ln\left(\cfrac{\sigma_{es}}{\sigma_{0es}}\right) = \left(\frac{kT}{g_{0es}b^3\mu(p,T)}\right)\ln\left(\cfrac{\dot{\varepsilon}_{\rm p}}{\dot{\varepsilon}_{\rm p,max}}\right)
and \theta_0 is the hardening due to dislocation accumulation, \theta_{IV} is the contribution due to stage-IV hardening, (a_0, a_1, a_2, a_3, \alpha) are constants, \sigma_{es} is the stress at zero strain hardening rate, \sigma_{0es} is the saturation threshold stress for deformation at 0 K, g_{0es} is a constant, and \dot{\varepsilon}_{\rm p,max} is the maximum strain-rate. Note that the maximum strain-rate is usually limited to about 10^7/s.
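To make the structure of equation (4) concrete, the sketch below evaluates the two scaling factors and the resulting flow stress for a single state. All numerical values are placeholders, not MTS parameters for a real material.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def mts_scaling(T, eps_dot, eps0_dot, g0, b, mu, q, p):
    """Arrhenius-type scaling factor used for both S_i and S_e."""
    x = (KB * T / (g0 * b**3 * mu)) * math.log(eps0_dot / eps_dot)
    return (1.0 - x ** (1.0 / q)) ** (1.0 / p)

def mts_flow_stress(T, eps_dot, sigma_a, sigma_i, sigma_e, mu, mu0,
                    b, g0i, g0e, eps0_dot, qi, pi, qe, pe):
    """Evaluate equation (4) of the MTS model."""
    S_i = mts_scaling(T, eps_dot, eps0_dot, g0i, b, mu, qi, pi)
    S_e = mts_scaling(T, eps_dot, eps0_dot, g0e, b, mu, qe, pe)
    return sigma_a + (S_i * sigma_i + S_e * sigma_e) * mu / mu0

# Placeholder state and constants, for illustration only:
print(mts_flow_stress(T=300.0, eps_dot=1.0e3, sigma_a=40e6,
                      sigma_i=800e6, sigma_e=200e6, mu=44e9, mu0=48e9,
                      b=2.5e-10, g0i=1.0, g0e=1.6, eps0_dot=1.0e7,
                      qi=1.0, pi=0.5, qe=1.0, pe=0.667))
```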
==== Preston–Tonks–Wallace flow stress model ====
The Preston–Tonks–Wallace (PTW) model attempts to provide a model for the flow stress for extreme strain-rates (up to 10^11/s) and temperatures up to melt. A linear Voce hardening law is used in the model. The PTW flow stress is given by
(6)    \sigma_y(\varepsilon_{\rm p}, \dot{\varepsilon}_{\rm p}, T) = \begin{cases} 2\left[\tau_s + \alpha\ln\left[1 - \varphi\exp\left(-\beta - \cfrac{\theta\varepsilon_{\rm p}}{\alpha\varphi}\right)\right]\right]\mu(p,T) & \text{thermal regime} \\ 2\tau_s\,\mu(p,T) & \text{shock regime} \end{cases}
with
\alpha := \frac{s_0 - \tau_y}{d}; \quad \beta := \frac{\tau_s - \tau_y}{\alpha}; \quad \varphi := \exp(\beta) - 1
where \tau_s is a normalized work-hardening saturation stress, s_0 is the value of \tau_s at 0 K, \tau_y is a normalized yield stress, \theta is the hardening constant in the Voce hardening law, and d is a dimensionless material parameter that modifies the Voce hardening law.
The saturation stress and the yield stress are given by
\tau_s = \max\left\{s_0 - (s_0 - s_\infty)\,\mathrm{erf}\left[\kappa\hat{T}\ln\left(\cfrac{\gamma\dot{\xi}}{\dot{\varepsilon}_{\rm p}}\right)\right],\; s_0\left(\cfrac{\dot{\varepsilon}_{\rm p}}{\gamma\dot{\xi}}\right)^{s_1}\right\}
\tau_y = \max\left\{y_0 - (y_0 - y_\infty)\,\mathrm{erf}\left[\kappa\hat{T}\ln\left(\cfrac{\gamma\dot{\xi}}{\dot{\varepsilon}_{\rm p}}\right)\right],\; \min\left\{y_1\left(\cfrac{\dot{\varepsilon}_{\rm p}}{\gamma\dot{\xi}}\right)^{y_2},\; s_0\left(\cfrac{\dot{\varepsilon}_{\rm p}}{\gamma\dot{\xi}}\right)^{s_1}\right\}\right\}
where s_\infty is the value of \tau_s close to the melt temperature, (y_0, y_\infty) are the values of \tau_y at 0 K and close to melt, respectively, (\kappa, \gamma) are material constants, \hat{T} = T/T_m, (s_1, y_1, y_2) are material parameters for the high strain-rate regime, and
\dot{\xi} = \frac{1}{2}\left(\cfrac{4\pi\rho}{3M}\right)^{1/3}\left(\cfrac{\mu(p,T)}{\rho}\right)^{1/2}
where \rho is the density, and M is the atomic mass.
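The saturation and yield stresses above lend themselves to a compact numerical evaluation. The sketch below computes \dot{\xi}, \tau_s and \tau_y for one state; all parameter values are placeholders chosen only to exercise the formulas.

```python
import math

def xi_dot(rho, M, mu):
    """Characteristic strain-rate scale used by the PTW model."""
    return 0.5 * (4.0 * math.pi * rho / (3.0 * M)) ** (1.0 / 3.0) * (mu / rho) ** 0.5

def ptw_tau_s_tau_y(eps_p_dot, T, Tm, rho, M, mu,
                    s0, s_inf, y0, y_inf, kappa, gamma, s1, y1, y2):
    """Evaluate the PTW normalized saturation stress tau_s and yield stress tau_y."""
    T_hat = T / Tm
    xdot = xi_dot(rho, M, mu)
    arg = kappa * T_hat * math.log(gamma * xdot / eps_p_dot)
    rate_ratio = eps_p_dot / (gamma * xdot)
    tau_s = max(s0 - (s0 - s_inf) * math.erf(arg), s0 * rate_ratio ** s1)
    tau_y = max(y0 - (y0 - y_inf) * math.erf(arg),
                min(y1 * rate_ratio ** y2, s0 * rate_ratio ** s1))
    return tau_s, tau_y

# Placeholder values (SI-like units), for illustration only:
print(ptw_tau_s_tau_y(eps_p_dot=1.0e5, T=600.0, Tm=1800.0,
                      rho=8900.0, M=1.06e-25, mu=44e9,
                      s0=0.0085, s_inf=0.00055, y0=0.0001, y_inf=0.0001,
                      kappa=0.11, gamma=1.0e-5, s1=0.25, y1=0.094, y2=0.575))
```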
== See also ==
Viscoelasticity
Bingham plastic
Dashpot
Creep (deformation)
Plasticity (physics)
Continuum mechanics
Quasi-solid
== References == | Wikipedia/Johnson-Cook_plasticity_model |
Viscoplasticity is a theory in continuum mechanics that describes the rate-dependent inelastic behavior of solids. Rate-dependence in this context means that the deformation of the material depends on the rate at which loads are applied. The inelastic behavior that is the subject of viscoplasticity is plastic deformation which means that the material undergoes unrecoverable deformations when a load level is reached. Rate-dependent plasticity is important for transient plasticity calculations. The main difference between rate-independent plastic and viscoplastic material models is that the latter exhibit not only permanent deformations after the application of loads but continue to undergo a creep flow as a function of time under the influence of the applied load.
The elastic response of viscoplastic materials can be represented in one-dimension by Hookean spring elements. Rate-dependence can be represented by nonlinear dashpot elements in a manner similar to viscoelasticity. Plasticity can be accounted for by adding sliding frictional elements as shown in Figure 1. In the figure E is the modulus of elasticity, \lambda is the viscosity parameter and N is a power-law type parameter that represents the non-linear dashpot [\sigma(\mathrm{d}\varepsilon/\mathrm{d}t) = \sigma = \lambda(\mathrm{d}\varepsilon/\mathrm{d}t)^{1/N}]. The sliding element can have a yield stress (\sigma_y) that is strain rate dependent, or even constant, as shown in Figure 1c.
Viscoplasticity is usually modeled in three-dimensions using overstress models of the Perzyna or Duvaut-Lions types. In these models, the stress is allowed to increase beyond the rate-independent yield surface upon application of a load and then allowed to relax back to the yield surface over time. The yield surface is usually assumed not to be rate-dependent in such models. An alternative approach is to add a strain rate dependence to the yield stress and use the techniques of rate independent plasticity to calculate the response of a material.
For metals and alloys, viscoplasticity is the macroscopic behavior caused by a mechanism linked to the movement of dislocations in grains, with superposed effects of inter-crystalline gliding. The mechanism usually becomes dominant at temperatures greater than approximately one third of the absolute melting temperature. However, certain alloys exhibit viscoplasticity at room temperature (300 K). For polymers, wood, and bitumen, the theory of viscoplasticity is required to describe behavior beyond the limit of elasticity or viscoelasticity.
In general, viscoplasticity theories are useful in areas such as:
the calculation of permanent deformations,
the prediction of the plastic collapse of structures,
the investigation of stability,
crash simulations,
systems exposed to high temperatures such as turbines in engines, e.g. a power plant,
dynamic problems and systems exposed to high strain rates.
== History ==
Research on plasticity theories started in 1864 with the work of Henri Tresca, Saint Venant (1870) and Levy (1871) on the maximum shear criterion. An improved plasticity model was presented in 1913 by Von Mises which is now referred to as the von Mises yield criterion. In viscoplasticity, the development of a mathematical model heads back to 1910 with the representation of primary creep by Andrade's law. In 1929, Norton developed a one-dimensional dashpot model which linked the rate of secondary creep to the stress. In 1934, Odqvist generalized Norton's law to the multi-axial case.
Concepts such as the normality of plastic flow to the yield surface and flow rules for plasticity were introduced by Prandtl (1924) and Reuss (1930). In 1932, Hohenemser and Prager proposed the first model for slow viscoplastic flow. This model provided a relation between the deviatoric stress and the strain rate for an incompressible Bingham solid. However, the application of these theories did not begin until about 1950, when limit theorems were discovered.
In 1960, the first IUTAM Symposium "Creep in Structures" organized by Hoff provided a major development in viscoplasticity with the works of Hoff, Rabotnov, Perzyna, Hult, and Lemaitre for the isotropic hardening laws, and those of Kratochvil, Malinini and Khadjinsky, Ponter and Leckie, and Chaboche for the kinematic hardening laws. Perzyna, in 1963, introduced a viscosity coefficient that is temperature and time dependent. The formulated models were supported by the thermodynamics of irreversible processes and the phenomenological standpoint. The ideas presented in these works have been the basis for most subsequent research into rate-dependent plasticity.
== Phenomenology ==
For a qualitative analysis, several characteristic tests are performed to describe the phenomenology of viscoplastic materials. Some examples of these tests are
hardening tests at constant stress or strain rate,
creep tests at constant force, and
stress relaxation at constant elongation.
=== Strain hardening test ===
One consequence of yielding is that as plastic deformation proceeds, an increase in stress is required to produce additional strain. This phenomenon is known as strain (or work) hardening. For a viscoplastic material the hardening curves are not significantly different from those of a rate-independent plastic material. Nevertheless, three essential differences can be observed.
At the same strain, the higher the rate of strain, the higher the stress.
A change in the rate of strain during the test results in an immediate change in the stress–strain curve.
The concept of a plastic yield limit is no longer strictly applicable.
The hypothesis of partitioning the strains by decoupling the elastic and plastic parts is still applicable where the strains are small, i.e.,
\boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}_{\mathrm{e}} + \boldsymbol{\varepsilon}_{\mathrm{vp}}
where \boldsymbol{\varepsilon}_{\mathrm{e}} is the elastic strain and \boldsymbol{\varepsilon}_{\mathrm{vp}} is the viscoplastic strain. To obtain the stress–strain behavior shown in blue in the figure, the material is initially loaded at a strain rate of 0.1/s. The strain rate is then instantaneously raised to 100/s and held constant at that value for some time. At the end of that time period the strain rate is dropped instantaneously back to 0.1/s and the cycle is continued for increasing values of strain. There is clearly a lag between the strain-rate change and the stress response. This lag is modeled quite accurately by overstress models (such as the Perzyna model) but not by models of rate-independent plasticity that have a rate-dependent yield stress.
=== Creep test ===
Creep is the tendency of a solid material to slowly move or deform permanently under constant stresses. Creep tests measure the strain response due to a constant stress as shown in Figure 3. The classical creep curve represents the evolution of strain as a function of time in a material subjected to uniaxial stress at a constant temperature. The creep test, for instance, is performed by applying a constant force/stress and analyzing the strain response of the system. In general, as shown in Figure 3b this curve usually shows three phases or periods of behavior:
A primary creep stage, also known as transient creep, is the starting stage during which hardening of the material leads to a decrease in the rate of flow, which is initially very high (0 \le \boldsymbol{\varepsilon} \le \boldsymbol{\varepsilon}_1).
The secondary creep stage, also known as the steady state, is where the strain rate is constant (\boldsymbol{\varepsilon}_1 \le \boldsymbol{\varepsilon} \le \boldsymbol{\varepsilon}_2).
A tertiary creep phase in which there is an increase in the strain rate up to the fracture strain (\boldsymbol{\varepsilon}_2 \le \boldsymbol{\varepsilon} \le \boldsymbol{\varepsilon}_R).
=== Relaxation test ===
As shown in Figure 4, the relaxation test is defined as the stress response due to a constant strain for a period of time. In viscoplastic materials, relaxation tests demonstrate the stress relaxation in uniaxial loading at a constant strain. In fact, these tests characterize the viscosity and can be used to determine the relation which exists between the stress and the rate of viscoplastic strain. The decomposition of strain rate is
\cfrac{\mathrm{d}\boldsymbol{\varepsilon}}{\mathrm{d}t} = \cfrac{\mathrm{d}\boldsymbol{\varepsilon}_{\mathrm{e}}}{\mathrm{d}t} + \cfrac{\mathrm{d}\boldsymbol{\varepsilon}_{\mathrm{vp}}}{\mathrm{d}t}~.
The elastic part of the strain rate is given by
\cfrac{\mathrm{d}\boldsymbol{\varepsilon}_{\mathrm{e}}}{\mathrm{d}t} = \mathsf{E}^{-1}~\cfrac{\mathrm{d}\boldsymbol{\sigma}}{\mathrm{d}t}
For the flat region of the strain–time curve, the total strain rate is zero. Hence we have
\cfrac{\mathrm{d}\boldsymbol{\varepsilon}_{\mathrm{vp}}}{\mathrm{d}t} = -\mathsf{E}^{-1}~\cfrac{\mathrm{d}\boldsymbol{\sigma}}{\mathrm{d}t}
Therefore, the relaxation curve can be used to determine rate of viscoplastic strain and hence the viscosity of the dashpot in a one-dimensional viscoplastic material model. The residual value that is reached when the stress has plateaued at the end of a relaxation test corresponds to the upper limit of elasticity. For some materials such as rock salt such an upper limit of elasticity occurs at a very small value of stress and relaxation tests can be continued for more than a year without any observable plateau in the stress.
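Since the viscoplastic strain rate during relaxation equals -\mathsf{E}^{-1}\,\mathrm{d}\boldsymbol{\sigma}/\mathrm{d}t, a measured stress history can be converted directly into a strain-rate history. The sketch below does this for a one-dimensional record by finite differences; the synthetic data and modulus are assumptions for illustration.

```python
def viscoplastic_strain_rate_from_relaxation(times, stresses, E):
    """Estimate d(eps_vp)/dt = -(1/E) d(sigma)/dt from a 1-D relaxation record
    using central finite differences on the measured stress history."""
    rates = []
    for i in range(1, len(times) - 1):
        dsigma_dt = (stresses[i + 1] - stresses[i - 1]) / (times[i + 1] - times[i - 1])
        rates.append(-dsigma_dt / E)
    return rates

# Synthetic relaxation record (s, Pa) and modulus, for illustration only:
t = [0.0, 1.0, 2.0, 3.0, 4.0]
sigma = [200e6, 180e6, 165e6, 154e6, 146e6]
print(viscoplastic_strain_rate_from_relaxation(t, sigma, E=200e9))
```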
It is important to note that relaxation tests are extremely difficult to perform because maintaining the condition \mathrm{d}\boldsymbol{\varepsilon}/\mathrm{d}t = 0 in a test requires considerable delicacy.
== Rheological models of viscoplasticity ==
One-dimensional constitutive models for viscoplasticity based on spring-dashpot-slider elements include the perfectly viscoplastic solid, the elastic perfectly viscoplastic solid, and the elastoviscoplastic hardening solid. The elements may be connected in series or in parallel. In models where the elements are connected in series the strain is additive while the stress is equal in each element. In parallel connections, the stress is additive while the strain is equal in each element. Many of these one-dimensional models can be generalized to three dimensions for the small strain regime. In the subsequent discussion, the time rates of strain and stress are written as \dot{\boldsymbol{\varepsilon}} and \dot{\boldsymbol{\sigma}}, respectively.
=== Perfectly viscoplastic solid (Norton-Hoff model) ===
In a perfectly viscoplastic solid, also called the Norton-Hoff model of viscoplasticity, the stress (as for viscous fluids) is a function of the rate of permanent strain. The effect of elasticity is neglected in the model, i.e., \boldsymbol{\varepsilon}_{\mathrm{e}} = 0, and hence there is no initial yield stress, i.e., \sigma_y = 0. The viscous dashpot has a response given by
\boldsymbol{\sigma} = \eta~\dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}} \implies \dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}} = \cfrac{\boldsymbol{\sigma}}{\eta}
where \eta is the viscosity of the dashpot. In the Norton-Hoff model the viscosity \eta is a nonlinear function of the applied stress and is given by
\eta = \lambda\left[\cfrac{\lambda}{||\boldsymbol{\sigma}||}\right]^{N-1}
where N is a fitting parameter, \lambda is the kinematic viscosity of the material, and ||\boldsymbol{\sigma}|| = \sqrt{\boldsymbol{\sigma}:\boldsymbol{\sigma}} = \sqrt{\sigma_{ij}\sigma_{ij}}. Then the viscoplastic strain rate is given by the relation
\dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}} = \cfrac{\boldsymbol{\sigma}}{\lambda}\left[\cfrac{||\boldsymbol{\sigma}||}{\lambda}\right]^{N-1}
In one-dimensional form, the Norton-Hoff model can be expressed as
\sigma = \lambda~\left(\dot{\varepsilon}_{\mathrm{vp}}\right)^{1/N}
When N = 1.0 the solid is viscoelastic.
If we assume that plastic flow is isochoric (volume preserving), then the above relation can be expressed in the more familiar form
\boldsymbol{s} = 2K~\left(\sqrt{3}\,\dot{\varepsilon}_{\mathrm{eq}}\right)^{m-1}~\dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}}
where \boldsymbol{s} is the deviatoric stress tensor, \dot{\varepsilon}_{\mathrm{eq}} is the von Mises equivalent strain rate, and K, m are material parameters. The equivalent strain rate is defined as
\dot{\bar{\epsilon}} = \sqrt{\frac{2}{3}\,\dot{\bar{\bar{\epsilon}}}:\dot{\bar{\bar{\epsilon}}}}
These models can be applied in metals and alloys at temperatures higher than two thirds of their absolute melting point (in kelvins) and polymers/asphalt at elevated temperature. The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 6.
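In one dimension the Norton-Hoff relation can be evaluated in either direction (stress from strain rate, or strain rate from stress). The sketch below shows both; the values of \lambda and N are placeholders used only for illustration.

```python
def norton_hoff_stress(eps_vp_dot, lam, N):
    """1-D Norton-Hoff: sigma = lambda * (d(eps_vp)/dt)**(1/N)."""
    return lam * eps_vp_dot ** (1.0 / N)

def norton_hoff_strain_rate(sigma, lam, N):
    """Inverse relation: d(eps_vp)/dt = (sigma / lambda)**N."""
    return (sigma / lam) ** N

# Placeholder parameters, for illustration only:
lam, N = 150.0e6, 5.0
print(norton_hoff_stress(1.0e-3, lam, N))
print(norton_hoff_strain_rate(60.0e6, lam, N))
```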
=== Elastic perfectly viscoplastic solid (Bingham–Norton model) ===
Two types of elementary approaches can be used to build up an elastic-perfectly viscoplastic model. In the first situation, the sliding friction element and the dashpot are arranged in parallel and then connected in series to the elastic spring as shown in Figure 7. This model is called the Bingham–Maxwell model (by analogy with the Maxwell model and the Bingham model) or the Bingham–Norton model. In the second situation, all three elements are arranged in parallel. Such a model is called a Bingham–Kelvin model by analogy with the Kelvin model.
For elastic-perfectly viscoplastic materials, the elastic strain is no longer considered negligible but the rate of plastic strain is only a function of the initial yield stress and there is no influence of hardening. The sliding element represents a constant yielding stress when the elastic limit is exceeded irrespective of the strain. The model can be expressed as
\begin{aligned} &\boldsymbol{\sigma} = \mathsf{E}~\boldsymbol{\varepsilon} && \mathrm{for}~\|\boldsymbol{\sigma}\| < \sigma_y \\ &\dot{\boldsymbol{\varepsilon}} = \dot{\boldsymbol{\varepsilon}}_{\mathrm{e}} + \dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}} = \mathsf{E}^{-1}~\dot{\boldsymbol{\sigma}} + \cfrac{\boldsymbol{\sigma}}{\eta}\left[1 - \cfrac{\sigma_y}{\|\boldsymbol{\sigma}\|}\right] && \mathrm{for}~\|\boldsymbol{\sigma}\| \ge \sigma_y \end{aligned}
where \eta is the viscosity of the dashpot element. If the dashpot element has a response that is of the Norton form
\cfrac{\boldsymbol{\sigma}}{\eta} = \cfrac{\boldsymbol{\sigma}}{\lambda}\left[\cfrac{\|\boldsymbol{\sigma}\|}{\lambda}\right]^{N-1}
we get the Bingham–Norton model
\dot{\boldsymbol{\varepsilon}} = \mathsf{E}^{-1}~\dot{\boldsymbol{\sigma}} + \cfrac{\boldsymbol{\sigma}}{\lambda}\left[\cfrac{\|\boldsymbol{\sigma}\|}{\lambda}\right]^{N-1}\left[1 - \cfrac{\sigma_y}{\|\boldsymbol{\sigma}\|}\right] \quad \mathrm{for}~\|\boldsymbol{\sigma}\| \ge \sigma_y
Other expressions for the strain rate can also be observed in the literature with the general form
\dot{\boldsymbol{\varepsilon}} = \mathsf{E}^{-1}~\dot{\boldsymbol{\sigma}} + f(\boldsymbol{\sigma}, \sigma_y)~\boldsymbol{\sigma} \quad \mathrm{for}~\|\boldsymbol{\sigma}\| \ge \sigma_y
The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 8.
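A one-dimensional time integration of the Bingham–Norton strain-rate split above is straightforward with an explicit update of the form \dot{\sigma} = E(\dot{\varepsilon} - \dot{\varepsilon}_{\mathrm{vp}}). The sketch below drives the model with a constant total strain rate; all material values are placeholders.

```python
def bingham_norton_1d(total_strain_rate, dt, n_steps, E, lam, N, sigma_y):
    """Explicitly integrate the 1-D Bingham-Norton model under a constant
    total strain rate: d(sigma)/dt = E * (total rate - viscoplastic rate)."""
    sigma, history = 0.0, []
    for _ in range(n_steps):
        if abs(sigma) >= sigma_y:
            overstress = 1.0 - sigma_y / abs(sigma)
            eps_vp_dot = (sigma / lam) * (abs(sigma) / lam) ** (N - 1) * overstress
        else:
            eps_vp_dot = 0.0
        sigma += E * (total_strain_rate - eps_vp_dot) * dt
        history.append(sigma)
    return history

# Placeholder parameters, for illustration only:
print(bingham_norton_1d(total_strain_rate=1e-3, dt=0.01, n_steps=5,
                        E=200e9, lam=500e6, N=3.0, sigma_y=250e6)[-1])
```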
=== Elastoviscoplastic hardening solid ===
An elastic-viscoplastic material with strain hardening is described by equations similar to those for an elastic-viscoplastic material with perfect plasticity. However, in this case the stress depends both on the plastic strain rate and on the plastic strain itself. For an elastoviscoplastic material the stress, after exceeding the yield stress, continues to increase beyond the initial yielding point. This implies that the yield stress in the sliding element increases with strain and the model may be expressed in generic terms as
\begin{aligned} &\boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}_{\mathrm{e}} = \mathsf{E}^{-1}~\boldsymbol{\sigma} && \mathrm{for}~||\boldsymbol{\sigma}|| < \sigma_y \\ &\dot{\boldsymbol{\varepsilon}} = \dot{\boldsymbol{\varepsilon}}_{\mathrm{e}} + \dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}} = \mathsf{E}^{-1}~\dot{\boldsymbol{\sigma}} + f(\boldsymbol{\sigma}, \sigma_y, \boldsymbol{\varepsilon}_{\mathrm{vp}})~\boldsymbol{\sigma} && \mathrm{for}~||\boldsymbol{\sigma}|| \ge \sigma_y \end{aligned}
This model is adopted for metals and alloys at medium and higher temperatures and for wood under high loads. The responses for strain hardening, creep, and relaxation tests of such a material are shown in Figure 9.
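As a minimal illustration of the hardening case, the sketch below reuses the explicit 1-D update but lets the yield stress of the sliding element grow with accumulated viscoplastic strain. The linear hardening rule and all numerical values are assumptions made only to show the structure, not the model of any particular reference.

```python
def hardening_viscoplastic_1d(total_strain_rate, dt, n_steps,
                              E, eta, sigma_y0, H):
    """1-D elastoviscoplastic solid with an assumed linear hardening rule:
    yield stress sigma_y = sigma_y0 + H * eps_vp."""
    sigma, eps_vp = 0.0, 0.0
    for _ in range(n_steps):
        sigma_y = sigma_y0 + H * eps_vp
        if abs(sigma) >= sigma_y:
            eps_vp_dot = (sigma / eta) * (1.0 - sigma_y / abs(sigma))
        else:
            eps_vp_dot = 0.0
        sigma += E * (total_strain_rate - eps_vp_dot) * dt
        eps_vp += eps_vp_dot * dt
    return sigma, eps_vp

# Placeholder parameters, for illustration only:
print(hardening_viscoplastic_1d(total_strain_rate=1e-3, dt=0.001, n_steps=2000,
                                E=200e9, eta=1e12, sigma_y0=250e6, H=2e9))
```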
== Strain-rate dependent plasticity models ==
Classical phenomenological viscoplasticity models for small strains are usually categorized into two types:
the Perzyna formulation
the Duvaut–Lions formulation
=== Perzyna formulation ===
In the Perzyna formulation the plastic strain rate is assumed to be given by a constitutive relation of the form
\dot{\varepsilon}_{\mathrm{vp}} = \cfrac{\left\langle f(\boldsymbol{\sigma}, \boldsymbol{q})\right\rangle}{\tau}\cfrac{\partial f}{\partial\boldsymbol{\sigma}} = \begin{cases} \cfrac{f(\boldsymbol{\sigma}, \boldsymbol{q})}{\tau}\cfrac{\partial f}{\partial\boldsymbol{\sigma}} & \mathrm{if}~f(\boldsymbol{\sigma}, \boldsymbol{q}) > 0 \\ 0 & \mathrm{otherwise} \end{cases}
where f(\cdot,\cdot) is a yield function, \boldsymbol{\sigma} is the Cauchy stress, \boldsymbol{q} is a set of internal variables (such as the plastic strain \boldsymbol{\varepsilon}_{\mathrm{vp}}), and \tau is a relaxation time. The notation \langle\dots\rangle denotes the Macaulay brackets. The flow rule used in various versions of the Chaboche model is a special case of Perzyna's flow rule and has the form
\dot{\varepsilon}_{\mathrm{vp}} = \left\langle\frac{f}{f_0}\right\rangle^{n}\mathrm{sign}(\boldsymbol{\sigma} - \boldsymbol{\chi})
where f_0 is the quasistatic value of f and \boldsymbol{\chi} is a backstress. Several models for the backstress also go by the name Chaboche model.
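A concrete one-dimensional instance of the Perzyna rule with a von Mises-type yield function is sketched below, using the yield function unnormalized exactly as in the relation above. The linear-hardening internal variable and the numbers are assumptions made only to show the structure of the update.

```python
def macaulay(x):
    """Macaulay bracket <x> = max(x, 0)."""
    return max(x, 0.0)

def perzyna_strain_rate_1d(sigma, eps_vp, sigma_y0, H, tau):
    """1-D Perzyna viscoplastic strain rate with yield function
    f = |sigma| - (sigma_y0 + H * eps_vp) and df/dsigma = sign(sigma)."""
    f = abs(sigma) - (sigma_y0 + H * eps_vp)
    dfdsigma = 1.0 if sigma >= 0.0 else -1.0
    return (macaulay(f) / tau) * dfdsigma

# Placeholder state and parameters, for illustration only:
print(perzyna_strain_rate_1d(sigma=300e6, eps_vp=0.01,
                             sigma_y0=250e6, H=1e9, tau=1e12))
```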
=== Duvaut–Lions formulation ===
The Duvaut–Lions formulation is equivalent to the Perzyna formulation and may be expressed as
\dot{\varepsilon}_{\mathrm{vp}} = \begin{cases} \mathsf{C}^{-1}:\cfrac{\boldsymbol{\sigma} - \mathcal{P}\boldsymbol{\sigma}}{\tau} & \mathrm{if}~f(\boldsymbol{\sigma}, \boldsymbol{q}) > 0 \\ 0 & \mathrm{otherwise} \end{cases}
where \mathsf{C} is the elastic stiffness tensor, and \mathcal{P}\boldsymbol{\sigma} is the closest point projection of the stress state onto the boundary of the region that bounds all possible elastic stress states. The quantity \mathcal{P}\boldsymbol{\sigma} is typically found from the rate-independent solution to a plasticity problem.
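In one dimension, the closest-point projection simply clips the stress back to the current yield value, which makes the Duvaut–Lions rate easy to write down. The sketch below is a minimal scalar version; the parameter values are placeholders.

```python
def duvaut_lions_strain_rate_1d(sigma, sigma_y, E, tau):
    """1-D Duvaut-Lions viscoplastic strain rate.

    The projected stress P(sigma) is the closest admissible stress, i.e.
    sigma clipped to the interval [-sigma_y, sigma_y]."""
    projected = max(-sigma_y, min(sigma_y, sigma))
    if abs(sigma) <= sigma_y:
        return 0.0
    return (sigma - projected) / (E * tau)   # C^{-1} reduces to 1/E in 1-D

# Placeholder state and parameters, for illustration only:
print(duvaut_lions_strain_rate_1d(sigma=300e6, sigma_y=250e6, E=200e9, tau=0.5))
```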
=== Flow stress models ===
The quantity f(\boldsymbol{\sigma}, \boldsymbol{q}) represents the evolution of the yield surface. The yield function f is often expressed as an equation consisting of some invariant of stress and a model for the yield stress (or plastic flow stress). An example is von Mises or J_2 plasticity. In those situations the plastic strain rate is calculated in the same manner as in rate-independent plasticity. In other situations, the yield stress model provides a direct means of computing the plastic strain rate.
Numerous empirical and semi-empirical flow stress models are used in computational plasticity. The following temperature- and strain-rate-dependent models provide a sampling of the models in current use:
the Johnson–Cook model
the Steinberg–Cochran–Guinan–Lund model.
the Zerilli–Armstrong model.
the Mechanical threshold stress model.
the Preston–Tonks–Wallace model.
== References == | Wikipedia/Preston-Tonks-Wallace_plasticity_model |
Viscoplasticity is a theory in continuum mechanics that describes the rate-dependent inelastic behavior of solids. Rate-dependence in this context means that the deformation of the material depends on the rate at which loads are applied. The inelastic behavior that is the subject of viscoplasticity is plastic deformation which means that the material undergoes unrecoverable deformations when a load level is reached. Rate-dependent plasticity is important for transient plasticity calculations. The main difference between rate-independent plastic and viscoplastic material models is that the latter exhibit not only permanent deformations after the application of loads but continue to undergo a creep flow as a function of time under the influence of the applied load.
The elastic response of viscoplastic materials can be represented in one-dimension by Hookean spring elements. Rate-dependence can be represented by nonlinear dashpot elements in a manner similar to viscoelasticity. Plasticity can be accounted for by adding sliding frictional elements as shown in Figure 1. In the figure
E
{\displaystyle E}
is the modulus of elasticity,
λ
{\displaystyle \lambda }
is the viscosity parameter and
N
{\displaystyle N}
is a power-law type parameter that represents non-linear dashpot
[
σ
(
d
ε
/
d
t
)
=
σ
=
λ
(
d
ε
/
d
t
)
1
/
N
]
{\displaystyle [\sigma (\mathrm {d} \varepsilon /\mathrm {d} t)=\sigma =\lambda (\mathrm {d} \varepsilon /\mathrm {d} t)^{1/N}]}
. The sliding element can have a yield stress (
σ
y
{\displaystyle \sigma _{y}}
) that is strain rate dependent, or even constant, as shown in Figure 1c.
Viscoplasticity is usually modeled in three-dimensions using overstress models of the Perzyna or Duvaut-Lions types. In these models, the stress is allowed to increase beyond the rate-independent yield surface upon application of a load and then allowed to relax back to the yield surface over time. The yield surface is usually assumed not to be rate-dependent in such models. An alternative approach is to add a strain rate dependence to the yield stress and use the techniques of rate independent plasticity to calculate the response of a material.
For metals and alloys, viscoplasticity is the macroscopic behavior caused by a mechanism linked to the movement of dislocations in grains, with superposed effects of inter-crystalline gliding. The mechanism usually becomes dominant at temperatures greater than approximately one third of the absolute melting temperature. However, certain alloys exhibit viscoplasticity at room temperature (300 K). For polymers, wood, and bitumen, the theory of viscoplasticity is required to describe behavior beyond the limit of elasticity or viscoelasticity.
In general, viscoplasticity theories are useful in areas such as:
the calculation of permanent deformations,
the prediction of the plastic collapse of structures,
the investigation of stability,
crash simulations,
systems exposed to high temperatures such as turbines in engines, e.g. a power plant,
dynamic problems and systems exposed to high strain rates.
== History ==
Research on plasticity theories started in 1864 with the work of Henri Tresca, Saint Venant (1870) and Levy (1871) on the maximum shear criterion. An improved plasticity model was presented in 1913 by Von Mises which is now referred to as the von Mises yield criterion. In viscoplasticity, the development of a mathematical model heads back to 1910 with the representation of primary creep by Andrade's law. In 1929, Norton developed a one-dimensional dashpot model which linked the rate of secondary creep to the stress. In 1934, Odqvist generalized Norton's law to the multi-axial case.
Concepts such as the normality of plastic flow to the yield surface and flow rules for plasticity were introduced by Prandtl (1924) and Reuss (1930). In 1932, Hohenemser and Prager proposed the first model for slow viscoplastic flow. This model provided a relation between the deviatoric stress and the strain rate for an incompressible Bingham solid However, the application of these theories did not begin before 1950, where limit theorems were discovered.
In 1960, the first IUTAM Symposium "Creep in Structures" organized by Hoff provided a major development in viscoplasticity with the works of Hoff, Rabotnov, Perzyna, Hult, and Lemaitre for the isotropic hardening laws, and those of Kratochvil, Malinini and Khadjinsky, Ponter and Leckie, and Chaboche for the kinematic hardening laws. Perzyna, in 1963, introduced a viscosity coefficient that is temperature and time dependent. The formulated models were supported by the thermodynamics of irreversible processes and the phenomenological standpoint. The ideas presented in these works have been the basis for most subsequent research into rate-dependent plasticity.
== Phenomenology ==
For a qualitative analysis, several characteristic tests are performed to describe the phenomenology of viscoplastic materials. Some examples of these tests are
hardening tests at constant stress or strain rate,
creep tests at constant force, and
stress relaxation at constant elongation.
=== Strain hardening test ===
One consequence of yielding is that as plastic deformation proceeds, an increase in stress is required to produce additional strain. This phenomenon is known as Strain/Work hardening. For a viscoplastic material the hardening curves are not significantly different from those of rate-independent plastic material. Nevertheless, three essential differences can be observed.
At the same strain, the higher the rate of strain the higher the stress
A change in the rate of strain during the test results in an immediate change in the stress–strain curve.
The concept of a plastic yield limit is no longer strictly applicable.
The hypothesis of partitioning the strains by decoupling the elastic and plastic parts is still applicable where the strains are small, i.e.,
ε
=
ε
e
+
ε
v
p
{\displaystyle {\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}_{\mathrm {e} }+{\boldsymbol {\varepsilon }}_{\mathrm {vp} }}
where
ε
e
{\displaystyle {\boldsymbol {\varepsilon }}_{\mathrm {e} }}
is the elastic strain and
ε
v
p
{\displaystyle {\boldsymbol {\varepsilon }}_{\mathrm {vp} }}
is the viscoplastic strain. To obtain the stress–strain behavior shown in blue in the figure, the material is initially loaded at a strain rate of 0.1/s. The strain rate is then instantaneously raised to 100/s and held constant at that value for some time. At the end of that time period the strain rate is dropped instantaneously back to 0.1/s and the cycle is continued for increasing values of strain. There is clearly a lag between the strain-rate change and the stress response. This lag is modeled quite accurately by overstress models (such as the Perzyna model) but not by models of rate-independent plasticity that have a rate-dependent yield stress.
=== Creep test ===
Creep is the tendency of a solid material to slowly move or deform permanently under constant stresses. Creep tests measure the strain response due to a constant stress as shown in Figure 3. The classical creep curve represents the evolution of strain as a function of time in a material subjected to uniaxial stress at a constant temperature. The creep test, for instance, is performed by applying a constant force/stress and analyzing the strain response of the system. In general, as shown in Figure 3b this curve usually shows three phases or periods of behavior:
A primary creep stage, also known as transient creep, is the starting stage during which hardening of the material leads to a decrease in the rate of flow which is initially very high.
(
0
≤
ε
≤
ε
1
)
{\displaystyle (0\leq {\boldsymbol {\varepsilon }}\leq {\boldsymbol {\varepsilon }}_{1})}
.
The secondary creep stage, also known as the steady state, is where the strain rate is constant.
(
ε
1
≤
ε
≤
ε
2
)
{\displaystyle ({\boldsymbol {\varepsilon }}_{1}\leq {\boldsymbol {\varepsilon }}\leq {\boldsymbol {\varepsilon }}_{2})}
.
A tertiary creep phase in which there is an increase in the strain rate up to the fracture strain.
(
ε
2
≤
ε
≤
ε
R
)
{\displaystyle ({\boldsymbol {\varepsilon }}_{2}\leq {\boldsymbol {\varepsilon }}\leq {\boldsymbol {\varepsilon }}_{R})}
.
=== Relaxation test ===
As shown in Figure 4, the relaxation test is defined as the stress response due to a constant strain for a period of time. In viscoplastic materials, relaxation tests demonstrate the stress relaxation in uniaxial loading at a constant strain. In fact, these tests characterize the viscosity and can be used to determine the relation which exists between the stress and the rate of viscoplastic strain. The decomposition of strain rate is
$$\frac{\mathrm{d}\boldsymbol{\varepsilon}}{\mathrm{d}t}=\frac{\mathrm{d}\boldsymbol{\varepsilon}_{\mathrm{e}}}{\mathrm{d}t}+\frac{\mathrm{d}\boldsymbol{\varepsilon}_{\mathrm{vp}}}{\mathrm{d}t}\,.$$
The elastic part of the strain rate is given by
$$\frac{\mathrm{d}\boldsymbol{\varepsilon}_{\mathrm{e}}}{\mathrm{d}t}={\mathsf{E}}^{-1}\,\frac{\mathrm{d}\boldsymbol{\sigma}}{\mathrm{d}t}\,.$$
For the flat region of the strain–time curve, the total strain rate is zero. Hence we have
$$\frac{\mathrm{d}\boldsymbol{\varepsilon}_{\mathrm{vp}}}{\mathrm{d}t}=-{\mathsf{E}}^{-1}\,\frac{\mathrm{d}\boldsymbol{\sigma}}{\mathrm{d}t}\,.$$
Therefore, the relaxation curve can be used to determine the rate of viscoplastic strain and hence the viscosity of the dashpot in a one-dimensional viscoplastic material model. The residual value that is reached when the stress has plateaued at the end of a relaxation test corresponds to the upper limit of elasticity. For some materials, such as rock salt, such an upper limit of elasticity occurs at a very small value of stress, and relaxation tests can be continued for more than a year without any observable plateau in the stress.
It is important to note that relaxation tests are extremely difficult to perform because maintaining the condition $\mathrm{d}\boldsymbol{\varepsilon}/\mathrm{d}t=0$ in a test requires considerable delicacy.
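The following minimal sketch illustrates how a relaxation curve could, in principle, be used to back out the dashpot viscosity of a one-dimensional linear Maxwell-type viscoplastic element. The modulus, the viscosity used to generate the synthetic data, and the exponential-decay assumption are illustrative choices, not values taken from the text.
<syntaxhighlight lang="python">
import numpy as np

# Sketch: estimating the dashpot viscosity of a 1-D linear Maxwell-type
# viscoplastic element from a relaxation test. During relaxation the total
# strain rate is zero, so d(sigma)/dt = -(E/eta)*sigma, i.e. an exponential decay.
# E, eta_true and the synthetic data are assumed values.

E = 200e9          # Pa, assumed elastic modulus
eta_true = 1.0e12  # Pa.s, assumed viscosity used to generate synthetic data

t = np.linspace(0.0, 50.0, 500)            # s
sigma = 100e6 * np.exp(-E * t / eta_true)  # Pa, synthetic relaxation curve

# Viscoplastic strain rate from the stress rate: eps_vp_dot = -sigma_dot / E
sigma_dot = np.gradient(sigma, t)
eps_vp_dot = -sigma_dot / E

# For a linear dashpot, sigma = eta * eps_vp_dot, so eta can be recovered pointwise
eta_est = np.median(sigma[1:-1] / eps_vp_dot[1:-1])
print(f"estimated viscosity: {eta_est:.3e} Pa.s")
</syntaxhighlight>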
== Rheological models of viscoplasticity ==
One-dimensional constitutive models for viscoplasticity based on spring-dashpot-slider elements include the perfectly viscoplastic solid, the elastic perfectly viscoplastic solid, and the elastoviscoplastic hardening solid. The elements may be connected in series or in parallel. In models where the elements are connected in series the strain is additive while the stress is equal in each element. In parallel connections, the stress is additive while the strain is equal in each element. Many of these one-dimensional models can be generalized to three dimensions for the small strain regime. In the subsequent discussion, the time rates of strain and stress are written as $\dot{\boldsymbol{\varepsilon}}$ and $\dot{\boldsymbol{\sigma}}$, respectively.
=== Perfectly viscoplastic solid (Norton-Hoff model) ===
In a perfectly viscoplastic solid, also called the Norton-Hoff model of viscoplasticity, the stress (as for viscous fluids) is a function of the rate of permanent strain. The effect of elasticity is neglected in the model, i.e.,
$\boldsymbol{\varepsilon}_{\mathrm{e}}=0$, and hence there is no initial yield stress, i.e., $\sigma_{y}=0$. The viscous dashpot has a response given by
$$\boldsymbol{\sigma}=\eta\,\dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}}\quad\implies\quad\dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}}=\frac{\boldsymbol{\sigma}}{\eta}$$
where $\eta$ is the viscosity of the dashpot. In the Norton-Hoff model the viscosity $\eta$ is a nonlinear function of the applied stress and is given by
$$\eta=\lambda\left[\frac{\lambda}{\|\boldsymbol{\sigma}\|}\right]^{N-1}$$
where $N$ is a fitting parameter, $\lambda$ is the kinematic viscosity of the material, and $\|\boldsymbol{\sigma}\|=\sqrt{\boldsymbol{\sigma}:\boldsymbol{\sigma}}=\sqrt{\sigma_{ij}\sigma_{ij}}$. Then the viscoplastic strain rate is given by the relation
$$\dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}}=\frac{\boldsymbol{\sigma}}{\lambda}\left[\frac{\|\boldsymbol{\sigma}\|}{\lambda}\right]^{N-1}$$
In one-dimensional form, the Norton-Hoff model can be expressed as
$$\sigma=\lambda\,\left(\dot{\varepsilon}_{\mathrm{vp}}\right)^{1/N}$$
When $N=1.0$ the solid is viscoelastic.
If we assume that plastic flow is isochoric (volume preserving), then the above relation can be expressed in the more familiar form
$$\boldsymbol{s}=2K\,\left(\sqrt{3}\,\dot{\varepsilon}_{\mathrm{eq}}\right)^{m-1}\dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}}$$
where $\boldsymbol{s}$ is the deviatoric stress tensor, $\dot{\varepsilon}_{\mathrm{eq}}$ is the von Mises equivalent strain rate, and $K,m$ are material parameters. The equivalent strain rate is defined as
$$\dot{\bar{\epsilon}}=\sqrt{\tfrac{2}{3}\,\dot{\bar{\bar{\epsilon}}}:\dot{\bar{\bar{\epsilon}}}}$$
These models can be applied to metals and alloys at temperatures higher than two thirds of their absolute melting point (in kelvins) and to polymers/asphalt at elevated temperature. The responses for strain hardening, creep, and relaxation tests of such materials are shown in Figure 6.
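As a quick illustration of the one-dimensional Norton-Hoff relation above, the sketch below evaluates $\sigma=\lambda(\dot{\varepsilon}_{\mathrm{vp}})^{1/N}$ and its inverse for a few strain rates. The parameter values are assumed for illustration only, not fitted data.
<syntaxhighlight lang="python">
# Sketch of the 1-D Norton-Hoff relation sigma = lam * (eps_vp_dot)**(1/N).
# lam and N are illustrative assumptions.

lam = 300.0e6   # Pa.s**(1/N), assumed material parameter
N = 5.0         # assumed rate-sensitivity exponent

def norton_hoff_stress(eps_vp_rate):
    """Flow stress (Pa) for a given viscoplastic strain rate (1/s)."""
    return lam * eps_vp_rate ** (1.0 / N)

def norton_hoff_strain_rate(sigma):
    """Inverse relation: viscoplastic strain rate (1/s) for a given stress (Pa)."""
    return (sigma / lam) ** N

for rate in (1e-4, 1e-2, 1.0, 1e2):
    print(f"eps_vp_dot = {rate:8.1e} 1/s -> sigma = {norton_hoff_stress(rate)/1e6:8.2f} MPa")
</syntaxhighlight>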
=== Elastic perfectly viscoplastic solid (Bingham–Norton model) ===
Two types of elementary approaches can be used to build up an elastic-perfectly viscoplastic model. In the first situation, the sliding friction element and the dashpot are arranged in parallel and then connected in series to the elastic spring as shown in Figure 7. This model is called the Bingham–Maxwell model (by analogy with the Maxwell model and the Bingham model) or the Bingham–Norton model. In the second situation, all three elements are arranged in parallel. Such a model is called a Bingham–Kelvin model by analogy with the Kelvin model.
For elastic-perfectly viscoplastic materials, the elastic strain is no longer considered negligible but the rate of plastic strain is only a function of the initial yield stress and there is no influence of hardening. The sliding element represents a constant yielding stress when the elastic limit is exceeded irrespective of the strain. The model can be expressed as
$$\begin{aligned}&\boldsymbol{\sigma}={\mathsf{E}}\,\boldsymbol{\varepsilon} && \mathrm{for}~\|\boldsymbol{\sigma}\|<\sigma_{y}\\&\dot{\boldsymbol{\varepsilon}}=\dot{\boldsymbol{\varepsilon}}_{\mathrm{e}}+\dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}}={\mathsf{E}}^{-1}\,\dot{\boldsymbol{\sigma}}+\frac{\boldsymbol{\sigma}}{\eta}\left[1-\frac{\sigma_{y}}{\|\boldsymbol{\sigma}\|}\right] && \mathrm{for}~\|\boldsymbol{\sigma}\|\geq\sigma_{y}\end{aligned}$$
where $\eta$ is the viscosity of the dashpot element. If the dashpot element has a response that is of the Norton form
$$\frac{\boldsymbol{\sigma}}{\eta}=\frac{\boldsymbol{\sigma}}{\lambda}\left[\frac{\|\boldsymbol{\sigma}\|}{\lambda}\right]^{N-1}$$
we get the Bingham–Norton model
$$\dot{\boldsymbol{\varepsilon}}={\mathsf{E}}^{-1}\,\dot{\boldsymbol{\sigma}}+\frac{\boldsymbol{\sigma}}{\lambda}\left[\frac{\|\boldsymbol{\sigma}\|}{\lambda}\right]^{N-1}\left[1-\frac{\sigma_{y}}{\|\boldsymbol{\sigma}\|}\right]\quad\mathrm{for}~\|\boldsymbol{\sigma}\|\geq\sigma_{y}$$
Other expressions for the strain rate can also be observed in the literature with the general form
$$\dot{\boldsymbol{\varepsilon}}={\mathsf{E}}^{-1}\,\dot{\boldsymbol{\sigma}}+f(\boldsymbol{\sigma},\sigma_{y})\,\boldsymbol{\sigma}\quad\mathrm{for}~\|\boldsymbol{\sigma}\|\geq\sigma_{y}$$
The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 8.
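A minimal sketch of how the one-dimensional Bingham–Maxwell (elastic perfectly viscoplastic) model above could be integrated in time under a constant applied strain rate is given below. The elastic modulus, yield stress, viscosity, and applied rate are assumed values chosen only for illustration.
<syntaxhighlight lang="python">
# Sketch: explicit time integration of the 1-D Bingham-Maxwell model under a
# constant total strain rate. All parameter values are assumed placeholders.

E = 200e9        # Pa, elastic modulus
sigma_y = 250e6  # Pa, yield stress
eta = 1.0e11     # Pa.s, dashpot viscosity
eps_rate = 1e-3  # 1/s, applied total strain rate

dt = 1e-3
sigma = 0.0
for step in range(20000):
    # viscoplastic flow only above the yield stress (overstress form of the model)
    if abs(sigma) >= sigma_y:
        eps_vp_rate = (sigma / eta) * (1.0 - sigma_y / abs(sigma))
    else:
        eps_vp_rate = 0.0
    # the elastic strain rate drives the stress rate
    sigma += E * (eps_rate - eps_vp_rate) * dt

# At steady state the overstress balances the applied rate: sigma -> sigma_y + eta*eps_rate
print(f"stress after {20000*dt:.0f} s: {sigma/1e6:.1f} MPa "
      f"(steady-state estimate {(sigma_y + eta*eps_rate)/1e6:.1f} MPa)")
</syntaxhighlight>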
=== Elastoviscoplastic hardening solid ===
An elastic-viscoplastic material with strain hardening is described by equations similar to those for an elastic-viscoplastic material with perfect plasticity. However, in this case the stress depends both on the plastic strain rate and on the plastic strain itself. For an elastoviscoplastic material the stress, after exceeding the yield stress, continues to increase beyond the initial yielding point. This implies that the yield stress in the sliding element increases with strain and the model may be expressed in generic terms as
$$\begin{aligned}&\boldsymbol{\varepsilon}=\boldsymbol{\varepsilon}_{\mathrm{e}}={\mathsf{E}}^{-1}\,\boldsymbol{\sigma} && \mathrm{for}~\|\boldsymbol{\sigma}\|<\sigma_{y}\\&\dot{\boldsymbol{\varepsilon}}=\dot{\boldsymbol{\varepsilon}}_{\mathrm{e}}+\dot{\boldsymbol{\varepsilon}}_{\mathrm{vp}}={\mathsf{E}}^{-1}\,\dot{\boldsymbol{\sigma}}+f(\boldsymbol{\sigma},\sigma_{y},\boldsymbol{\varepsilon}_{\mathrm{vp}})\,\boldsymbol{\sigma} && \mathrm{for}~\|\boldsymbol{\sigma}\|\geq\sigma_{y}\end{aligned}$$
This model is adopted for metals and alloys at medium and higher temperatures and for wood under high loads. The responses for strain hardening, creep, and relaxation tests of such a material are shown in Figure 9.
== Strain-rate dependent plasticity models ==
Classical phenomenological viscoplasticity models for small strains are usually categorized into two types:
the Perzyna formulation
the Duvaut–Lions formulation
=== Perzyna formulation ===
In the Perzyna formulation the plastic strain rate is assumed to be given by a constitutive relation of the form
$$\dot{\varepsilon}_{\mathrm{vp}}=\frac{\left\langle f(\boldsymbol{\sigma},\boldsymbol{q})\right\rangle}{\tau}\frac{\partial f}{\partial\boldsymbol{\sigma}}={\begin{cases}\dfrac{f(\boldsymbol{\sigma},\boldsymbol{q})}{\tau}\dfrac{\partial f}{\partial\boldsymbol{\sigma}}&\mathrm{if}~f(\boldsymbol{\sigma},\boldsymbol{q})>0\\0&\mathrm{otherwise}\end{cases}}$$
where $f(\cdot,\cdot)$ is a yield function, $\boldsymbol{\sigma}$ is the Cauchy stress, $\boldsymbol{q}$ is a set of internal variables (such as the plastic strain $\boldsymbol{\varepsilon}_{\mathrm{vp}}$), and $\tau$ is a relaxation time. The notation $\langle\dots\rangle$ denotes the Macaulay brackets. The flow rule used in various versions of the Chaboche model is a special case of Perzyna's flow rule and has the form
$$\dot{\varepsilon}_{\mathrm{vp}}=\left\langle\frac{f}{f_{0}}\right\rangle^{n}\operatorname{sign}(\boldsymbol{\sigma}-\boldsymbol{\chi})$$
where $f_{0}$ is the quasistatic value of $f$ and $\boldsymbol{\chi}$ is a backstress. Several models for the backstress also go by the name Chaboche model.
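The sketch below shows a one-dimensional Perzyna-type overstress evaluation. The dimensionless form of the yield function and the unit flow direction are normalization choices made here so that the units work out; they, and the parameter values, are assumptions for illustration rather than the article's specific model.
<syntaxhighlight lang="python">
# Sketch of a 1-D Perzyna-type overstress flow rule. The yield function is
# taken in the dimensionless form f = |sigma|/sigma_y - 1 and the flow
# direction is sign(sigma), so <f>**n / tau has units of strain rate.
# These normalizations and all parameter values are assumed.

def macaulay(x):
    """Macaulay bracket <x> = max(x, 0)."""
    return max(x, 0.0)

def perzyna_strain_rate(sigma, sigma_y=250e6, tau=10.0, n=1.0):
    """Viscoplastic strain rate eps_vp_dot = (<f>**n / tau) * sign(sigma)."""
    f = abs(sigma) / sigma_y - 1.0              # dimensionless overstress
    direction = 1.0 if sigma >= 0.0 else -1.0
    return macaulay(f) ** n / tau * direction

for s in (200e6, 260e6, 300e6):
    print(f"sigma = {s/1e6:5.0f} MPa -> eps_vp_dot = {perzyna_strain_rate(s):.3e} 1/s")
</syntaxhighlight>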
=== Duvaut–Lions formulation ===
The Duvaut–Lions formulation is equivalent to the Perzyna formulation and may be expressed as
$$\dot{\varepsilon}_{\mathrm{vp}}={\begin{cases}{\mathsf{C}}^{-1}:\dfrac{\boldsymbol{\sigma}-\mathcal{P}\boldsymbol{\sigma}}{\tau}&\mathrm{if}~f(\boldsymbol{\sigma},\boldsymbol{q})>0\\0&\mathrm{otherwise}\end{cases}}$$
where ${\mathsf{C}}$ is the elastic stiffness tensor and $\mathcal{P}\boldsymbol{\sigma}$ is the closest point projection of the stress state onto the boundary of the region that bounds all possible elastic stress states. The quantity $\mathcal{P}\boldsymbol{\sigma}$ is typically found from the rate-independent solution to a plasticity problem.
=== Flow stress models ===
The quantity $f(\boldsymbol{\sigma},\boldsymbol{q})$ represents the evolution of the yield surface. The yield function $f$ is often expressed as an equation consisting of some invariant of stress and a model for the yield stress (or plastic flow stress). An example is von Mises or $J_{2}$ plasticity. In those situations the plastic strain rate is calculated in the same manner as in rate-independent plasticity. In other situations, the yield stress model provides a direct means of computing the plastic strain rate.
Numerous empirical and semi-empirical flow stress models are used in computational plasticity. The following temperature and strain-rate dependent models provide a sampling of the models in current use:
the Johnson–Cook model
the Steinberg–Cochran–Guinan–Lund model.
the Zerilli–Armstrong model.
the Mechanical threshold stress model.
the Preston–Tonks–Wallace model.
The Johnson–Cook (JC) model is purely empirical and is the most widely used of the five. However, this model exhibits an unrealistically small strain-rate dependence at high temperatures. The Steinberg–Cochran–Guinan–Lund (SCGL) model is semi-empirical. The model is purely empirical and strain-rate independent at high strain-rates; a dislocation-based extension is used at low strain-rates. The SCGL model is used extensively by the shock physics community. The Zerilli–Armstrong (ZA) model is a simple physically based model that has been used extensively. A more complex model that is based on ideas from dislocation dynamics is the Mechanical Threshold Stress (MTS) model. This model has been used to model the plastic deformation of copper, tantalum, alloys of steel, and aluminum alloys. However, the MTS model is limited to strain-rates less than around $10^{7}$/s. The Preston–Tonks–Wallace (PTW) model is also physically based and has a form similar to the MTS model. However, the PTW model has components that can model plastic deformation in the overdriven shock regime (strain-rates greater than $10^{7}$/s). Hence this model is valid for the largest range of strain-rates among the five flow stress models.
==== Johnson–Cook flow stress model ====
The Johnson–Cook (JC) model is purely empirical and gives the following relation for the flow stress $\sigma_{y}$:
$$\text{(1)}\qquad \sigma_{y}(\varepsilon_{\mathrm{p}},\dot{\varepsilon}_{\mathrm{p}},T)=\left[A+B(\varepsilon_{\mathrm{p}})^{n}\right]\left[1+C\ln(\dot{\varepsilon}_{\mathrm{p}}^{*})\right]\left[1-(T^{*})^{m}\right]$$
where $\varepsilon_{\mathrm{p}}$ is the equivalent plastic strain, $\dot{\varepsilon}_{\mathrm{p}}$ is the plastic strain-rate, and $A,B,C,n,m$ are material constants.
The normalized strain-rate and temperature in equation (1) are defined as
$$\dot{\varepsilon}_{\mathrm{p}}^{*}:=\frac{\dot{\varepsilon}_{\mathrm{p}}}{\dot{\varepsilon}_{\mathrm{p0}}}\qquad\text{and}\qquad T^{*}:=\frac{T-T_{0}}{T_{m}-T_{0}}$$
where $\dot{\varepsilon}_{\mathrm{p0}}$ is the effective plastic strain-rate of the quasi-static test used to determine the yield and hardening parameters $A$, $B$ and $n$. This is not, as is often thought, just a parameter to make $\dot{\varepsilon}_{\mathrm{p}}^{*}$ non-dimensional. $T_{0}$ is a reference temperature, and $T_{m}$ is a reference melt temperature. For conditions where $T^{*}<0$, we assume that $m=1$.
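A direct evaluation of equation (1) is straightforward; the sketch below uses an assumed, steel-like parameter set purely for illustration (the values are not authoritative material data).
<syntaxhighlight lang="python">
import math

# Sketch evaluating the Johnson-Cook flow stress of equation (1).
# All parameter values below are assumed, illustrative placeholders.

A, B, n = 350e6, 275e6, 0.36   # Pa, Pa, - : quasi-static yield and hardening terms
C, m = 0.022, 1.0              # rate-sensitivity and thermal-softening exponents
eps_p0_dot = 1.0               # 1/s, reference (quasi-static) strain rate
T0, Tm = 293.0, 1800.0         # K, reference and melt temperatures

def johnson_cook(eps_p, eps_p_dot, T):
    """Flow stress sigma_y(eps_p, eps_p_dot, T) per equation (1)."""
    rate_star = eps_p_dot / eps_p0_dot
    T_star = (T - T0) / (Tm - T0)
    m_eff = 1.0 if T_star < 0.0 else m          # the text assumes m = 1 when T* < 0
    thermal = 1.0 - math.copysign(abs(T_star) ** m_eff, T_star)
    return (A + B * eps_p ** n) * (1.0 + C * math.log(rate_star)) * thermal

print(f"sigma_y = {johnson_cook(0.1, 1.0e3, 500.0) / 1e6:.1f} MPa")
</syntaxhighlight>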
==== Steinberg–Cochran–Guinan–Lund flow stress model ====
The Steinberg–Cochran–Guinan–Lund (SCGL) model is a semi-empirical model that was developed by Steinberg et al. for high strain-rate situations and extended to low strain-rates and bcc materials by Steinberg and Lund. The flow stress in this model is given by
$$\text{(2)}\qquad \sigma_{y}(\varepsilon_{\mathrm{p}},\dot{\varepsilon}_{\mathrm{p}},T)=\left[\sigma_{a}f(\varepsilon_{\mathrm{p}})+\sigma_{t}(\dot{\varepsilon}_{\mathrm{p}},T)\right]\frac{\mu(p,T)}{\mu_{0}};\quad\sigma_{a}f\leq\sigma_{\text{max}}~~\text{and}~~\sigma_{t}\leq\sigma_{p}$$
where $\sigma_{a}$ is the athermal component of the flow stress, $f(\varepsilon_{\mathrm{p}})$ is a function that represents strain hardening, $\sigma_{t}$ is the thermally activated component of the flow stress, $\mu(p,T)$ is the pressure- and temperature-dependent shear modulus, and $\mu_{0}$ is the shear modulus at standard temperature and pressure. The saturation value of the athermal stress is $\sigma_{\text{max}}$. The saturation of the thermally activated stress is the Peierls stress ($\sigma_{p}$). The shear modulus for this model is usually computed with the Steinberg–Cochran–Guinan shear modulus model.
The strain hardening function ($f$) has the form
$$f(\varepsilon_{\mathrm{p}})=[1+\beta(\varepsilon_{\mathrm{p}}+\varepsilon_{\mathrm{p}i})]^{n}$$
where $\beta,n$ are work hardening parameters, and $\varepsilon_{\mathrm{p}i}$ is the initial equivalent plastic strain.
The thermal component ($\sigma_{t}$) is computed using a bisection algorithm from the following equation:
$$\dot{\varepsilon}_{\mathrm{p}}=\left[\frac{1}{C_{1}}\exp\left[\frac{2U_{k}}{k_{b}T}\left(1-\frac{\sigma_{t}}{\sigma_{p}}\right)^{2}\right]+\frac{C_{2}}{\sigma_{t}}\right]^{-1};\quad\sigma_{t}\leq\sigma_{p}$$
where $2U_{k}$ is the energy to form a kink-pair in a dislocation segment of length $L_{d}$, $k_{b}$ is the Boltzmann constant, and $\sigma_{p}$ is the Peierls stress. The constants $C_{1},C_{2}$ are given by the relations
$$C_{1}:=\frac{\rho_{d}L_{d}ab^{2}\nu}{2w^{2}};\quad C_{2}:=\frac{D}{\rho_{d}b^{2}}$$
where $\rho_{d}$ is the dislocation density, $L_{d}$ is the length of a dislocation segment, $a$ is the distance between Peierls valleys, $b$ is the magnitude of the Burgers vector, $\nu$ is the Debye frequency, $w$ is the width of a kink loop, and $D$ is the drag coefficient.
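The bisection solve for the thermally activated stress $\sigma_{t}$ mentioned above can be sketched as follows. The numerical constants are placeholders chosen only so that the example runs; they are not calibrated SCGL parameters.
<syntaxhighlight lang="python">
import math

# Sketch: solving the SCGL kinetic equation for sigma_t by bisection.
# C1, C2, 2U_k/k_b, sigma_p and T are assumed placeholder values.

C1 = 0.71e6               # 1/s
C2 = 0.012e6              # Pa.s
two_Uk_over_kb = 3.66e3   # K (2*U_k / k_b expressed as a temperature)
sigma_p = 20e6            # Pa, Peierls stress
T = 300.0                 # K

def strain_rate(sigma_t):
    """Right-hand side of the SCGL kinetic equation for a trial sigma_t."""
    term1 = (1.0 / C1) * math.exp(two_Uk_over_kb / T * (1.0 - sigma_t / sigma_p) ** 2)
    term2 = C2 / sigma_t
    return 1.0 / (term1 + term2)

def solve_sigma_t(target_rate, tol=1.0):
    """Bisection for sigma_t in (0, sigma_p]; strain_rate is monotonic in sigma_t."""
    lo, hi = 1.0, sigma_p
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if strain_rate(mid) < target_rate:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"sigma_t for 1e3 1/s: {solve_sigma_t(1.0e3) / 1e6:.2f} MPa")
</syntaxhighlight>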
==== Zerilli–Armstrong flow stress model ====
The Zerilli–Armstrong (ZA) model is based on simplified dislocation mechanics. The general form of the equation for the flow stress is
$$\text{(3)}\qquad \sigma_{y}(\varepsilon_{\mathrm{p}},\dot{\varepsilon}_{\mathrm{p}},T)=\sigma_{a}+B\exp(-\beta T)+B_{0}\sqrt{\varepsilon_{\mathrm{p}}}\exp(-\alpha T)\,.$$
In this model, $\sigma_{a}$ is the athermal component of the flow stress given by
$$\sigma_{a}:=\sigma_{g}+\frac{k_{h}}{\sqrt{l}}+K\varepsilon_{\mathrm{p}}^{n},$$
where $\sigma_{g}$ is the contribution due to solutes and initial dislocation density, $k_{h}$ is the microstructural stress intensity, $l$ is the average grain diameter, $K$ is zero for fcc materials, and $B,B_{0}$ are material constants.
In the thermally activated terms, the functional forms of the exponents $\alpha$ and $\beta$ are
$$\alpha=\alpha_{0}-\alpha_{1}\ln(\dot{\varepsilon}_{\mathrm{p}});\quad\beta=\beta_{0}-\beta_{1}\ln(\dot{\varepsilon}_{\mathrm{p}});$$
where $\alpha_{0},\alpha_{1},\beta_{0},\beta_{1}$ are material parameters that depend on the type of material (fcc, bcc, hcp, alloys). The Zerilli–Armstrong model has been modified for better performance at high temperatures.
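A minimal evaluation of equation (3) for an fcc-like case is sketched below; every numerical value is an assumed placeholder, not a fitted Zerilli–Armstrong constant.
<syntaxhighlight lang="python">
import math

# Sketch of the Zerilli-Armstrong flow stress of equation (3) for an fcc-like
# case (K = 0, and B = 0 so only the B0*sqrt(eps_p) term is thermally activated).
# All numbers are assumed placeholders.

sigma_g, k_h, l = 46.5e6, 5.0e3, 35e-6   # Pa, Pa*m**0.5, m
B0 = 890e6                               # Pa
alpha0, alpha1 = 2.8e-3, 1.15e-4         # 1/K

def za_flow_stress(eps_p, eps_p_dot, T):
    """sigma_y = sigma_a + B0*sqrt(eps_p)*exp(-alpha*T), alpha = alpha0 - alpha1*ln(rate)."""
    sigma_a = sigma_g + k_h / math.sqrt(l)        # athermal part (K = 0 for fcc)
    alpha = alpha0 - alpha1 * math.log(eps_p_dot)
    return sigma_a + B0 * math.sqrt(eps_p) * math.exp(-alpha * T)

print(f"sigma_y = {za_flow_stress(0.1, 1.0e3, 300.0) / 1e6:.1f} MPa")
</syntaxhighlight>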
==== Mechanical threshold stress flow stress model ====
The Mechanical Threshold Stress (MTS) model has the form
$$\text{(4)}\qquad \sigma_{y}(\varepsilon_{\mathrm{p}},\dot{\varepsilon},T)=\sigma_{a}+(S_{i}\sigma_{i}+S_{e}\sigma_{e})\frac{\mu(p,T)}{\mu_{0}}$$
where $\sigma_{a}$ is the athermal component of mechanical threshold stress, $\sigma_{i}$ is the component of the flow stress due to intrinsic barriers to thermally activated dislocation motion and dislocation-dislocation interactions, $\sigma_{e}$ is the component of the flow stress due to microstructural evolution with increasing deformation (strain hardening), ($S_{i},S_{e}$) are temperature and strain-rate dependent scaling factors, and $\mu_{0}$ is the shear modulus at 0 K and ambient pressure.
The scaling factors take the Arrhenius form
$$\begin{aligned}S_{i}&=\left[1-\left(\frac{k_{b}T}{g_{0i}b^{3}\mu(p,T)}\ln\frac{\dot{\varepsilon}_{0}}{\dot{\varepsilon}}\right)^{1/q_{i}}\right]^{1/p_{i}}\\S_{e}&=\left[1-\left(\frac{k_{b}T}{g_{0e}b^{3}\mu(p,T)}\ln\frac{\dot{\varepsilon}_{0}}{\dot{\varepsilon}}\right)^{1/q_{e}}\right]^{1/p_{e}}\end{aligned}$$
where $k_{b}$ is the Boltzmann constant, $b$ is the magnitude of the Burgers vector, ($g_{0i},g_{0e}$) are normalized activation energies, ($\dot{\varepsilon},\dot{\varepsilon}_{0}$) are the strain-rate and reference strain-rate, and ($q_{i},p_{i},q_{e},p_{e}$) are constants.
The strain hardening component of the mechanical threshold stress ($\sigma_{e}$) is given by an empirical modified Voce law
$$\text{(5)}\qquad \frac{d\sigma_{e}}{d\varepsilon_{\mathrm{p}}}=\theta(\sigma_{e})$$
where
$$\begin{aligned}\theta(\sigma_{e})&=\theta_{0}[1-F(\sigma_{e})]+\theta_{IV}F(\sigma_{e})\\\theta_{0}&=a_{0}+a_{1}\ln\dot{\varepsilon}_{\mathrm{p}}+a_{2}\sqrt{\dot{\varepsilon}_{\mathrm{p}}}-a_{3}T\\F(\sigma_{e})&=\frac{\tanh\left(\alpha\dfrac{\sigma_{e}}{\sigma_{es}}\right)}{\tanh(\alpha)}\\\ln\left(\frac{\sigma_{es}}{\sigma_{0es}}\right)&=\left(\frac{kT}{g_{0es}b^{3}\mu(p,T)}\right)\ln\left(\frac{\dot{\varepsilon}_{\mathrm{p}}}{\dot{\varepsilon}_{\mathrm{p,max}}}\right)\end{aligned}$$
and $\theta_{0}$ is the hardening due to dislocation accumulation, $\theta_{IV}$ is the contribution due to stage-IV hardening, ($a_{0},a_{1},a_{2},a_{3},\alpha$) are constants, $\sigma_{es}$ is the stress at zero strain hardening rate, $\sigma_{0es}$ is the saturation threshold stress for deformation at 0 K, $g_{0es}$ is a constant, and $\dot{\varepsilon}_{\mathrm{p,max}}$ is the maximum strain-rate. Note that the maximum strain-rate is usually limited to about $10^{7}$/s.
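The Arrhenius-type scaling factors $S_{i}$ and $S_{e}$ are simple to evaluate once a parameter set is chosen. The sketch below computes $S_{i}$ with assumed values; the normalized activation energy, Burgers vector, shear modulus, and reference rate are illustrative placeholders, not calibrated MTS data.
<syntaxhighlight lang="python">
import math

# Sketch of the Arrhenius-type scaling factor S_i of the MTS model.
# All default parameter values are assumed placeholders.

def mts_scaling(T, rate, g0=1.6, p=0.5, q=1.5, rate0=1.0e7,
                kb=1.380649e-23, b=2.86e-10, mu=48.0e9):
    """S = [1 - ((kb*T/(g0*b^3*mu)) * ln(rate0/rate))**(1/q)]**(1/p)."""
    x = kb * T / (g0 * b ** 3 * mu) * math.log(rate0 / rate)
    return (1.0 - x ** (1.0 / q)) ** (1.0 / p)

print(f"S_i at 300 K, 1e3 1/s: {mts_scaling(300.0, 1.0e3):.3f}")
</syntaxhighlight>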
==== Preston–Tonks–Wallace flow stress model ====
The Preston–Tonks–Wallace (PTW) model attempts to provide a model for the flow stress for extreme strain-rates (up to $10^{11}$/s) and temperatures up to melt. A linear Voce hardening law is used in the model. The PTW flow stress is given by
$$\text{(6)}\qquad \sigma_{y}(\varepsilon_{\mathrm{p}},\dot{\varepsilon}_{\mathrm{p}},T)={\begin{cases}2\left[\tau_{s}+\alpha\ln\left[1-\varphi\exp\left(-\beta-\dfrac{\theta\varepsilon_{\mathrm{p}}}{\alpha\varphi}\right)\right]\right]\mu(p,T)&\text{thermal regime}\\2\tau_{s}\mu(p,T)&\text{shock regime}\end{cases}}$$
with
$$\alpha:=\frac{s_{0}-\tau_{y}}{d};\quad\beta:=\frac{\tau_{s}-\tau_{y}}{\alpha};\quad\varphi:=\exp(\beta)-1$$
where $\tau_{s}$ is a normalized work-hardening saturation stress, $s_{0}$ is the value of $\tau_{s}$ at 0 K, $\tau_{y}$ is a normalized yield stress, $\theta$ is the hardening constant in the Voce hardening law, and $d$ is a dimensionless material parameter that modifies the Voce hardening law.
The saturation stress and the yield stress are given by
$$\begin{aligned}\tau_{s}&=\max\left\{s_{0}-(s_{0}-s_{\infty})\,\mathrm{erf}\left[\kappa\hat{T}\ln\left(\frac{\gamma\dot{\xi}}{\dot{\varepsilon}_{\mathrm{p}}}\right)\right],\; s_{0}\left(\frac{\dot{\varepsilon}_{\mathrm{p}}}{\gamma\dot{\xi}}\right)^{s_{1}}\right\}\\ \tau_{y}&=\max\left\{y_{0}-(y_{0}-y_{\infty})\,\mathrm{erf}\left[\kappa\hat{T}\ln\left(\frac{\gamma\dot{\xi}}{\dot{\varepsilon}_{\mathrm{p}}}\right)\right],\;\min\left\{y_{1}\left(\frac{\dot{\varepsilon}_{\mathrm{p}}}{\gamma\dot{\xi}}\right)^{y_{2}},\; s_{0}\left(\frac{\dot{\varepsilon}_{\mathrm{p}}}{\gamma\dot{\xi}}\right)^{s_{1}}\right\}\right\}\end{aligned}$$
where $s_{\infty}$ is the value of $\tau_{s}$ close to the melt temperature, ($y_{0},y_{\infty}$) are the values of $\tau_{y}$ at 0 K and close to melt, respectively, $(\kappa,\gamma)$ are material constants, $\hat{T}=T/T_{m}$, ($s_{1},y_{1},y_{2}$) are material parameters for the high strain-rate regime, and
$$\dot{\xi}=\frac{1}{2}\left(\frac{4\pi\rho}{3M}\right)^{1/3}\left(\frac{\mu(p,T)}{\rho}\right)^{1/2}$$
where $\rho$ is the density and $M$ is the atomic mass.
== See also ==
Viscoelasticity
Bingham plastic
Dashpot
Creep (deformation)
Plasticity (physics)
Continuum mechanics
Quasi-solid
== References ==
Mohr–Coulomb theory is a mathematical model (see yield surface) describing the response of brittle materials such as concrete, or rubble piles, to shear stress as well as normal stress. Most of the classical engineering materials follow this rule in at least a portion of their shear failure envelope. Generally the theory applies to materials for which the compressive strength far exceeds the tensile strength.
In geotechnical engineering it is used to define shear strength of soils and rocks at different effective stresses.
In structural engineering it is used to determine failure load as well as the angle of fracture of a displacement fracture in concrete and similar materials. Coulomb's friction hypothesis is used to determine the combination of shear and normal stress that will cause a fracture of the material. Mohr's circle is used to determine which principal stresses will produce this combination of shear and normal stress, and the angle of the plane in which this will occur. According to the principle of normality the stress introduced at failure will be perpendicular to the line describing the fracture condition.
It can be shown that a material failing according to Coulomb's friction hypothesis will show the displacement introduced at failure forming an angle to the line of fracture equal to the angle of friction. This makes the strength of the material determinable by comparing the external mechanical work introduced by the displacement and the external load with the internal mechanical work introduced by the strain and stress at the line of failure. By conservation of energy the sum of these must be zero and this will make it possible to calculate the failure load of the construction.
A common improvement of this model is to combine Coulomb's friction hypothesis with Rankine's principal stress hypothesis to describe a separation fracture. An alternative view derives the Mohr-Coulomb criterion as extension failure.
== History of the development ==
The Mohr–Coulomb theory is named in honour of Charles-Augustin de Coulomb and Christian Otto Mohr. Coulomb's contribution was a 1776 essay entitled "Essai sur une application des règles des maximis et minimis à quelques problèmes de statique relatifs à l'architecture".
Mohr developed a generalised form of the theory around the end of the 19th century.
As the generalised form affected the interpretation of the criterion, but not the substance of it, some texts continue to refer to the criterion as simply the 'Coulomb criterion'.
== Mohr–Coulomb failure criterion ==
The Mohr–Coulomb failure criterion represents the linear envelope that is obtained from a plot of the shear strength of a material versus the applied normal stress. This relation is expressed as
$$\tau=\sigma\tan(\phi)+c$$
where $\tau$ is the shear strength, $\sigma$ is the normal stress, $c$ is the intercept of the failure envelope with the $\tau$ axis, and $\tan(\phi)$ is the slope of the failure envelope. The quantity $c$ is often called the cohesion and the angle $\phi$ is called the angle of internal friction. Compression is assumed to be positive in the following discussion. If compression is assumed to be negative then $\sigma$ should be replaced with $-\sigma$.
If $\phi=0$, the Mohr–Coulomb criterion reduces to the Tresca criterion. On the other hand, if $\phi=90^{\circ}$ the Mohr–Coulomb model is equivalent to the Rankine model. Higher values of $\phi$ are not allowed.
From Mohr's circle we have
$$\sigma=\sigma_{m}-\tau_{m}\sin\phi~;~~\tau=\tau_{m}\cos\phi$$
where
$$\tau_{m}=\frac{\sigma_{1}-\sigma_{3}}{2}~;~~\sigma_{m}=\frac{\sigma_{1}+\sigma_{3}}{2}$$
and $\sigma_{1}$ is the maximum principal stress and $\sigma_{3}$ is the minimum principal stress.
Therefore, the Mohr–Coulomb criterion may also be expressed as
$$\tau_{m}=\sigma_{m}\sin\phi+c\cos\phi\,.$$
This form of the Mohr–Coulomb criterion is applicable to failure on a plane that is parallel to the $\sigma_{2}$ direction.
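In this form the criterion is easy to check numerically for a given stress state; the sketch below compares $\tau_{m}$ with $\sigma_{m}\sin\phi+c\cos\phi$ for assumed cohesion and friction-angle values (compression positive, as above).
<syntaxhighlight lang="python">
import math

# Sketch: checking the Mohr-Coulomb criterion from the extreme principal
# stresses (compression positive). Cohesion and friction angle are assumed.

def mohr_coulomb_margin(sigma1, sigma3, c, phi_deg):
    """Return tau_m - (sigma_m*sin(phi) + c*cos(phi)); >= 0 indicates failure."""
    phi = math.radians(phi_deg)
    tau_m = 0.5 * (sigma1 - sigma3)     # radius of the largest Mohr circle
    sigma_m = 0.5 * (sigma1 + sigma3)   # centre of the largest Mohr circle
    return tau_m - (sigma_m * math.sin(phi) + c * math.cos(phi))

margin = mohr_coulomb_margin(sigma1=1.2e6, sigma3=0.2e6, c=50e3, phi_deg=30.0)
print("fails" if margin >= 0.0 else "safe", f"(margin = {margin/1e3:.1f} kPa)")
</syntaxhighlight>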
=== Mohr–Coulomb failure criterion in three dimensions ===
The Mohr–Coulomb criterion in three dimensions is often expressed as
$$\left\{\begin{aligned}\pm\frac{\sigma_{1}-\sigma_{2}}{2}&=\left[\frac{\sigma_{1}+\sigma_{2}}{2}\right]\sin(\phi)+c\cos(\phi)\\\pm\frac{\sigma_{2}-\sigma_{3}}{2}&=\left[\frac{\sigma_{2}+\sigma_{3}}{2}\right]\sin(\phi)+c\cos(\phi)\\\pm\frac{\sigma_{3}-\sigma_{1}}{2}&=\left[\frac{\sigma_{3}+\sigma_{1}}{2}\right]\sin(\phi)+c\cos(\phi).\end{aligned}\right.$$
The Mohr–Coulomb failure surface is a cone with a hexagonal cross section in deviatoric stress space.
The expressions for $\tau$ and $\sigma$ can be generalized to three dimensions by developing expressions for the normal stress and the resolved shear stress on a plane of arbitrary orientation with respect to the coordinate axes (basis vectors). If the unit normal to the plane of interest is
$$\mathbf{n}=n_{1}\,\mathbf{e}_{1}+n_{2}\,\mathbf{e}_{2}+n_{3}\,\mathbf{e}_{3}$$
where $\mathbf{e}_{i},~i=1,2,3$ are three orthonormal unit basis vectors, and if the principal stresses $\sigma_{1},\sigma_{2},\sigma_{3}$ are aligned with the basis vectors $\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}$, then the expressions for $\sigma,\tau$ are
$$\begin{aligned}\sigma&=n_{1}^{2}\sigma_{1}+n_{2}^{2}\sigma_{2}+n_{3}^{2}\sigma_{3}\\\tau&=\sqrt{(n_{1}\sigma_{1})^{2}+(n_{2}\sigma_{2})^{2}+(n_{3}\sigma_{3})^{2}-\sigma^{2}}\\&=\sqrt{n_{1}^{2}n_{2}^{2}(\sigma_{1}-\sigma_{2})^{2}+n_{2}^{2}n_{3}^{2}(\sigma_{2}-\sigma_{3})^{2}+n_{3}^{2}n_{1}^{2}(\sigma_{3}-\sigma_{1})^{2}}.\end{aligned}$$
The Mohr–Coulomb failure criterion can then be evaluated using the usual expression
$$\tau=\sigma\tan(\phi)+c$$
for the six planes of maximum shear stress.
== Mohr–Coulomb failure surface in Haigh–Westergaard space ==
The Mohr–Coulomb failure (yield) surface is often expressed in Haigh–Westergaard coordinates. For example, the function
$$\frac{\sigma_{1}-\sigma_{3}}{2}=\frac{\sigma_{1}+\sigma_{3}}{2}\sin\phi+c\cos\phi$$
can be expressed as
$$\left[\sqrt{3}\sin\left(\theta+\frac{\pi}{3}\right)+\sin\phi\cos\left(\theta+\frac{\pi}{3}\right)\right]\rho-\sqrt{2}\sin(\phi)\,\xi=\sqrt{6}\,c\cos\phi.$$
Alternatively, in terms of the invariants $p,q,r$ we can write
$$\left[\frac{1}{\sqrt{3}\cos\phi}\sin\left(\theta+\frac{\pi}{3}\right)+\frac{1}{3}\tan\phi\cos\left(\theta+\frac{\pi}{3}\right)\right]q-p\tan\phi=c$$
where
$$\theta=\frac{1}{3}\arccos\left[\left(\frac{r}{q}\right)^{3}\right]\,.$$
== Mohr–Coulomb yield and plasticity ==
The Mohr–Coulomb yield surface is often used to model the plastic flow of geomaterials (and other cohesive-frictional materials). Many such materials show dilatational behavior under triaxial states of stress which the Mohr–Coulomb model does not include. Also, since the yield surface has corners, it may be inconvenient to use the original Mohr–Coulomb model to determine the direction of plastic flow (in the flow theory of plasticity).
A common approach is to use a non-associated plastic flow potential that is smooth. An example of such a potential is the function
$$g:=\sqrt{(\alpha c_{\mathrm{y}}\tan\psi)^{2}+G^{2}(\phi,\theta)\,q^{2}}-p\tan\phi$$
where $\alpha$ is a parameter, $c_{\mathrm{y}}$ is the value of $c$ when the plastic strain is zero (also called the initial cohesion yield stress), $\psi$ is the angle made by the yield surface in the Rendulic plane at high values of $p$ (this angle is also called the dilation angle), and $G(\phi,\theta)$ is an appropriate function that is also smooth in the deviatoric stress plane.
== Typical values of cohesion and angle of internal friction ==
Cohesion (alternatively called the cohesive strength) and friction angle values for rocks and some common soils are listed in the tables below.
== See also ==
3-D elasticity
Hoek–Brown failure criterion
Byerlee's law
Lateral earth pressure
von Mises stress
Yield (engineering)
Drucker Prager yield criterion — a smooth version of the M–C yield criterion
Lode coordinates
Bigoni–Piccolroaz yield criterion
== References ==
In fracture mechanics, the energy release rate, $G$, is the rate at which energy is transformed as a material undergoes fracture. Mathematically, the energy release rate is expressed as the decrease in total potential energy per increase in fracture surface area, and is thus expressed in terms of energy per unit area. Various energy balances can be constructed relating the energy released during fracture to the energy of the resulting new surface, as well as other dissipative processes such as plasticity and heat generation. The energy release rate is central to the field of fracture mechanics when solving problems and estimating material properties related to fracture and fatigue.
== Definition ==
The energy release rate $G$ is defined as the instantaneous loss of total potential energy $\Pi$ per unit crack growth area $s$,
$$G\equiv-\frac{\partial\Pi}{\partial s},$$
where the total potential energy is written in terms of the total strain energy $\Omega$, surface traction $\mathbf{t}$, displacement $\mathbf{u}$, and body force $\mathbf{b}$ by
$$\Pi=\Omega-\left\{\int_{\mathcal{S}_{t}}\mathbf{t}\cdot\mathbf{u}\,dS+\int_{\mathcal{V}}\mathbf{b}\cdot\mathbf{u}\,dV\right\}.$$
The first integral is over the surface $S_{t}$ of the material, and the second is over its volume $V$.
The figure on the right shows the plot of an external force $P$ vs. the load-point displacement $q$, in which the area under the curve is the strain energy. The white area between the curve and the $P$-axis is referred to as the complementary energy. In the case of a linearly-elastic material, $P(q)$ is a straight line and the strain energy is equal to the complementary energy.
=== Prescribed displacement ===
In the case of prescribed displacement, the strain energy can be expressed in terms of the specified displacement and the crack surface, $\Omega(q,s)$, and the change in this strain energy is only affected by the change in fracture surface area: $\delta\Omega=(\partial\Omega/\partial s)\,\delta s$. Correspondingly, the energy release rate in this case is expressed as
$$G=-\left.\frac{\partial\Omega}{\partial s}\right|_{q}.$$
Here is where one can accurately refer to $G$ as the strain energy release rate.
=== Prescribed loads ===
When the load is prescribed instead of the displacement, the strain energy needs to be modified as $\Omega(q(P,s),s)$. The energy release rate is then computed as
$$G=-\left.\frac{\partial}{\partial s}\right|_{P}\left(\Omega-Pq\right).$$
If the material is linearly-elastic, then $\Omega=Pq/2$ and one may instead write
$$G=\left.\frac{\partial\Omega}{\partial s}\right|_{P}.$$
=== G in two-dimensional cases ===
In the cases of two-dimensional problems, the change in crack growth area is simply the change in crack length times the thickness of the specimen, namely $\partial s=B\,\partial a$. Therefore, the equation for computing $G$ can be modified for the 2D case:
Prescribed Displacement:
$$G=-\left.\frac{1}{B}\frac{\partial\Omega}{\partial a}\right|_{q}.$$
Prescribed Load:
$$G=-\left.\frac{1}{B}\frac{\partial}{\partial a}\right|_{P}\left(\Omega-Pq\right).$$
Prescribed Load, Linear Elastic:
$$G=\left.\frac{1}{B}\frac{\partial\Omega}{\partial a}\right|_{P}.$$
One can refer to the example calculations embedded in the next section for further information. Sometimes, the strain energy is written using $U=\Omega/B$, an energy per unit thickness. This gives
Prescribed Displacement:
$$G=-\left.\frac{\partial U}{\partial a}\right|_{q}.$$
Prescribed Load:
$$G=-\left.\frac{\partial}{\partial a}\right|_{P}\left(U-\frac{Pq}{B}\right).$$
Prescribed Load, Linear Elastic:
$$G=\left.\frac{\partial U}{\partial a}\right|_{P}.$$
=== Relation to stress intensity factors ===
The energy release rate is directly related to the stress intensity factor associated with a given two-dimensional loading mode (Mode-I, Mode-II, or Mode-III) when the crack grows straight ahead. This is applicable to cracks under plane stress, plane strain, and antiplane shear.
For Mode-I, the energy release rate $G$ is related to the Mode-I stress intensity factor $K_{I}$ for a linearly-elastic material by
$$G=\frac{K_{I}^{2}}{E'},$$
where $E'$ is related to Young's modulus $E$ and Poisson's ratio $\nu$ depending on whether the material is under plane stress or plane strain:
$$E'={\begin{cases}E,&\mathrm{plane~stress},\\\dfrac{E}{1-\nu^{2}},&\mathrm{plane~strain}.\end{cases}}$$
For Mode-II, the energy release rate is similarly written as
$$G=\frac{K_{II}^{2}}{E'}.$$
For Mode-III (antiplane shear), the energy release rate is now a function of the shear modulus $\mu$,
$$G=\frac{K_{III}^{2}}{2\mu}.$$
For an arbitrary combination of all loading modes, these linear elastic solutions may be superposed as
$$G=\frac{K_{I}^{2}}{E'}+\frac{K_{II}^{2}}{E'}+\frac{K_{III}^{2}}{2\mu}.$$
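For a quick numerical check of the superposed relation above, the sketch below converts assumed stress intensity factors into an energy release rate for either plane stress or plane strain.
<syntaxhighlight lang="python">
# Sketch: energy release rate from the mode-I/II/III stress intensity factors
# for a linearly-elastic material. The input values are illustrative assumptions.

def energy_release_rate(KI, KII, KIII, E, nu, plane_strain=True):
    """G = KI^2/E' + KII^2/E' + KIII^2/(2*mu)."""
    E_prime = E / (1.0 - nu ** 2) if plane_strain else E
    mu = E / (2.0 * (1.0 + nu))            # shear modulus of an isotropic solid
    return (KI ** 2 + KII ** 2) / E_prime + KIII ** 2 / (2.0 * mu)

# Example: KI = 30 MPa*sqrt(m), pure mode I, steel-like constants (assumed)
G = energy_release_rate(30e6, 0.0, 0.0, E=210e9, nu=0.3, plane_strain=True)
print(f"G = {G:.1f} J/m^2")
</syntaxhighlight>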
==== Relation to fracture toughness ====
Crack growth is initiated when the energy release rate exceeds a critical value $G_{c}$, which is a material property:
$$G\geq G_{c}.$$
Under Mode-I loading, the critical energy release rate $G_{c}$ is then related to the Mode-I fracture toughness $K_{IC}$, another material property, by
$$G_{c}=\frac{K_{IC}^{2}}{E'}.$$
== Calculating G ==
There are a variety of methods available for calculating the energy release rate given material properties, specimen geometry, and loading conditions. Some are dependent on certain criteria being satisfied, such as the material being entirely elastic or even linearly-elastic, and/or that the crack must grow straight ahead. The only method presented that works arbitrarily is that using the total potential energy. If two methods are both applicable, they should yield identical energy release rates.
=== Total potential energy ===
The only method to calculate $G$ for arbitrary conditions is to calculate the total potential energy and differentiate it with respect to the crack surface area. This is typically done by:
calculating the stress field resulting from the loading,
calculating the strain energy in the material resulting from the stress field,
calculating the work done by the external loads,
all in terms of the crack surface area.
=== Compliance method ===
If the material is linearly elastic, the computation of its energy release rate can be much simplified. In this case, the Load vs. Load-point Displacement curve is linear with a positive slope, and the displacement per unit force applied is defined as the compliance, $C$:
$$C=\frac{q}{P}.$$
The corresponding strain energy $\Omega$ (area under the curve) is equal to
$$\Omega=\frac{1}{2}Pq=\frac{1}{2}\frac{q^{2}}{C}=\frac{1}{2}P^{2}C.$$
Using the compliance method, one can show that the energy release rate for both cases of prescribed load and displacement comes out to be
$$G=\frac{1}{2}P^{2}\frac{\partial C}{\partial s}.$$
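In practice $\partial C/\partial s$ is often estimated by finite differences from compliances measured at several crack lengths; the sketch below does this with synthetic, assumed compliance data.
<syntaxhighlight lang="python">
import numpy as np

# Sketch of the compliance method: G = (P^2/2) dC/da / B, so that dC/ds = (1/B) dC/da.
# The compliance-versus-crack-length data and the load are assumed, synthetic values.

B = 0.01                                             # m, specimen thickness (assumed)
a = np.array([0.020, 0.022, 0.024, 0.026])           # m, crack lengths
C = np.array([1.10e-7, 1.35e-7, 1.64e-7, 1.98e-7])   # m/N, measured compliances
P = 2.0e3                                            # N, applied load

dCda = np.gradient(C, a)            # finite-difference slope dC/da
G = 0.5 * P ** 2 * dCda / B         # energy release rate at each crack length
for ai, Gi in zip(a, G):
    print(f"a = {ai*1e3:4.1f} mm : G = {Gi:7.1f} J/m^2")
</syntaxhighlight>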
=== Multiple specimen methods for nonlinear materials ===
In the case of prescribed displacement, holding the crack length fixed, the energy release rate can be computed by
$$G=-\int_{0}^{q}\frac{\partial P}{\partial s}\,dq,$$
while in the case of prescribed load,
$$G=\int_{0}^{P}\frac{\partial q}{\partial s}\,dP.$$
As one can see, in both cases, the energy release rate $G$ times the change in surface $ds$ returns the area between the curves, which indicates the energy dissipated for the new surface area, as illustrated in the figure on the right:
$$G\,ds=-ds\int_{0}^{q}\frac{\partial P}{\partial s}\,dq=ds\int_{0}^{P}\frac{\partial q}{\partial s}\,dP.$$
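Numerically, the area between two measured load–displacement curves can be evaluated by simple quadrature; the sketch below uses two synthetic linear curves (assumed compliances) to illustrate the idea.
<syntaxhighlight lang="python">
import numpy as np

# Sketch of the multiple-specimen idea: the energy dissipated by a small crack
# extension is the area between the load-displacement curves at crack areas
# s and s + ds, so G ~ (area between curves) / ds. The two synthetic curves
# below (linear, with compliance growing with crack size) are assumed values.

q = np.linspace(0.0, 1.0e-3, 200)      # m, load-point displacement
C1, C2 = 1.0e-7, 1.1e-7                # m/N, compliances at crack areas s and s+ds
ds = 1.0e-5                            # m^2, increment of crack surface area

P1 = q / C1                            # N, load-displacement curve at crack area s
P2 = q / C2                            # N, curve at crack area s + ds

area_between = np.trapz(P1 - P2, q)    # J, energy released over the extension
G = area_between / ds
print(f"G ~ {G:.1f} J/m^2")
</syntaxhighlight>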
=== Crack closure integral ===
Since the energy release rate is defined as the negative derivative of the total potential energy with respect to crack surface growth, the energy release rate may be written as the difference between the potential energy before and after the crack grows. After some careful derivation, this leads one to the crack closure integral
$$G=\lim_{\Delta s\to 0}-\frac{1}{\Delta s}\int_{\Delta s}\frac{1}{2}\,t_{i}^{0}\left(\Delta u_{i}^{+}-\Delta u_{i}^{-}\right)\,dS,$$
where $\Delta s$ is the new fracture surface area, $t_{i}^{0}$ are the components of the traction released on the top fracture surface as the crack grows, $\Delta u_{i}^{+}-\Delta u_{i}^{-}$ are the components of the crack opening displacement (the difference in displacement increments between the top and bottom crack surfaces), and the integral is over the surface of the material $S$.
The crack closure integral is valid only for elastic materials, but is still valid for cracks that grow in any direction. Nevertheless, for a two-dimensional crack that does indeed grow straight ahead, the crack closure integral simplifies to
$$G=\lim_{\Delta a\to 0}\frac{1}{\Delta a}\int_{0}^{\Delta a}\sigma_{i2}(x_{1},0)\,u_{i}(\Delta a-x_{1},\pi)\,dx_{1},$$
where $\Delta a$ is the new crack length, and the displacement components are written as a function of the polar coordinates $r=\Delta a-x_{1}$ and $\theta=\pi$.
=== J-integral ===
In certain situations, the energy release rate $G$ can be calculated using the J-integral, i.e. $G=J$, using
$$J=\int_{\Gamma}\left(Wn_{1}-t_{i}\frac{\partial u_{i}}{\partial x_{1}}\right)\,d\Gamma,$$
where $W$ is the elastic strain energy density, $n_{1}$ is the $x_{1}$ component of the unit vector normal to $\Gamma$, the curve used for the line integral, $t_{i}$ are the components of the traction vector $\mathbf{t}=\boldsymbol{\sigma}\cdot\mathbf{n}$, where $\boldsymbol{\sigma}$ is the stress tensor, and $u_{i}$ are the components of the displacement vector.
This integral is zero over a simple closed path and is path independent, allowing any simple path starting and ending on the crack faces to be used to calculate $J$.
In order to equate the energy release rate to the J-integral, $G=J$, the following conditions must be met:
the crack must be growing straight ahead, and
the deformation near the crack (enclosed by $\Gamma$) must be elastic (not plastic).
The J-integral may be calculated with these conditions violated, but then $G\neq J$. When they are not violated, one can then relate the energy release rate and the J-integral to the elastic moduli and the stress intensity factors using
$$G=J=\frac{K_{I}^{2}}{E'}+\frac{K_{II}^{2}}{E'}+\frac{K_{III}^{2}}{2\mu}.$$
== Computational methods in fracture mechanics ==
A handful of methods exist for calculating $G$ with finite elements. Although a direct calculation of the J-integral is possible (using the strains and stresses output by FEA), approximate approaches for some types of crack growth exist and provide reasonable accuracy with straightforward calculations. This section will elaborate on some relatively simple methods for fracture analysis utilizing numerical simulations.
=== Nodal release method ===
If the crack is growing straight, the energy release rate can be decomposed as a sum of three terms $G_{i}$, one associated with each of the three modes. As a result, the Nodal Release method (NR) can be used to determine $G_{i}$ from FEA results. The energy release rate is calculated at the nodes of the finite element mesh for the crack at an initial length and extended by a small distance $\Delta a$. First, we calculate the displacement variation at the node of interest, $\Delta\vec{u}=\vec{u}^{(t+1)}-\vec{u}^{(t)}$ (before and after the crack tip node is released). Secondly, we keep track of the nodal force $\vec{F}$ output by FEA. Finally, we can find each component of $G$ using the following formulas:
$$G_{1}^{\text{NR}}=\frac{1}{\Delta a}F_{2}\frac{\Delta u_{2}}{2}\qquad G_{2}^{\text{NR}}=\frac{1}{\Delta a}F_{1}\frac{\Delta u_{1}}{2}\qquad G_{3}^{\text{NR}}=\frac{1}{\Delta a}F_{3}\frac{\Delta u_{3}}{2}$$
where $\Delta a$ is the width of the element bounding the crack tip. The accuracy of the method highly depends on the mesh refinement, both because the displacements and forces depend on it, and because $G=\lim_{\Delta a\to 0}G^{\text{NR}}$. Note that the equations above are derived using the crack closure integral.
If the energy release rate exceeds a critical value, the crack will grow. In this case, a new FEA simulation is performed (for the next time step) where the node at the crack tip is released. For a bounded substrate, we may simply stop enforcing fixed Dirichlet boundary conditions at the crack tip node of the previous time step (i.e. displacements are no longer restrained). For a symmetric crack, we would need to update the geometry of the domain with a longer crack opening (and therefore generate a new mesh).
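The nodal release formulas above reduce to a few multiplications once the crack-tip nodal force and the released opening are extracted from the two analyses; the sketch below uses assumed placeholder values for those quantities (treating the 2-D FEA output as per unit thickness).
<syntaxhighlight lang="python">
# Sketch of the nodal release (NR) estimate G_i ~ F * (delta_u / 2) / delta_a.
# The nodal force and displacement-jump values are assumed placeholders, as if
# read from two 2-D FEA solutions (before and after releasing the crack-tip node),
# with forces taken per unit thickness.

delta_a = 0.5e-3   # m, width of the element bounding the crack tip (assumed)

# Nodal reaction force at the crack-tip node before release (N per unit thickness)
F = {"x1": 1.2e4, "x2": 3.5e4, "x3": 0.0}
# Change in crack-face opening at that node after release (m)
du = {"x1": 2.0e-6, "x2": 6.0e-6, "x3": 0.0}

G1 = F["x2"] * du["x2"] / (2.0 * delta_a)   # opening mode uses the x2 components
G2 = F["x1"] * du["x1"] / (2.0 * delta_a)   # sliding mode uses the x1 components
G3 = F["x3"] * du["x3"] / (2.0 * delta_a)   # tearing mode uses the x3 components
print(f"G1 = {G1:.1f}, G2 = {G2:.1f}, G3 = {G3:.1f} J/m^2, total = {G1+G2+G3:.1f}")
</syntaxhighlight>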
=== Modified crack closure integral ===
Similar to the Nodal Release method, the Modified Crack Closure Integral (MCCI) is a method for calculating the energy release rate utilizing FEA nodal displacements $(u_{i}^{j})$ and forces $(F_{i}^{j})$, where $i$ represents the direction corresponding to the Cartesian basis vectors with origin at the crack tip, and $j$ represents the nodal index. MCCI is more computationally efficient than the nodal release method because it only requires one analysis for each increment of crack growth.
A necessary condition for the MCCI method is uniform element length $(\Delta a)$ along the crack face in the $x_{1}$-direction. Additionally, this method requires sufficient discretization such that over the length of one element the stress fields are self-similar. This implies that $K(a+\Delta a)\approx K(a)$ as the crack propagates. Below are examples of the MCCI method with two types of common finite elements.
==== 4-node elements ====
The 4-node square linear elements seen in Figure 2 have a distance between nodes $j$ and $j+1$ equal to $\Delta a$. Consider a crack with its tip located at node $j$. Similar to the nodal release method, if the crack were to propagate one element length along the line of symmetry (parallel to the $x_{1}$-axis), the crack opening displacement would be the displacement at the previous crack tip, i.e. $\boldsymbol{u}^{j}$, and the force at the new crack tip $(j+1)$ would be $\boldsymbol{F}^{j+1}$. Since the crack growth is assumed to be self-similar, the displacement at node $j$ after the crack propagates is equal to the displacement at node $j-1$ before the crack propagates. This same concept can be applied to the forces at nodes $j+1$ and $j$. Utilizing the same method shown in the nodal release section, we recover the following equations for the energy release rate:
$$G_{1}^{\text{MCCI}}=\frac{1}{2\Delta a}F_{2}^{j}\,\Delta u_{2}^{j-1}\qquad G_{2}^{\text{MCCI}}=\frac{1}{2\Delta a}F_{1}^{j}\,\Delta u_{1}^{j-1}\qquad G_{3}^{\text{MCCI}}=\frac{1}{2\Delta a}F_{3}^{j}\,\Delta u_{3}^{j-1}$$
where $\Delta u_{i}^{j-1}=u_{i}^{(+)j-1}-u_{i}^{(-)j-1}$ (the displacements above and below the crack face, respectively). Because we have a line of symmetry parallel to the crack, we can assume $u_{i}^{(+)j-1}=-u_{i}^{(-)j-1}$. Thus, $\Delta u_{i}^{j-1}=2u_{i}^{(+)j-1}$.
==== 8-node elements ====
The 8-node rectangular elements seen in Figure 3 have quadratic basis functions. The process for calculating $G$ is the same as for the 4-node elements, with the exception that $\Delta a$ (the crack growth over one element) is now the distance from node $j$ to $j+2$. Once again, making the assumption of self-similar straight crack growth, the energy release rate can be calculated utilizing the following equations:
$$\begin{aligned}G_{1}^{\text{MCCI}}&=\frac{1}{2\Delta a}\left(F_{2}^{j}\,\Delta u_{2}^{j-2}+F_{2}^{j+1}\,\Delta u_{2}^{j-1}\right)\\G_{2}^{\text{MCCI}}&=\frac{1}{2\Delta a}\left(F_{1}^{j}\,\Delta u_{1}^{j-2}+F_{1}^{j+1}\,\Delta u_{1}^{j-1}\right)\\G_{3}^{\text{MCCI}}&=\frac{1}{2\Delta a}\left(F_{3}^{j}\,\Delta u_{3}^{j-2}+F_{3}^{j+1}\,\Delta u_{3}^{j-1}\right)\end{aligned}$$
Like the nodal release method, the accuracy of MCCI is highly dependent on the level of discretization along the crack tip, i.e. $G=\lim_{\Delta a\to 0}G^{\text{MCCI}}$.
Accuracy also depends on element choice. A mesh of 8-node quadratic elements can produce more accurate results than a mesh of 4-node linear elements with the same number of degrees of freedom in the mesh.
=== Domain integral approach for J ===
The J-integral may be calculated directly using the finite element mesh and shape functions. We consider a domain contour as shown in Figure 4 and choose an arbitrary smooth function ${\tilde {q}}(x_{1},x_{2})=\sum _{i}N_{i}(x_{1},x_{2}){\tilde {q}}_{i}$ such that ${\tilde {q}}=1$ on $\Gamma$ and ${\tilde {q}}=0$ on ${\mathcal {C}}_{1}$.
For linear elastic cracks growing straight ahead, $G=J$. The energy release rate can then be calculated over the area bounded by the contour using an updated formulation:

$$J=\int _{\mathcal {A}}\left(\sigma _{ij}u_{i,1}{\tilde {q}}_{,j}-W{\tilde {q}}_{,1}\right)\,d{\mathcal {A}}$$

The formula above may be applied to any annular area surrounding the crack tip (in particular, a set of neighboring elements can be used). This method is very accurate, even with a coarse mesh around the crack tip (one may choose an integration domain located far away, where stresses and displacements are less sensitive to mesh refinement).
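As a rough illustration of how the domain form of $J$ is assembled element by element, the sketch below sums the integrand at the Gauss points of a single element. The quadrature data shown are placeholders; in practice the stresses, displacement gradients, strain energy density $W$, and the gradient of the weight function $\tilde q$ would come from the finite element solution.

```python
# Minimal sketch of evaluating the domain-integral contribution of one element
# by Gauss quadrature.  All field values below are illustrative placeholders.
import numpy as np

def element_J(gauss_points):
    """Sum w * (sigma_ij * du_i/dx1 * dq/dx_j - W * dq/dx1) * detJ over Gauss points.

    Each entry of gauss_points is a dict with:
      'w'      : quadrature weight
      'detJ'   : Jacobian determinant of the isoparametric mapping
      'sigma'  : 2x2 stress tensor
      'grad_u' : 2x2 array, grad_u[i, j] = du_i/dx_j
      'W'      : strain energy density
      'grad_q' : length-2 gradient of the weight function q~
    """
    J = 0.0
    for gp in gauss_points:
        sigma, grad_u, grad_q = gp['sigma'], gp['grad_u'], gp['grad_q']
        # sigma_ij * u_{i,1} * q_{,j}  -  W * q_{,1}
        integrand = np.einsum('ij,i,j->', sigma, grad_u[:, 0], grad_q) - gp['W'] * grad_q[0]
        J += gp['w'] * integrand * gp['detJ']
    return J

gp = {'w': 1.0, 'detJ': 0.25,
      'sigma': np.array([[200.0, 30.0], [30.0, 80.0]]),
      'grad_u': np.array([[1.0e-3, 2.0e-4], [5.0e-4, 8.0e-4]]),
      'W': 0.1,
      'grad_q': np.array([-2.0, 0.5])}
print(element_J([gp]))
```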
=== 2-D crack tip singular elements ===
The above-mentioned methods for calculating the energy release rate asymptotically approach the actual solution with increased discretization but fail to fully capture the crack tip singularity. More accurate simulations can be performed by utilizing quarter-point elements around the crack tip. These elements have a built-in singularity which more accurately reproduces the stress fields around the crack tip. The advantage of the quarter-point method is that it allows for coarser finite element meshes and greatly reduces computational cost. Furthermore, these elements are derived from small modifications to common finite elements without requiring special computational programs for analysis. For the purposes of this section elastic materials will be examined, although this method can be extended to elastic-plastic fracture mechanics. Assuming perfect elasticity, the stress fields will experience a $1/{\sqrt {r}}$ crack tip singularity.
==== 8-node isoparametric element ====
The 8-node quadratic element is described by Figure 5 both in parent space, with local coordinates $\xi$ and $\eta,$ and as the mapped element in physical/global space with coordinates $x$ and $y.$ The parent element is mapped from the local space to the physical space by the shape functions $N_{i}(\xi ,\eta )$ and the degree-of-freedom coordinates $(x_{i},y_{i}).$ The crack tip is located at $\xi =-1,\ \eta =-1$ or $x=0,\ y=0.$

$$x(\xi ,\eta )=\sum _{i=1}^{8}N_{i}(\xi ,\eta )x_{i}$$
$$y(\xi ,\eta )=\sum _{i=1}^{8}N_{i}(\xi ,\eta )y_{i}$$

In a similar way, displacements (defined as $u\equiv u_{1},\ v\equiv u_{2}$) can also be mapped:

$$u(\xi ,\eta )=\sum _{i=1}^{8}N_{i}(\xi ,\eta )u_{i}$$
$$v(\xi ,\eta )=\sum _{i=1}^{8}N_{i}(\xi ,\eta )v_{i}$$
A property of shape functions in the finite element method is compact support, specifically the Kronecker delta property (i.e. $N_{i}=1$ at node $i$ and zero at all other nodes). This results in the following shape functions for the 8-node quadratic element:

$$N_{1}={\frac {-(\xi -1)(\eta -1)(1+\eta +\xi )}{4}}$$
$$N_{2}={\frac {(\xi +1)(\eta -1)(1+\eta -\xi )}{4}}$$
$$N_{3}={\frac {(\xi +1)(\eta +1)(-1+\eta +\xi )}{4}}$$
$$N_{4}={\frac {-(\xi -1)(\eta +1)(-1+\eta -\xi )}{4}}$$
$$N_{5}={\frac {(1-\xi ^{2})(1-\eta )}{2}}$$
$$N_{6}={\frac {(1+\xi )(1-\eta ^{2})}{2}}$$
$$N_{7}={\frac {(1-\xi ^{2})(1+\eta )}{2}}$$
$$N_{8}={\frac {(1-\xi )(1-\eta ^{2})}{2}}$$
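These shape functions can be checked numerically. The short script below verifies the Kronecker delta property stated above; the node ordering (corner nodes 1–4, mid-side nodes 5–8) is an assumption consistent with the formulas, since Figure 5 is not reproduced here.

```python
# Quick numerical check of the 8-node (serendipity) shape functions above:
# each N_i should equal 1 at its own node and 0 at the other nodes.
import numpy as np

def shape_functions(xi, eta):
    return np.array([
        -(xi - 1) * (eta - 1) * (1 + eta + xi) / 4,
         (xi + 1) * (eta - 1) * (1 + eta - xi) / 4,
         (xi + 1) * (eta + 1) * (-1 + eta + xi) / 4,
        -(xi - 1) * (eta + 1) * (-1 + eta - xi) / 4,
         (1 - xi**2) * (1 - eta) / 2,
         (1 + xi) * (1 - eta**2) / 2,
         (1 - xi**2) * (1 + eta) / 2,
         (1 - xi) * (1 - eta**2) / 2,
    ])

# Assumed parent-element node coordinates: corners 1-4, then mid-side nodes 5-8.
nodes = [(-1, -1), (1, -1), (1, 1), (-1, 1), (0, -1), (1, 0), (0, 1), (-1, 0)]
table = np.array([shape_functions(xi, eta) for (xi, eta) in nodes])
assert np.allclose(table, np.eye(8))   # Kronecker delta property
print("Kronecker delta property verified")
```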
When considering a line in front of the crack that is collinear with the $x$-axis (i.e. $N_{i}(\xi ,\eta =-1)$), all basis functions are zero except for $N_{1,2,5}.$

$$N_{1}(\xi ,-1)=-{\frac {\xi (1-\xi )}{2}}$$
$$N_{2}(\xi ,-1)={\frac {\xi (1+\xi )}{2}}$$
$$N_{5}(\xi ,-1)=(1-\xi ^{2})$$
Calculating the normal strain involves using the chain rule to take the derivative of displacement with respect to $x:$

$$\gamma _{xx}={\frac {\partial u}{\partial x}}=\sum _{i=1,2,5}{\frac {\partial N_{i}}{\partial \xi }}{\frac {\partial \xi }{\partial x}}u_{i}$$
If the nodes are spaced evenly on the rectangular element, the strain will not contain the singularity. By moving nodes 5 and 8 to a position a quarter of the element length $({\tfrac {L}{4}})$ closer to the crack tip, as seen in Figure 5, the mapping from $\xi \rightarrow x$ becomes:

$$x(\xi )={\frac {\xi (1+\xi )}{2}}L+(1-\xi ^{2}){\frac {L}{4}}$$

Solving for $\xi$ and taking the derivative results in:

$$\xi (x)=-1+2{\sqrt {\frac {x}{L}}}$$
$${\frac {\partial \xi }{\partial x}}={\frac {1}{\sqrt {xL}}}$$
Plugging this result into the equation for strain, the final result is obtained:

$$\gamma _{xx}={\frac {4}{L}}\left({\frac {u_{2}}{2}}-u_{5}\right)+{\frac {1}{\sqrt {xL}}}\left(2u_{5}-{\frac {u_{2}}{2}}\right)$$

Moving the mid-side nodes to the quarter-point position thus produces the correct $1/{\sqrt {r}}$ crack tip singularity.
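The quarter-point argument above can also be verified symbolically. The sympy sketch below repeats the one-dimensional derivation along the crack line, with the crack-tip displacement $u_{1}$ set to zero to match the form of the quoted result; this is a sketch of the check, not part of the original derivation.

```python
# Symbolic check of the quarter-point derivation: with the mid-side node at L/4,
# the strain along the crack line acquires a 1/sqrt(x) term.
import sympy as sp

xi, u1, u2, u5 = sp.symbols('xi u1 u2 u5')
x, L = sp.symbols('x L', positive=True)

# 1-D quadratic shape functions along the crack line (eta = -1).
N1 = -xi * (1 - xi) / 2
N2 = xi * (1 + xi) / 2
N5 = 1 - xi**2

# Inverse of the quarter-point mapping x(xi): xi(x) = -1 + 2*sqrt(x/L).
xi_of_x = -1 + 2 * sp.sqrt(x / L)

u = (N1 * u1 + N2 * u2 + N5 * u5).subs(xi, xi_of_x)
gamma_xx = sp.simplify(sp.diff(u, x).subs(u1, 0))

expected = 4/L * (u2/2 - u5) + 1/sp.sqrt(x*L) * (2*u5 - u2/2)
print(sp.simplify(gamma_xx - expected))   # -> 0
```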
==== Other element types ====
The rectangular element method does not allow singular elements to be easily meshed around the crack tip. This impedes the ability to capture the angular dependence of the stress fields, which is critical in determining the crack path. Also, except along the element edges, the $1/{\sqrt {r}}$ singularity exists only in a very small region near the crack tip. Figure 6 shows another quarter-point method for modeling this singularity. The 8-node rectangular element can be mapped into a triangle. This is done by collapsing the nodes on the line $\xi =-1$ to the mid-node location and shifting the mid-nodes on $\eta =\pm 1$ to the quarter-point location. The collapsed rectangle can more easily surround the crack tip, but it requires that the element edges be straight or the accuracy of calculating the stress intensity factor will be reduced.
A better candidate for the quarter-point method is the natural triangle, as seen in Figure 7. The element's geometry allows for the crack tip to be easily surrounded and meshing is simplified. Following the same procedure described above, the displacement and strain fields for the triangular elements are:

$$u=u_{3}+{\sqrt {\frac {x}{L}}}\left[4u_{6}-3u_{3}-u_{1}\right]+{\frac {x}{L}}\left[2u_{1}+2u_{3}-4u_{6}\right]$$
$$\gamma _{xx}={\frac {\partial u}{\partial x}}={\frac {1}{\sqrt {xL}}}\left[-{\frac {u_{1}}{2}}-{\frac {3u_{3}}{2}}+2u_{6}\right]+{\frac {1}{L}}\left[2u_{1}+2u_{3}-4u_{6}\right]$$
This method reproduces the first two terms of the Williams solution, with a constant term and a singular term.
An advantage of the quarter-point method is that it can be easily generalized to 3-dimensional models. This can greatly reduce computation when compared to other 3-dimensional methods, but it can lead to errors if the crack tip propagates with a large degree of curvature.
== See also ==
Fracture mechanics
Stress intensity factor
Fracture toughness
J-integral
== References ==
== External links ==
Nonlinear Fracture Mechanics Notes by Prof. John Hutchinson (from Harvard University)
Griffith's Strain Energy Release Rate on www.fracturemechanics.org | Wikipedia/Energy_release_rate |
In the case of finite deformations, the Piola–Kirchhoff stress tensors (named for Gabrio Piola and Gustav Kirchhoff) express the stress relative to the reference configuration. This is in contrast to the Cauchy stress tensor which expresses the stress relative to the present configuration. For infinitesimal deformations and rotations, the Cauchy and Piola–Kirchhoff tensors are identical.
Whereas the Cauchy stress tensor ${\boldsymbol {\sigma }}$ relates stresses in the current configuration, the deformation gradient and strain tensors are described by relating the motion to the reference configuration; thus not all tensors describing the state of the material are in either the reference or current configuration. Describing the stress, strain and deformation either in the reference or current configuration would make it easier to define constitutive models (for example, the Cauchy stress tensor varies under a pure rotation while the deformation strain tensor is invariant, which creates problems in defining a constitutive model that relates a varying tensor to an invariant one, since by definition constitutive models have to be invariant to pure rotations). The first Piola–Kirchhoff stress tensor, ${\boldsymbol {P}}$, is one possible solution to this problem. It defines a family of tensors which describe the configuration of the body in either the current or the reference state.
The first Piola–Kirchhoff stress tensor, ${\boldsymbol {P}}$, relates forces in the present ("spatial") configuration with areas in the reference ("material") configuration:

$${\boldsymbol {P}}=J~{\boldsymbol {\sigma }}~{\boldsymbol {F}}^{-T}$$
where ${\boldsymbol {F}}$ is the deformation gradient and $J=\det {\boldsymbol {F}}$ is the Jacobian determinant.
In terms of components with respect to an orthonormal basis, the first Piola–Kirchhoff stress is given by

$$P_{iL}=J~\sigma _{ik}~F_{Lk}^{-1}=J~\sigma _{ik}~{\cfrac {\partial X_{L}}{\partial x_{k}}}$$
Because it relates different coordinate systems, the first Piola–Kirchhoff stress is a two-point tensor. In general, it is not symmetric. The first Piola–Kirchhoff stress is the 3D generalization of the 1D concept of engineering stress.
If the material rotates without a change in stress state (rigid rotation), the components of the first Piola–Kirchhoff stress tensor will vary with material orientation.
The first Piola–Kirchhoff stress is energy conjugate to the deformation gradient.
It relates forces in the current configuration to areas in the reference configuration.
The second Piola–Kirchhoff stress tensor, ${\boldsymbol {S}}$, relates forces in the reference configuration to areas in the reference configuration. The force in the reference configuration is obtained via a mapping that preserves the relative relationship between the force direction and the area normal in the reference configuration.

$${\boldsymbol {S}}=J~{\boldsymbol {F}}^{-1}\cdot {\boldsymbol {\sigma }}\cdot {\boldsymbol {F}}^{-T}.$$
In index notation with respect to an orthonormal basis,

$$S_{IL}=J~F_{Ik}^{-1}~F_{Lm}^{-1}~\sigma _{km}=J~{\cfrac {\partial X_{I}}{\partial x_{k}}}~{\cfrac {\partial X_{L}}{\partial x_{m}}}~\sigma _{km}$$
This tensor, a one-point tensor, is symmetric.
If the material rotates without a change in stress state (rigid rotation), the components of the second Piola–Kirchhoff stress tensor remain constant, irrespective of material orientation.
The second Piola–Kirchhoff stress tensor is energy conjugate to the Green–Lagrange finite strain tensor.
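A small numerical sketch of these definitions follows. The deformation gradient and Cauchy stress below are arbitrary illustrative values (not from the article), and the final check shows that a superposed rigid rotation changes the components of ${\boldsymbol {P}}$ but leaves those of ${\boldsymbol {S}}$ unchanged, as stated above.

```python
# Minimal numpy sketch: P = J * sigma * F^{-T}, S = J * F^{-1} * sigma * F^{-T}.
import numpy as np

def piola_kirchhoff(F, sigma):
    J = np.linalg.det(F)
    Finv = np.linalg.inv(F)
    P = J * sigma @ Finv.T          # first Piola-Kirchhoff (two-point, generally unsymmetric)
    S = J * Finv @ sigma @ Finv.T   # second Piola-Kirchhoff (symmetric)
    return P, S

# Arbitrary illustrative deformation gradient and Cauchy stress.
F = np.array([[1.1, 0.05, 0.0],
              [0.0, 0.98, 0.02],
              [0.0, 0.0, 1.03]])
sigma = np.array([[50.0, 10.0, 0.0],
                  [10.0, 20.0, 5.0],
                  [0.0,  5.0, 30.0]])
P, S = piola_kirchhoff(F, sigma)

# Superposed rigid rotation: F -> R F, sigma -> R sigma R^T.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
P_rot, S_rot = piola_kirchhoff(R @ F, R @ sigma @ R.T)
print(np.allclose(S, S_rot), np.allclose(P, P_rot))   # True False
```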
== References ==
J. Bonet and R. D. Wood, Nonlinear Continuum Mechanics for Finite Element Analysis, Cambridge University Press. | Wikipedia/Piola–Kirchhoff_stress_tensor
In mathematics, in the fields of multilinear algebra and representation theory, the principal invariants of the second rank tensor $\mathbf {A}$ are the coefficients of the characteristic polynomial

$$p(\lambda )=\det(\mathbf {A} -\lambda \mathbf {I} ),$$

where $\mathbf {I}$ is the identity operator and $\lambda _{i}\in \mathbb {C}$ are the roots of the polynomial $p$ and the eigenvalues of $\mathbf {A}$.
More broadly, any scalar-valued function $f(\mathbf {A} )$ is an invariant of $\mathbf {A}$ if and only if $f(\mathbf {Q} \mathbf {A} \mathbf {Q} ^{T})=f(\mathbf {A} )$ for all orthogonal $\mathbf {Q}$. This means that a formula expressing an invariant in terms of components, $A_{ij}$, will give the same result for all Cartesian bases. For example, even though individual diagonal components of $\mathbf {A}$ will change with a change in basis, the sum of diagonal components will not change.
== Properties ==
The principal invariants do not change with rotations of the coordinate system (they are objective, or in more modern terminology, satisfy the principle of material frame-indifference) and any function of the principal invariants is also objective.
== Calculation of the invariants of rank two tensors ==
In a majority of engineering applications, the principal invariants of (rank two) tensors of dimension three are sought, such as those for the right Cauchy-Green deformation tensor
C
{\displaystyle \mathbf {C} }
which has the eigenvalues
λ
1
2
{\displaystyle \lambda _{1}^{2}}
,
λ
2
2
{\displaystyle \lambda _{2}^{2}}
, and
λ
3
2
{\displaystyle \lambda _{3}^{2}}
. Where
λ
1
{\displaystyle \lambda _{1}}
,
λ
2
{\displaystyle \lambda _{2}}
, and
λ
3
{\displaystyle \lambda _{3}}
are the principal stretches, i.e. the eigenvalues of
U
=
C
{\displaystyle \mathbf {U} ={\sqrt {\mathbf {C} }}}
.
=== Principal invariants ===
For such tensors, the principal invariants are given by:

$${\begin{aligned}I_{1}&=\mathrm {tr} (\mathbf {A} )=A_{11}+A_{22}+A_{33}=\lambda _{1}+\lambda _{2}+\lambda _{3}\\I_{2}&={\frac {1}{2}}\left((\mathrm {tr} (\mathbf {A} ))^{2}-\mathrm {tr} \left(\mathbf {A} ^{2}\right)\right)=A_{11}A_{22}+A_{22}A_{33}+A_{11}A_{33}-A_{12}A_{21}-A_{23}A_{32}-A_{13}A_{31}=\lambda _{1}\lambda _{2}+\lambda _{1}\lambda _{3}+\lambda _{2}\lambda _{3}\\I_{3}&=\det(\mathbf {A} )=-A_{13}A_{22}A_{31}+A_{12}A_{23}A_{31}+A_{13}A_{21}A_{32}-A_{11}A_{23}A_{32}-A_{12}A_{21}A_{33}+A_{11}A_{22}A_{33}=\lambda _{1}\lambda _{2}\lambda _{3}\end{aligned}}$$
For symmetric tensors, these definitions are reduced.
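The formulas above are easy to verify numerically. The following sketch compares the trace/determinant expressions for $I_{1},I_{2},I_{3}$ with the eigenvalue expressions for an arbitrary symmetric example matrix (the matrix is illustrative, not from the article).

```python
# Numerical check of the principal invariants I1, I2, I3 defined above.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.5]])

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

lam = np.linalg.eigvals(A)
print(np.isclose(I1, lam.sum()))                                    # lambda1 + lambda2 + lambda3
print(np.isclose(I2, lam[0]*lam[1] + lam[0]*lam[2] + lam[1]*lam[2]))
print(np.isclose(I3, lam.prod()))
```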
The correspondence between the principal invariants and the characteristic polynomial of a tensor, in tandem with the Cayley–Hamilton theorem, reveals that

$$\mathbf {A} ^{3}-I_{1}\mathbf {A} ^{2}+I_{2}\mathbf {A} -I_{3}\mathbf {I} =0$$

where $\mathbf {I}$ is the second-order identity tensor.
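This relation can also be checked numerically for a concrete matrix, as in the short sketch below (the matrix is an arbitrary example).

```python
# Quick numpy check of the Cayley-Hamilton relation A^3 - I1*A^2 + I2*A - I3*I = 0.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

residual = A @ A @ A - I1 * (A @ A) + I2 * A - I3 * np.eye(3)
print(np.allclose(residual, 0.0))   # True
```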
=== Main invariants ===
In addition to the principal invariants listed above, it is also possible to introduce the notion of main invariants

$${\begin{aligned}J_{1}&=\lambda _{1}+\lambda _{2}+\lambda _{3}=I_{1}\\J_{2}&=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}=I_{1}^{2}-2I_{2}\\J_{3}&=\lambda _{1}^{3}+\lambda _{2}^{3}+\lambda _{3}^{3}=I_{1}^{3}-3I_{1}I_{2}+3I_{3}\end{aligned}}$$

which are functions of the principal invariants above. These are the coefficients of the characteristic polynomial of the deviator $\mathbf {A} -(\mathrm {tr} (\mathbf {A} )/3)\mathbf {I}$, such that it is traceless. The separation of a tensor into a component that is a multiple of the identity and a traceless component is standard in hydrodynamics, where the former is called isotropic, providing the modified pressure, and the latter is called deviatoric, providing shear effects.
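The relations $J_{2}=I_{1}^{2}-2I_{2}$ and $J_{3}=I_{1}^{3}-3I_{1}I_{2}+3I_{3}$ can be confirmed numerically, as in the brief sketch below (again with an arbitrary example matrix).

```python
# Check the main-invariant relations by comparing eigenvalue power sums
# with the expressions in terms of I1, I2, I3.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 3.0]])
lam = np.linalg.eigvalsh(A)

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

print(np.isclose((lam**1).sum(), I1))                     # J1
print(np.isclose((lam**2).sum(), I1**2 - 2*I2))           # J2
print(np.isclose((lam**3).sum(), I1**3 - 3*I1*I2 + 3*I3)) # J3
```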
=== Mixed invariants ===
Furthermore, mixed invariants between pairs of rank two tensors may also be defined.
== Calculation of the invariants of order two tensors of higher dimension ==
These may be extracted by evaluating the characteristic polynomial directly, using the Faddeev-LeVerrier algorithm for example.
== Calculation of the invariants of higher order tensors ==
The invariants of rank three, four, and higher order tensors may also be determined.
== Engineering applications ==
A scalar function $f$ that depends entirely on the principal invariants of a tensor is objective, i.e., independent of rotations of the coordinate system. This property is commonly used in formulating closed-form expressions for the strain energy density, or Helmholtz free energy, of a nonlinear material possessing isotropic symmetry.
This technique was first introduced into isotropic turbulence by Howard P. Robertson in 1940, who was able to derive the Kármán–Howarth equation from the invariant principle. George Batchelor and Subrahmanyan Chandrasekhar exploited this technique and developed an extended treatment for axisymmetric turbulence.
=== Invariants of non-symmetric tensors ===
A real tensor $\mathbf {A}$ in 3D (i.e., one with a 3×3 component matrix) has as many as six independent invariants: three are the invariants of its symmetric part and three characterize the orientation of the axial vector of the skew-symmetric part relative to the principal directions of the symmetric part. For example, if the Cartesian components of $\mathbf {A}$ are

$$[A]={\begin{bmatrix}931&5480&-717\\-5120&1650&1090\\1533&-610&1169\end{bmatrix}},$$
the first step would be to evaluate the axial vector $\mathbf {w}$ associated with the skew-symmetric part. Specifically, the axial vector has components

$${\begin{aligned}w_{1}&={\frac {A_{32}-A_{23}}{2}}=-850\\w_{2}&={\frac {A_{13}-A_{31}}{2}}=-1125\\w_{3}&={\frac {A_{21}-A_{12}}{2}}=-5300\end{aligned}}$$
The next step finds the principal values of the symmetric part of $\mathbf {A}$. Even though the eigenvalues of a real non-symmetric tensor might be complex, the eigenvalues of its symmetric part will always be real and therefore can be ordered from largest to smallest. The corresponding orthonormal principal basis directions can be assigned senses to ensure that the axial vector $\mathbf {w}$ points within the first octant. With respect to that special basis, the components of $\mathbf {A}$ are

$$[A']={\begin{bmatrix}1875&-2500&3125\\2500&1250&-3750\\-3125&3750&625\end{bmatrix}},$$
The first three invariants of $\mathbf {A}$ are the diagonal components of this matrix: $a_{1}=A'_{11}=1875$, $a_{2}=A'_{22}=1250$, $a_{3}=A'_{33}=625$ (equal to the ordered principal values of the tensor's symmetric part). The remaining three invariants are the axial vector's components in this basis: $w'_{1}=A'_{32}=3750$, $w'_{2}=A'_{13}=3125$, $w'_{3}=A'_{21}=2500$. Note: the magnitude of the axial vector, ${\sqrt {\mathbf {w} \cdot \mathbf {w} }}$, is the sole invariant of the skew part of $\mathbf {A}$, whereas these three distinct invariants characterize (in a sense) the "alignment" between the symmetric and skew parts of $\mathbf {A}$. Incidentally, it is a myth that a tensor is positive definite if its eigenvalues are positive. Instead, it is positive definite if and only if the eigenvalues of its symmetric part are positive.
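The decomposition used in this example is easy to reproduce numerically. The sketch below splits the example matrix into symmetric and skew parts, recovers the axial vector, and returns the ordered principal values of the symmetric part (the first three invariants).

```python
# Reproduce the worked example: symmetric/skew split, axial vector, and the
# principal values of the symmetric part.
import numpy as np

A = np.array([[  931.0, 5480.0, -717.0],
              [-5120.0, 1650.0, 1090.0],
              [ 1533.0, -610.0, 1169.0]])

sym  = 0.5 * (A + A.T)
skew = 0.5 * (A - A.T)

# Axial vector of the skew part: w_i = (A_kj - A_jk)/2 with (i, j, k) cyclic.
w = np.array([skew[2, 1], skew[0, 2], skew[1, 0]])
print(w)                                    # [ -850. -1125. -5300.]

# Principal values of the symmetric part (first three invariants), ordered.
eigvals = np.sort(np.linalg.eigvalsh(sym))[::-1]
print(eigvals)                              # approximately [1875. 1250. 625.]
```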
== See also ==
Symmetric polynomial
Elementary symmetric polynomial
Newton's identities
Invariant theory
== References == | Wikipedia/Invariants_of_tensors |
In the mathematical field of differential geometry, the Riemann curvature tensor or Riemann–Christoffel tensor (after Bernhard Riemann and Elwin Bruno Christoffel) is the most common way used to express the curvature of Riemannian manifolds. It assigns a tensor to each point of a Riemannian manifold (i.e., it is a tensor field). It is a local invariant of Riemannian metrics that measures the failure of the second covariant derivatives to commute. A Riemannian manifold has zero curvature if and only if it is flat, i.e. locally isometric to the Euclidean space. The curvature tensor can also be defined for any pseudo-Riemannian manifold, or indeed any manifold equipped with an affine connection.
It is a central mathematical tool in the theory of general relativity, the modern theory of gravity. The curvature of spacetime is in principle observable via the geodesic deviation equation. The curvature tensor represents the tidal force experienced by a rigid body moving along a geodesic in a sense made precise by the Jacobi equation.
== Definition ==
Let $(M,g)$ be a Riemannian or pseudo-Riemannian manifold, and ${\mathfrak {X}}(M)$ be the space of all vector fields on $M$. We define the Riemann curvature tensor as a map ${\mathfrak {X}}(M)\times {\mathfrak {X}}(M)\times {\mathfrak {X}}(M)\rightarrow {\mathfrak {X}}(M)$ by the following formula, where $\nabla$ is the Levi-Civita connection:

$$R(X,Y)Z=\nabla _{X}\nabla _{Y}Z-\nabla _{Y}\nabla _{X}Z-\nabla _{[X,Y]}Z$$

or equivalently

$$R(X,Y)=[\nabla _{X},\nabla _{Y}]-\nabla _{[X,Y]}$$

where $[X,Y]$ is the Lie bracket of vector fields and $[\nabla _{X},\nabla _{Y}]$ is a commutator of differential operators. It turns out that the right-hand side actually only depends on the value of the vector fields $X,Y,Z$ at a given point, which is notable since the covariant derivative of a vector field also depends on the field values in a neighborhood of the point. Hence, $R$ is a $(1,3)$-tensor field. For fixed $X,Y$, the linear transformation $Z\mapsto R(X,Y)Z$ is also called the curvature transformation or endomorphism. Occasionally, the curvature tensor is defined with the opposite sign.
The curvature tensor measures noncommutativity of the covariant derivative, and as such is the integrability obstruction for the existence of an isometry with Euclidean space (called, in this context, flat space).
Since the Levi-Civita connection is torsion-free, its curvature can also be expressed in terms of the second covariant derivative $\nabla _{X,Y}^{2}Z=\nabla _{X}\nabla _{Y}Z-\nabla _{\nabla _{X}Y}Z$, which depends only on the values of $X,Y$ at a point.
The curvature can then be written as

$$R(X,Y)=\nabla _{X,Y}^{2}-\nabla _{Y,X}^{2}$$
Thus, the curvature tensor measures the noncommutativity of the second covariant derivative. In abstract index notation,

$$R^{d}{}_{cab}Z^{c}=\nabla _{a}\nabla _{b}Z^{d}-\nabla _{b}\nabla _{a}Z^{d}.$$

The Riemann curvature tensor is also the commutator of the covariant derivative of an arbitrary covector $A_{\nu }$ with itself:

$$A_{\nu ;\rho \sigma }-A_{\nu ;\sigma \rho }=A_{\beta }R^{\beta }{}_{\nu \rho \sigma }.$$
This formula is often called the Ricci identity. This is the classical method used by Ricci and Levi-Civita to obtain an expression for the Riemann curvature tensor. This identity can be generalized to get the commutators for two covariant derivatives of arbitrary tensors as follows:

$${\begin{aligned}&\nabla _{\delta }\nabla _{\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\nabla _{\gamma }\nabla _{\delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}\\[3pt]={}&R^{\alpha _{1}}{}_{\rho \delta \gamma }T^{\rho \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}+\ldots +R^{\alpha _{r}}{}_{\rho \delta \gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\rho }{}_{\beta _{1}\cdots \beta _{s}}-R^{\sigma }{}_{\beta _{1}\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\sigma \beta _{2}\cdots \beta _{s}}-\ldots -R^{\sigma }{}_{\beta _{s}\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\sigma }\end{aligned}}$$

This formula also applies to tensor densities without alteration, because for the Levi-Civita (not generic) connection one gets:

$$\nabla _{\mu }\left({\sqrt {g}}\right)\equiv \left({\sqrt {g}}\right)_{;\mu }=0,$$

where $g=\left|\det \left(g_{\mu \nu }\right)\right|.$
It is sometimes convenient to also define the purely covariant version of the curvature tensor by

$$R_{\sigma \mu \nu \rho }=g_{\rho \zeta }R^{\zeta }{}_{\sigma \mu \nu }.$$
== Geometric meaning ==
=== Informally ===
One can see the effects of curved space by comparing a tennis court and the Earth. Start at the lower right corner of the tennis court, with a racket held out towards north. Then while walking around the outline of the court, at each step make sure the tennis racket is maintained in the same orientation, parallel to its previous positions. Once the loop is complete the tennis racket will be parallel to its initial starting position. This is because tennis courts are built so the surface is flat. On the other hand, the surface of the Earth is curved: we can complete a loop on the surface of the Earth. Starting at the equator, point a tennis racket north along the surface of the Earth. Once again the tennis racket should always remain parallel to its previous position, using the local plane of the horizon as a reference. For this path, first walk to the north pole, then walk sideways (i.e. without turning), then down to the equator, and finally walk backwards to your starting position. Now the tennis racket will be pointing towards the west, even though when you began your journey it pointed north and you never turned your body. This process is akin to parallel transporting a vector along the path and the difference identifies how lines which appear "straight" are only "straight" locally. Each time a loop is completed the tennis racket will be deflected further from its initial position by an amount depending on the distance and the curvature of the surface. It is possible to identify paths along a curved surface where parallel transport works as it does on flat space. These are the geodesics of the space, for example any segment of a great circle of a sphere.
The concept of a curved space in mathematics differs from conversational usage. For example, if the above process was completed on a cylinder one would find that it is not curved overall as the curvature around the cylinder cancels with the flatness along the cylinder, which is a consequence of Gaussian curvature and Gauss's Theorema Egregium. A familiar example of this is a floppy pizza slice, which will remain rigid along its length if it is curved along its width.
The Riemann curvature tensor is a way to capture a measure of the intrinsic curvature. When you write it down in terms of its components (like writing down the components of a vector), it consists of a multi-dimensional array of sums and products of partial derivatives (some of those partial derivatives can be thought of as akin to capturing the curvature imposed upon someone walking in straight lines on a curved surface).
=== Formally ===
When a vector in a Euclidean space is parallel transported around a loop, it will again point in the initial direction after returning to its original position. However, this property does not hold in the general case. The Riemann curvature tensor directly measures the failure of this in a general Riemannian manifold. This failure is known as the non-holonomy of the manifold.
Let $x_{t}$ be a curve in a Riemannian manifold $M$. Denote by $\tau _{x_{t}}:T_{x_{0}}M\to T_{x_{t}}M$ the parallel transport map along $x_{t}$. The parallel transport maps are related to the covariant derivative by

$$\nabla _{{\dot {x}}_{0}}Y=\lim _{h\to 0}{\frac {1}{h}}\left(\tau _{x_{h}}^{-1}\left(Y_{x_{h}}\right)-Y_{x_{0}}\right)=\left.{\frac {d}{dt}}\left(\tau _{x_{t}}^{-1}(Y_{x_{t}})\right)\right|_{t=0}$$

for each vector field $Y$ defined along the curve.
Suppose that $X$ and $Y$ are a pair of commuting vector fields. Each of these fields generates a one-parameter group of diffeomorphisms in a neighborhood of $x_{0}$. Denote by $\tau _{tX}$ and $\tau _{tY}$, respectively, the parallel transports along the flows of $X$ and $Y$ for time $t$. Parallel transport of a vector $Z\in T_{x_{0}}M$ around the quadrilateral with sides $tY$, $sX$, $-tY$, $-sX$ is given by

$$\tau _{sX}^{-1}\tau _{tY}^{-1}\tau _{sX}\tau _{tY}Z.$$
The difference between this and $Z$ measures the failure of parallel transport to return $Z$ to its original position in the tangent space $T_{x_{0}}M$. Shrinking the loop by sending $s,t\to 0$ gives the infinitesimal description of this deviation:

$$\left.{\frac {d}{ds}}{\frac {d}{dt}}\tau _{sX}^{-1}\tau _{tY}^{-1}\tau _{sX}\tau _{tY}Z\right|_{s=t=0}=\left(\nabla _{X}\nabla _{Y}-\nabla _{Y}\nabla _{X}-\nabla _{[X,Y]}\right)Z=R(X,Y)Z$$

where $R$ is the Riemann curvature tensor.
== Coordinate expression ==
Converting to the tensor index notation, the Riemann curvature tensor is given by

$$R^{\rho }{}_{\sigma \mu \nu }=dx^{\rho }\left(R\left(\partial _{\mu },\partial _{\nu }\right)\partial _{\sigma }\right)$$

where $\partial _{\mu }=\partial /\partial x^{\mu }$ are the coordinate vector fields. The above expression can be written using Christoffel symbols:

$$R^{\rho }{}_{\sigma \mu \nu }=\partial _{\mu }\Gamma ^{\rho }{}_{\nu \sigma }-\partial _{\nu }\Gamma ^{\rho }{}_{\mu \sigma }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }$$

(See also List of formulas in Riemannian geometry).
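The coordinate formula above can be evaluated mechanically with a computer algebra system. The following sympy sketch builds the Christoffel symbols from a metric and assembles $R^{\rho }{}_{\sigma \mu \nu }$; the unit 2-sphere is used as an illustrative example (not taken from the article), for which $R^{\theta }{}_{\phi \theta \phi }=\sin ^{2}\theta$.

```python
# Sketch: Christoffel symbols and R^rho_{sigma mu nu} from a metric, via sympy.
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # metric of the unit 2-sphere
g_inv = g.inv()
n = len(coords)

# Christoffel symbols Gamma^r_{mn} = 1/2 g^{rs} (d_m g_{sn} + d_n g_{sm} - d_s g_{mn})
Gamma = [[[sum(g_inv[r, s] * (sp.diff(g[s, k], coords[m]) + sp.diff(g[s, m], coords[k])
                              - sp.diff(g[m, k], coords[s])) for s in range(n)) / 2
           for k in range(n)] for m in range(n)] for r in range(n)]

def riemann(rho, sigma, mu, nu):
    """R^rho_{sigma mu nu} assembled from the Christoffel symbols."""
    term = sp.diff(Gamma[rho][nu][sigma], coords[mu]) - sp.diff(Gamma[rho][mu][sigma], coords[nu])
    term += sum(Gamma[rho][mu][lam] * Gamma[lam][nu][sigma]
                - Gamma[rho][nu][lam] * Gamma[lam][mu][sigma] for lam in range(n))
    return sp.simplify(term)

print(riemann(0, 1, 0, 1))   # R^theta_{phi theta phi} = sin(theta)**2
```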
== Symmetries and identities ==
The Riemann curvature tensor has the following symmetries and identities:

Skew symmetry: $R(X,Y)=-R(Y,X)$
Skew symmetry: $\langle R(X,Y)Z,W\rangle =-\langle R(X,Y)W,Z\rangle$
First (algebraic) Bianchi identity: $R(X,Y)Z+R(Y,Z)X+R(Z,X)Y=0$, i.e. $R^{\rho }{}_{[\sigma \mu \nu ]}=0$
Interchange symmetry: $\langle R(X,Y)Z,W\rangle =\langle R(Z,W)X,Y\rangle$
Second (differential) Bianchi identity: $(\nabla _{X}R)(Y,Z)+(\nabla _{Y}R)(Z,X)+(\nabla _{Z}R)(X,Y)=0$, i.e. $R^{\rho }{}_{\sigma [\mu \nu ;\lambda ]}=0$

Here the bracket $\langle \cdot ,\cdot \rangle$ refers to the inner product on the tangent space induced by the metric tensor, and the brackets and parentheses on the indices denote the antisymmetrization and symmetrization operators, respectively. If there is nonzero torsion, the Bianchi identities involve the torsion tensor.
The first (algebraic) Bianchi identity was discovered by Ricci, but is often called the first Bianchi identity or algebraic Bianchi identity, because it looks similar to the differential Bianchi identity.
The first three identities form a complete list of symmetries of the curvature tensor, i.e. given any tensor which satisfies the identities above, one can find a Riemannian manifold with such a curvature tensor at some point. Simple calculations show that such a tensor has $n^{2}\left(n^{2}-1\right)/12$ independent components. Interchange symmetry follows from these. The algebraic symmetries are also equivalent to saying that R belongs to the image of the Young symmetrizer corresponding to the partition 2+2.
On a Riemannian manifold one has the covariant derivative $\nabla _{u}R$, and the Bianchi identity (often called the second Bianchi identity or differential Bianchi identity) takes the form of the last identity listed above.
== Ricci curvature ==
The Ricci curvature tensor is the contraction of the first and third indices of the Riemann tensor.
$$\underbrace {R_{ab}} _{\text{Ricci}}\equiv R^{c}{}_{acb}=g^{cd}\underbrace {R_{cadb}} _{\text{Riemann}}$$
== Special cases ==
=== Surfaces ===
For a two-dimensional surface, the Bianchi identities imply that the Riemann tensor has only one independent component, which means that the Ricci scalar completely determines the Riemann tensor. There is only one valid expression for the Riemann tensor which fits the required symmetries:
$$R_{abcd}=f(R)\left(g_{ac}g_{db}-g_{ad}g_{cb}\right)$$

and by contracting with the metric twice we find the explicit form:

$$R_{abcd}=K\left(g_{ac}g_{db}-g_{ad}g_{cb}\right),$$

where $g_{ab}$ is the metric tensor, $K=R/2$ is a function called the Gaussian curvature, and $a$, $b$, $c$ and $d$ take values either 1 or 2. The Riemann tensor has only one functionally independent component. The Gaussian curvature coincides with the sectional curvature of the surface. It is also exactly half the scalar curvature of the 2-manifold, while the Ricci curvature tensor of the surface is simply given by

$$R_{ab}=Kg_{ab}.$$
=== Space forms ===
A Riemannian manifold is a space form if its sectional curvature is equal to a constant $K$. The Riemann tensor of a space form is given by

$$R_{abcd}=K\left(g_{ac}g_{db}-g_{ad}g_{cb}\right).$$
Conversely, except in dimension 2, if the curvature of a Riemannian manifold has this form for some function $K$, then the Bianchi identities imply that $K$ is constant and thus that the manifold is (locally) a space form.
== See also ==
Introduction to the mathematics of general relativity
Decomposition of the Riemann curvature tensor
Curvature of Riemannian manifolds
Ricci curvature tensor
== Citations ==
== References == | Wikipedia/Riemann–Christoffel_curvature_tensor |