Parametrization (or parameterization) in an atmospheric model (either a weather model or a climate model) is a method of replacing processes that are too small-scale or complex to be physically represented in the model by a simplified process. This can be contrasted with other processes (e.g., the large-scale flow of the atmosphere) that are explicitly resolved within the models. Associated with these parametrizations are various parameters used in the simplified processes. Examples include the descent rate of raindrops, convective clouds, simplifications of atmospheric radiative transfer on the basis of atmospheric radiative transfer codes, and cloud microphysics. Radiative parametrizations are important to atmospheric and oceanic modeling alike. Atmospheric emissions from different sources within individual grid boxes also need to be parametrized to determine their impact on air quality.
== Clouds ==
Weather and climate model gridboxes have sides of between 5 kilometres (3.1 mi) and 300 kilometres (190 mi). A typical cumulus cloud has a scale of less than 1 kilometre (0.62 mi) and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parametrized by schemes of varying sophistication. In the earliest models, if a column of air in a model gridbox was unstable (i.e., the bottom warmer than the top) then it would be overturned and the air in that vertical column mixed. More sophisticated schemes add enhancements, recognizing that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sides between 5 kilometres (3.1 mi) and 25 kilometres (16 mi) can explicitly represent convective clouds, although they still need to parametrize cloud microphysics.
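To make the earliest approach concrete, the following is a minimal Python sketch of dry convective adjustment on a single model column; it is not any operational scheme, and the layer values and the equal-layer-mass mixing assumption are invented for illustration.

def convective_adjustment(theta, max_sweeps=100):
    # Mix adjacent layers of a column until no layer is warmer (in
    # potential temperature) than the layer above it.
    # `theta` is ordered bottom-to-top; equal layer masses are assumed.
    theta = list(theta)
    for _ in range(max_sweeps):
        adjusted = False
        for k in range(len(theta) - 1):
            if theta[k] > theta[k + 1]:            # statically unstable pair
                mean = 0.5 * (theta[k] + theta[k + 1])
                theta[k] = theta[k + 1] = mean     # mix the two layers
                adjusted = True
        if not adjusted:                           # column is now stable
            break
    return theta

print(convective_adjustment([302.0, 299.0, 300.0, 301.0]))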
The formation of large-scale (stratus-type) clouds is more physically based: they form when the relative humidity reaches some prescribed value. Still, sub-grid-scale processes need to be taken into account. Rather than assuming that clouds form at 100% relative humidity, the cloud fraction can be related to a critical relative humidity, typically 70% for stratus-type clouds and 80% or above for cumuliform clouds, reflecting the sub-grid-scale variation that would occur in the real world. Portions of the precipitation parametrization include the condensation rate, the energy exchanges involved in the change of state from water vapor to liquid drops, and the microphysical component, which controls the rate at which cloud droplets are converted into raindrops.
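A minimal sketch of one such diagnostic, assuming a simple linear ramp between the critical relative humidity and saturation; operational schemes use more elaborate functional forms.

def cloud_fraction(rh, rh_crit):
    # Diagnose a sub-grid cloud fraction from the grid-mean relative
    # humidity: zero below the critical value, one at saturation,
    # and a linear ramp in between (one of many possible forms).
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return (rh - rh_crit) / (1.0 - rh_crit)

print(cloud_fraction(0.85, 0.70))   # stratus-type threshold
print(cloud_fraction(0.85, 0.80))   # cumuliform threshold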
== Radiation and atmosphere-surface interaction ==
The amount of solar radiation reaching ground level in rugged terrain, or due to variable cloudiness, is parametrized as this process occurs on the molecular scale. This method of parametrization is also done for the surface flux of energy between the ocean and the atmosphere in order to determine realistic sea surface temperatures and type of sea ice found near the ocean's surface. Also, the grid size of the models is large when compared to the actual size and roughness of clouds and topography. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere. Thus, they are important to parametrize.
== Air quality ==
Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by transport, diffusion, chemical transformation, and ground deposition. Alongside pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. Within air quality models, parametrizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes.
== Eddies ==
The ocean (and, although more variably, the atmosphere) is stratified by density. At rest, surfaces of constant density (known as isopycnals in the ocean) are parallel to surfaces of constant pressure (isobars). However, various processes such as geostrophy and upwelling can result in isopycnals becoming tilted relative to isobars. These tilted density surfaces represent a source of potential energy and, if the slope becomes steep enough, a fluid instability known as baroclinic instability can be triggered. Baroclinic instability generates eddies, which act to flatten the density surfaces through the slantwise exchange of fluid.
The resulting eddies form at a characteristic scale called the Rossby deformation radius. This scale depends on the strength of stratification and on the Coriolis parameter (which in turn depends on the latitude). As a result, baroclinic eddies form on scales of around 1° (~100 km) in the tropics, but less than 1/12° (~10 km) at the poles and in some shelf seas. Most climate models, such as those run as part of CMIP experiments, use an ocean resolution of between 1° and 1/4°, and therefore cannot resolve baroclinic eddies across large parts of the ocean, particularly at the poles. However, high-latitude baroclinic eddies are important for many ocean processes such as the Atlantic Meridional Overturning Circulation (AMOC), which affects global climate. As a result, the effects of eddies are parametrized in climate models, for example through the widely used Gent–McWilliams (GM) parametrization, which represents the isopycnal-flattening effect of eddies as advection (often misinterpreted as diffusion of surfaces). This parametrization is not perfect: for instance, it may overpredict the sensitivity of the Antarctic Circumpolar Current and AMOC to the strength of winds over the Southern Ocean. As a result, alternative parametrizations are being developed to improve the representation of eddies in ocean models.
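The latitude dependence can be illustrated with a rough estimate of the first baroclinic deformation radius as N·H/(π·|f|), one common approximation; the buoyancy frequency N and depth H below are illustrative round numbers, not values from any particular model.

import math

OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s

def rossby_radius(lat_deg, N=2e-3, H=4000.0):
    # First baroclinic Rossby deformation radius, roughly N*H/(pi*|f|),
    # where f = 2*Omega*sin(latitude) is the Coriolis parameter.
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return N * H / (math.pi * abs(f))

for lat in (5, 30, 60, 80):
    print(f"{lat:2d} deg latitude: ~{rossby_radius(lat) / 1000:.0f} km")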
== Problems with increased resolution ==
As model resolution increases, errors associated with moist convective processes grow, as assumptions that are statistically valid for larger grid boxes become questionable once the grid boxes shrink toward the scale of the convection itself. At resolutions greater than T639, which has a grid box dimension of about 30 kilometres (19 mi), the Arakawa–Schubert convective scheme produces minimal convective precipitation, making most precipitation unrealistically stratiform in nature.
== Calibration ==
When a physical process is parametrized, two choices have to be made: the structural form (for instance, two variables can be related linearly) and the exact values of the parameters (for instance, the constant of proportionality). The process of determining the exact values of the parameters in a parametrization is called calibration, or, sometimes less precisely, tuning. Calibration is a difficult process, and different strategies are used to do it. One popular method is to run a model, or a submodel, and compare it to a small set of selected metrics, such as temperature. The parameter values that lead to the model run that best resembles reality are chosen.
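A minimal sketch of this strategy: an invented toy "model" (a linear response with an unknown slope) is run for each candidate parameter value, and the value minimising the mean-squared error against the observed series is kept. Real calibration involves far richer models, metrics, and search methods.

def calibrate(run_model, observed, candidates):
    # Brute-force calibration: run the (sub)model for each candidate
    # parameter value and keep the one minimising mean-squared error
    # against the observed metric (e.g. a temperature series).
    def mse(simulated):
        return sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(observed)
    return min(candidates, key=lambda p: mse(run_model(p)))

observed = [0.0, 1.1, 1.9, 3.2]
model = lambda slope: [slope * t for t in range(4)]
print(calibrate(model, observed, [0.5, 1.0, 1.5]))   # -> 1.0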
== See also ==
General circulation model
Climate ensemble
Parametrization
== References ==
== Further reading ==
Plant, Robert S; Yano, Jun-Ichi (2015). Parameterization of Atmospheric Convection. Imperial College Press. ISBN 978-1-78326-690-6.
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.
Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:
Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
== Definition ==
In mathematics, a linear map (or linear function) f(x) is one which satisfies both of the following properties:
Additivity or superposition principle: f(x + y) = f(x) + f(y);
Homogeneity: f(αx) = αf(x).
Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle
f(αx + βy) = αf(x) + βf(y)
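The principle can be checked numerically; in this minimal sketch (the test points and coefficients are chosen arbitrarily), a linear map passes the test and a quadratic map fails it.

def is_superposed(f, x, y, alpha, beta, tol=1e-9):
    # Test f(a*x + b*y) == a*f(x) + b*f(y) at a single point.
    return abs(f(alpha * x + beta * y) - (alpha * f(x) + beta * f(y))) < tol

linear = lambda x: 3.0 * x      # satisfies superposition
square = lambda x: x * x        # does not
print(is_superposed(linear, 1.0, 2.0, 0.5, -1.5))   # True
print(is_superposed(square, 1.0, 2.0, 0.5, -1.5))   # False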
An equation written as f(x) = C is called linear if f(x) is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0 and f(x) is a homogeneous function.
The definition f(x) = C is very general in that x can be any sensible mathematical object (number, vector, function, etc.), and the function f(x) can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f(x) contains differentiation with respect to x, the result will be a differential equation.
== Nonlinear systems of equations ==
A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation.
For a single equation of the form
f
(
x
)
=
0
,
{\displaystyle f(x)=0,}
many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a polynomial equation such as
x² + x − 1 = 0.
General root-finding algorithms apply to polynomial roots, but they generally do not find all the roots, and when they fail to find a root, this does not imply that there are no roots. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation.
Solving systems of polynomial equations, that is, finding the common zeros of a set of several polynomials in several variables, is a difficult problem for which elaborate algorithms have been designed, such as Gröbner basis algorithms.
For the general case of a system of equations formed by equating several differentiable functions to zero, the main method is Newton's method and its variants. Generally these may provide a solution, but do not provide any information on the number of solutions.
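A minimal sketch of Newton's method for two equations in two unknowns, with a hand-coded Jacobian and the 2×2 linear solve done by Cramer's rule; the example system is invented, and the method returns whichever single root the initial guess happens to converge to.

def newton2(F, J, x, y, steps=20, tol=1e-12):
    # Newton's method for a 2x2 system F(x, y) = (0, 0);
    # J(x, y) returns the Jacobian entries ((a, b), (c, d)).
    for _ in range(steps):
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c
        dx = (f1 * d - f2 * b) / det     # solve J * delta = F (Cramer's rule)
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
    return x, y

# The circle x^2 + y^2 = 1 intersected with the parabola y = x^2.
F = lambda x, y: (x * x + y * y - 1.0, y - x * x)
J = lambda x, y: ((2 * x, 2 * y), (-2 * x, 1.0))
print(newton2(F, J, 1.0, 1.0))   # converges to one of the two roots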
== Nonlinear recurrence relations ==
A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures. These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
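As a concrete example, the following short sketch iterates the logistic map and shows the sensitive dependence on initial conditions typical of its chaotic regime (the parameter r = 4 and the starting values are chosen for illustration).

def logistic_orbit(r, x0, n):
    # Iterate the logistic map x -> r * x * (1 - x) for n steps.
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two nearby initial conditions diverge after a few dozen iterations.
a = logistic_orbit(4.0, 0.2, 30)
b = logistic_orbit(4.0, 0.2 + 1e-9, 30)
print(a[-1], b[-1])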
== Nonlinear differential equations ==
A system of differential equations is said to be nonlinear if it is not a system of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.
One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations, however the lack of a superposition principle prevents the construction of new solutions.
=== Ordinary differential equations ===
First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation
\frac{du}{dx} = -u^{2}
has u = 1/(x + C) as a general solution (and also the special solution u = 0, corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as
\frac{du}{dx} + u^{2} = 0
and the left-hand side of the equation is not a linear function of u and its derivatives. Note that if the u² term were replaced with u, the problem would be linear (the exponential decay problem).
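The general solution can be checked against a direct numerical integration; below is a minimal forward-Euler sketch (step counts chosen arbitrarily), which converges to the exact value u(2) = 1/3 for the initial condition u(0) = 1, i.e. C = 1.

def euler(u0, x_end, n):
    # Forward-Euler integration of du/dx = -u^2 from x = 0 to x_end.
    h, u = x_end / n, u0
    for _ in range(n):
        u -= h * u * u
    return u

u0, x_end = 1.0, 2.0
exact = 1.0 / (x_end + 1.0)     # u = 1/(x + C) with C = 1/u0 = 1
print(euler(u0, x_end, 10), euler(u0, x_end, 10000), exact)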
Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.
Common methods for the qualitative analysis of nonlinear ordinary differential equations include:
Examination of any conserved quantities, especially in Hamiltonian systems
Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
Linearization via Taylor expansion
Change of variables into something easier to study
Bifurcation theory
Perturbation methods (can be applied to algebraic equations too)
Existence of finite-duration solutions, which can occur under specific conditions for some non-linear ordinary differential equations.
=== Partial differential equations ===
The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.
Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier–Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one-dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one-dimensional and also yields the simplified equation.
Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.
=== Pendula ===
A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation
\frac{d^{2}\theta}{dt^{2}} + \sin(\theta) = 0
where gravity points "downwards" and θ is the angle the pendulum forms with its rest position, as shown in the figure at right. One approach to "solving" this equation is to use dθ/dt as an integrating factor, which would eventually yield

\int \frac{d\theta}{\sqrt{C_{0} + 2\cos(\theta)}} = t + C_{1}
which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless C₀ = 2).
Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at θ = 0, called the small angle approximation, is

\frac{d^{2}\theta}{dt^{2}} + \theta = 0

since sin(θ) ≈ θ for θ ≈ 0.
This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at θ = π, corresponding to the pendulum being straight up:

\frac{d^{2}\theta}{dt^{2}} + \pi - \theta = 0
since sin(θ) ≈ π − θ for θ ≈ π. The solution to this problem involves hyperbolic sinusoids; note that, unlike the small angle approximation, this approximation is unstable, meaning that |θ| will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.
One more interesting linearization is possible around θ = π/2, around which sin(θ) ≈ 1:

\frac{d^{2}\theta}{dt^{2}} + 1 = 0.
This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure at right. Other techniques may be used to find (exact) phase portraits and approximate periods.
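A short numerical sketch (semi-implicit Euler, with arbitrarily chosen step counts) comparing the full equation with the small angle approximation: the two agree closely for a small initial angle and drift apart for a large one.

import math

def pendulum(theta0, t_end, n, linear=False):
    # Integrate theta'' = -sin(theta) (or theta'' = -theta if linear)
    # from rest, using the semi-implicit Euler method.
    h, th, w = t_end / n, theta0, 0.0
    for _ in range(n):
        w -= h * (th if linear else math.sin(th))
        th += h * w
    return th

for theta0 in (0.1, 1.5):       # small versus large initial angle
    full = pendulum(theta0, 10.0, 100000)
    approx = pendulum(theta0, 10.0, 100000, linear=True)
    print(theta0, full, approx)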
== Types of nonlinear dynamic behaviors ==
Amplitude death – any oscillations present in the system cease due to some kind of interaction with other system or feedback by the same system
Chaos – values of a system cannot be predicted indefinitely far into the future, and fluctuations are aperiodic
Multistability – the presence of two or more stable states
Solitons – self-reinforcing solitary waves
Limit cycles – asymptotic periodic orbits to which destabilized fixed points are attracted.
Self-oscillations – feedback oscillations taking place in open dissipative physical systems.
== Examples of nonlinear equations ==
== See also ==
== References ==
== Further reading ==
== External links ==
Command and Control Research Program (CCRP)
New England Complex Systems Institute: Concepts in Complex Systems
Nonlinear Dynamics I: Chaos at MIT's OpenCourseWare
Nonlinear Model Library – (in MATLAB) a Database of Physical Systems
The Center for Nonlinear Studies at Los Alamos National Laboratory
John Wiley & Sons, Inc., commonly known as Wiley (), is an American multinational publishing company that focuses on academic publishing and instructional materials. The company was founded in 1807 and produces books, journals, and encyclopedias, in print and electronically, as well as online products and services, training materials, and educational materials for undergraduate, graduate, and continuing education students.
== History ==
The company was established in 1807 when Charles Wiley opened a print shop in Manhattan. The company was the publisher of 19th-century American literary figures like James Fenimore Cooper, Washington Irving, Herman Melville, and Edgar Allan Poe, as well as of legal, religious, and other non-fiction titles. The firm took its current name in 1876. Wiley later shifted its focus to scientific, technical, and engineering subject areas, abandoning its literary interests.
Wiley's son John (born in Flatbush, New York, October 4, 1808; died in East Orange, New Jersey, February 21, 1891) took over the business when Charles Wiley died in 1826. The firm was successively named Wiley, Lane & Co., then Wiley & Putnam, and then John Wiley. The company acquired its present name in 1876, when John's second son William H. Wiley joined his brother Charles in the business.
Through the 20th century, the company expanded its publishing activities into the sciences and higher education.
In 1960 Wiley set up a European branch in London, which later moved to Chichester.
In 1982, Wiley acquired the publishing operations of the British firm Heyden & Son.
In 1989, Wiley acquired the life science publisher Liss.
In 1996, Wiley acquired the German technical publisher VCH.
In 1997, Wiley acquired the professional publisher Van Nostrand Reinhold (the successor to the company started by David Van Nostrand) from Thomson Learning.
In 1999, Wiley acquired the professional publisher Jossey-Bass from Pearson.
In 2001, Wiley acquired the publisher Hungry Minds (formerly IDG Books, including most titles formerly published by Macmillan General Reference) from International Data Group.
In 2005, Wiley acquired the British medical publisher Whurr.
Wiley marked its bicentennial in 2007. In conjunction with the anniversary, the company published Knowledge for Generations: Wiley and the Global Publishing Industry, 1807–2007, depicting Wiley's role in the evolution of publishing against a social, cultural, and economic backdrop. Wiley has also created an online community called Wiley Living History, offering excerpts from Knowledge for Generations and a forum for visitors and Wiley employees to post their comments and anecdotes.
In 2021, Wiley acquired Hindawi and J&J Editorial.
In 2023, Academic Partnerships acquired Wiley's online education business for $150 million.
=== High-growth and emerging markets ===
In December 2010, Wiley opened an office in Dubai. Wiley established publishing operations in India in 2006 (though it has had a sales presence since 1966), and has established a presence in North Africa through sales contracts with academic institutions in Tunisia, Libya, and Egypt. On April 16, 2012, the company announced the establishment of Wiley Brasil Editora LTDA in São Paulo, Brazil, effective May 1, 2012.
=== Strategic acquisition and divestiture ===
Wiley's scientific, technical, and medical business was expanded by the acquisition of Blackwell Publishing in February 2007 for US$1.12 billion, its largest purchase to that time. The combined business, named Scientific, Technical, Medical, and Scholarly (also known as Wiley-Blackwell), publishes, in print and online, 1,600 scholarly peer-reviewed journals and an extensive collection of books, reference works, databases, and laboratory manuals in the life and physical sciences, medicine and allied health, engineering, the humanities, and the social sciences.
Through a backfile initiative completed in 2007, 8.2 million pages of journal content have been made available online, a collection dating back to 1799. Wiley-Blackwell also publishes on behalf of about 700 professional and scholarly societies; among them are the American Cancer Society (ACS), for which it publishes Cancer, the flagship ACS journal; the Sigma Theta Tau International Honor Society of Nursing; and the American Anthropological Association. Other journals published include Angewandte Chemie, Advanced Materials, Hepatology, International Finance and Liver Transplantation.
Launched as a pilot in 1997 with fifty journals and expanded through 1998, Wiley Interscience provided online access to Wiley journals, reference works, and books, including backfile content. Journals previously from Blackwell Publishing were available online from Blackwell Synergy until they were integrated into Wiley Interscience on June 30, 2008. In December 2007, Wiley also began distributing its technical titles through the Safari Books Online e-reference service. Interscience was supplanted by Wiley Online Library in 2010.
On February 17, 2012, Wiley announced the acquisition of Inscape Holdings Inc., which provides DISC assessments and training for interpersonal business skills. A month later, Wiley announced its intention to divest assets in the areas of travel (including the Frommer's brand), culinary, general interest, nautical, pets, and crafts, as well as the Webster's New World and CliffsNotes brands. The planned divestiture was aligned with Wiley's "increased strategic focus on content and services for research, learning, and professional practices, and on lifelong learning through digital technology". In May 2012, the company acquired publishing company Harlan Davidson, Inc., which is a family-owned business based in Illinois. On August 13 of the same year, Wiley announced it entered into a definitive agreement to sell all of its travel assets, including all of its interests in the Frommer's brand, to Google Inc. On November 6, 2012, Houghton Mifflin Harcourt acquired Wiley's cookbooks, dictionaries and study guides. In 2013, Wiley sold its pets, crafts and general interest lines to Turner Publishing Company and its nautical line to Fernhurst Books. HarperCollins acquired parts of Wiley Canada's trade operations in 2013; the remaining Canadian trade operations were merged into Wiley U.S.
In 2021, Wiley acquired the Hindawi publishing firm for $298 million in cash to expand its open access journals portfolio.
Wiley stated it would keep the Hindawi journals under their previous brand and continue developing the open source publishing platform Phenom. In 2023, after more than 7,000 article retractions in Hindawi journals related to the publication of articles originating from paper mills, Wiley announced that it would cease using the Hindawi brand and integrate Hindawi's 200 remaining journals into its main portfolio. The Wiley CEO who initiated the Hindawi acquisition stepped down in the wake of those announcements.
In 2021, Wiley announced the acquisition of eJournalPress (EJP), a provider of web-based technology solutions for scholarly publishing companies.
== Products ==
=== Brands and partnerships ===
Wiley's Professional Development brands include For Dummies, Jossey-Bass, Pfeiffer, Wrox Press, J.K. Lasser, Sybex, Fisher Investments Press, and Bloomberg Press. The STMS business is also known as Wiley-Blackwell, formed following the acquisition of Blackwell Publishing in February 2007. Brands include The Cochrane Library and more than 1,500 journals.
Wiley has publishing alliances with partners including Microsoft, CFA Institute, the Culinary Institute of America, the American Institute of Architects, the National Geographic Society, and the Institute of Electrical and Electronics Engineers (IEEE). Wiley-Blackwell also publishes journals on behalf of more than 700 professional and scholarly society partners including the New York Academy of Sciences, American Cancer Society, The Physiological Society, British Ecological Society, American Association of Anatomists, Society for the Psychological Study of Social Issues and The London School of Economics and Political Science, making it the world's largest society publisher.
Wiley partners with GreyCampus to provide professional learning solutions around big data and digital literacy. Wiley has also partnered with five other higher-education publishers to create CourseSmart, a company developed to sell college textbooks in eTextbook format on a common platform. In 2002, Wiley created a partnership with French publisher Anuman Interactive in order to launch a series of e-books adapted from the For Dummies collection. In 2013, Wiley partnered with American Graphics Institute to create an online education video and e-book subscription service called The Digital Classroom.
In 2016, Wiley launched a worldwide partnership with Christian H. Cooper to create a program for candidates taking the Financial Risk Manager (FRM) exam offered by the Global Association of Risk Professionals. The program is built on the existing Wiley Efficient Learning platform together with Cooper's legacy FRM book series, and is intended to serve tens of thousands of FRM candidates worldwide. The partnership rests on the view that the FRM designation will rapidly grow to be one of the premier financial designations for practitioners, tracking the growth of the Chartered Financial Analyst designation.
With the integration of digital technology and the traditional print medium, Wiley has stated that in the near future its customers will be able to search across all its content regardless of original medium and assemble a custom product in the format of choice. Web resources are also enabling new types of publisher-customer interactions within the company's various businesses.
=== Open access ===
In 2016, Wiley started a collaboration with the open access publisher Hindawi to help convert nine Wiley journals to full open access. In 2018 a further announcement was made indicating that the Wiley-Hindawi collaboration would launch an additional four new fully open access journals.
On January 18, 2019, Wiley signed a contract with Project DEAL to begin open access to its academic journals for more than 700 academic institutions. It is the first contract between a publisher and a leading research nation (Germany) toward open access to scientific research.
=== Higher education ===
Higher Education's "WileyPLUS" is an online product that combines electronic versions of texts with media resources and tools for instructors and students. It is intended to provide a single source from which instructors can manage their courses, create presentations, and assign and grade homework and tests; students can receive hints and explanations as they work on homework, and link back to relevant sections of the text.
"Wiley Custom Select" launched in February 2009 as a custom textbook system allowing instructors to combine content from different Wiley textbooks and lab manuals and add in their own material. The company has begun to make content from its STMS business available to instructors through the system, with content from its Professional/Trade business to follow.
In September 2019, Wiley entered into a collaboration with IIM Lucknow to offer analytics courses for finance executives.
=== Online Program Management ===
In November 2011, Wiley Education Services announced the purchase of Deltak for $220 million. Wiley later acquired The Learning House in 2018. This made Wiley one of the largest OPM providers at the time, with 60 university partners and more than 700 online programs.
In June 2023, Wiley announced it would divest several business units, including Wiley University Services. Wiley's 2023 full-year revenue from this business was $208 million, an 8% reduction from the prior year. In 2020, Wiley had reported $232 million in OPM revenue, with organic growth of 11% over the prior year.
In November 2023, Academic Partnerships announced they would purchase Wiley's OPM business for $110 million.
=== Medicine ===
In January 2008, Wiley launched a new version of its evidence-based medicine (EBM) product, InfoPOEMs with InfoRetriever, under the name Essential Evidence Plus, providing primary-care clinicians with point-of-care access to the most extensive source of EBM information via their PDAs/handheld devices and desktop computers. Essential Evidence Plus includes the InfoPOEMs daily EBM content alerting service and two new content resources—EBM Guidelines, a collection of practice guidelines, evidence summaries, and images, and e-Essential Evidence, a reference for general practitioners, nurses, and physician assistants providing first-contact care.
=== Architecture and design ===
In October 2008, Wiley launched a new online service providing continuing education units (CEU) and professional development hour (PDH) credits to architects and designers. The initial courses are adapted from Wiley books, extending their reach into the digital space. Wiley is an accredited AIA continuing education provider.
=== Wiley Online Library ===
Wiley Online Library is a subscription-based library of John Wiley & Sons that launched on August 7, 2010, replacing Wiley Interscience. It is a collection of online resources covering life, health, and physical sciences as well as social science and the humanities. To its members, Wiley Online Library delivers access to over 4 million articles from 1,600 journals, more than 22,000 books, and hundreds of reference works, laboratory protocols, and databases from John Wiley & Sons and its imprints, including Wiley-Blackwell, Wiley-VCH, and Jossey-Bass. The online library is implemented on top of the Literatum platform, developed by Atypon which Wiley acquired in 2016.
== Corporate structure ==
=== Governance and operations ===
While the company is led by an independent management team and Board of Directors, the involvement of the Wiley family is ongoing, with sixth-generation members (and siblings) Peter Booth Wiley as the non-executive chairman of the board and Bradford Wiley II as a Director and past chairman of the board. Seventh-generation members Jesse and Nate Wiley work in the company's Professional/Trade and Scientific, Technical, Medical, and Scholarly businesses, respectively.
Wiley has been publicly owned since 1962, and listed on the New York Stock Exchange since 1995; its stock is traded under the symbols NYSE: WLY (for its Class A stock) and NYSE: WLYB (for its Class B stock).
Wiley's operations are organized into three business divisions:
Scientific, Technical, Medical, and Scholarly (STMS), also known as Wiley-Blackwell
Professional Development
Global Education
The company has approximately 10,000 employees worldwide, with headquarters in Hoboken, New Jersey, since 2002.
=== Corporate culture ===
In 2008, Wiley was named for the second consecutive year to Forbes magazine's annual list of the "400 Best Big Companies in America". In 2007, Book Business magazine cited Wiley as "One of the 20 Best Book Publishing Companies to Work For". For two consecutive years, 2006 and 2005, Fortune magazine named Wiley one of the "100 Best Companies to Work For". Wiley Canada was named to Canadian Business magazine's 2006 list of "Best Workplaces in Canada", and Wiley Australia has received the Australian government's "Employer of Choice for Women" citation every year since its inception in 2001. In 2004, Wiley was named to the U.S. Environmental Protection Agency's "Best Workplaces for Commuters" list. Working Mother magazine in 2003 listed Wiley as one of the "100 Best Companies for Working Mothers", and that same year, the company received the Enterprise Award from the New Jersey Business & Industry Association in recognition of its contribution to the state's economic growth. In 1998, Financial Times selected Wiley as one of the "most respected companies" with a "strong and well thought out strategy" in its global survey of CEOs.
In August 2009, the company announced a proposed reduction of Wiley-Blackwell staff in content management operations in the UK and Australia by approximately 60, in conjunction with an increase of staff in Asia. In March 2010, it announced a similar reorganization of its Wiley-Blackwell central marketing operations that would lay off approximately 40 employees. The company's position was that the primary goal of this restructuring was to increase workflow efficiency. In June 2012, it announced the proposed closing of its Edinburgh facility in June 2013 with the intention of relocating journal content management activities currently performed there to Oxford and Asia. The move would lay off approximately 50 employees.
Wiley is a signatory of the SDG Publishers Compact, and has taken steps to support the achievement of the Sustainable Development Goals (SDGs) in the publishing industry. These include becoming carbon neutral and supporting reforestation.
Wiley's Natural Resources Forum was one of six out of 100 journals to receive the highest possible "Five Wheel" impact rating from an SDG Impact Intensity journal rating system analyzing data from 2016 to 2020.
=== Gender pay gap ===
Wiley reported a mean 2017 gender pay gap of 21.1% for its UK workforce, while the median was 21.5%. The gender bonus gaps are far higher, at 50.7% for the median measure and 42.3% for the mean. Wiley said: "Our mean and median bonus gaps are driven by our highest earners, who are predominantly male."
== Controversies ==
=== Forced inclusion of authors' works in AI LLMs ===
In August 2024, it was reported that Wiley was projected to earn $44 million (£33 million) from partnerships with Artificial Intelligence (AI) firms that utilize authors' content to train Large Language Models (LLMs). Authors are not provided with an opt-out option for these deals.
=== Journal protests ===
In 2020, the entire editorial board of the European Law Journal resigned over a dispute about contract terms and the behavior of its publisher, Wiley. Wiley did not allow the editorial board members to decide on editorial appointments and decisions.
A majority of the editorial board of the journal Diversity & Distributions resigned in 2018 after Wiley allegedly blocked the publication of a letter protesting the publisher's decision to make the journal entirely open access.
=== Publication practices ===
According to Retraction Watch, Wiley makes some articles disappear from their journals without any explanation.
=== Manipulation of bibliometrics ===
According to Goodhart's law and concerned academics like the signatories of the San Francisco Declaration on Research Assessment, commercial academic publishers benefit from manipulation of bibliometrics and scientometrics like the journal impact factor, which is often used as proxy of prestige and can influence revenues, including public subsidies in the form of subscriptions and free work from academics.
Five Wiley journals, which exhibited unusual levels of self-citation, had their journal impact factor of 2019 suspended from Journal Citation Reports in 2020, a sanction which hit 34 journals in total.
=== Publication of "Paper Mill" generated papers ===
In April 2022, the journal Science revealed that a Ukrainian company, International Publisher Ltd., run by Ksenia Badziun, operates a Russian website where academics can purchase authorships in soon-to-be-published academic papers. Over a two-year period, researchers found that at least 419 articles "appeared to match manuscripts that later appeared in dozens of different journals" and that "more than 100 of these identified papers were published in 68 journals run by established publishers, including Elsevier, Oxford University Press, Springer Nature, Taylor & Francis, Wolters Kluwer, and Wiley-Blackwell." Wiley-Blackwell claimed that they were examining the specific papers that were identified and brought to their attention.
In 2024, Wiley closed down 19 of the about 250 journals it had acquired in the Hindawi deal, after retracting "more than 11,300 'compromised' studies over the past two years"; Wiley had earlier shuttered four journals for publishing fake articles coming from paper mills.
=== COI between climate research and fossil fuel industry ===
Wiley is a publisher of climate change research, but also publishes a journal dedicated to fossil fuel exploration. Climate scientists are concerned that this conflict of interest could undermine the credibility of climate science because they believe that fossil fuel extraction and climate action are incompatible.
== Copyright cases ==
=== Hindawi case ===
In 2021, Wiley purchased the open access publisher Hindawi. Shortly after, many articles published by Hindawi were retracted, and Scopus removed all of them from its database.
=== Photographer copyrights ===
A 2013 lawsuit brought by a stock photo agency for alleged violation of a 1997 license was dismissed for procedural reasons.
A 2014 ruling by the District Court for the Southern District of New York, later affirmed by the Second Circuit, says that Wiley infringed on the copyright of photographer Tom Bean by using his photos beyond the scope of the license it had purchased. The case was connected to a larger set of copyright infringement cases brought by photo agency DRK against various publishers.
A 2015 9th Circuit Court of Appeals opinion established that another photo agency had standing to sue Wiley for its usage of photos beyond the scope of the license acquired.
=== Used books ===
In 2018, a Southern District of New York court upheld the award of over $39 million to Wiley and other textbook publishers in a vast litigation against Book Dog Books, a re-seller of used books which was found to hold and distribute counterfeit copies. The Court found that circumstantial evidence was sufficient to establish distribution of 116 titles for which counterfeit copies had been presented and of other 5 titles. It also found that unchallenged testimony on how the publishers usually acquired licenses from authors was sufficient to establish the publishers' copyright on the books in question.
=== Kirtsaeng v. John Wiley & Sons ===
In 2008, John Wiley & Sons filed suit against Thailand native Supap Kirtsaeng over the sale of textbooks made outside of the United States and then imported into the country. In 2013, the U.S. Supreme Court held 6–3 that the first-sale doctrine applied to copies of copyrighted works made and sold abroad at lower prices, reversing the Second Circuit decision which had favored Wiley.
=== Internet Archive lawsuit ===
In June 2020, Wiley was one of a group of publishers who sued the Internet Archive, arguing that its collection of e-books was denying authors and publishers revenue and accusing the library of "willful mass copyright infringement".
== Antitrust cases ==
In September 2024, Lucina Uddin, a neuroscience professor at UCLA, sued John Wiley & Sons along with five other academic journal publishers in a proposed class-action lawsuit, alleging that the publishers violated antitrust law by agreeing not to compete against each other for manuscripts and by denying scholars payment for peer review services.
== References ==
== Further reading ==
The First One Hundred and Fifty Years: A History of John Wiley and Sons Incorporated 1807–1957. New York: John Wiley & Sons. 1957.
Moore, John Hammond (1982). Wiley: One Hundred and Seventy Five Years of Publishing. New York: John Wiley & Sons. ISBN 978-0-471-86082-2.
Munroe, Mary H. (2004). "John Wiley Timeline". The Academic Publishing Industry: A Story of Merger and Acquisition. Archived from the original on October 20, 2014 – via Northern Illinois University.
Wiley, Peter Booth; Chaves, Frances; Grolier Club (2010). John Wiley & Sons: 200 years of publishing (PDF). Hoboken, NJ: John Wiley & Sons.
Wright, Robert E.; Jacobson, Timothy C.; Smith, George David (2007). Knowledge for Generations: Wiley and the Global Publishing Industry, 1807–2007. Hoboken, New Jersey: John Wiley & Sons. ISBN 978-0-471-75721-4.
== External links ==
Official website
In mathematics, a function space is a set of functions between two fixed sets. Often, the domain and/or codomain will have additional structure which is inherited by the function space. For example, the set of functions from any set X into a vector space has a natural vector space structure given by pointwise addition and scalar multiplication. In other scenarios, the function space might inherit a topological or metric structure, hence the name function space.
== In linear algebra ==
Let F be a field and let X be any set. The functions X → F can be given the structure of a vector space over F where the operations are defined pointwise, that is, for any f, g : X → F, any x in X, and any c in F, define
(f + g)(x) = f(x) + g(x)
(c · f)(x) = c · f(x)
When the domain X has additional structure, one might consider instead the subset (or subspace) of all such functions which respect that structure. For example, if V and also X itself are vector spaces over F, the set of linear maps X → V form a vector space over F with pointwise operations (often denoted Hom(X,V)). One such space is the dual space of X: the set of linear functionals X → F with addition and scalar multiplication defined pointwise.
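The pointwise operations translate directly into code; a minimal sketch over real-valued functions, with invented example functions.

def add(f, g):
    # Pointwise sum of two functions.
    return lambda x: f(x) + g(x)

def scale(c, f):
    # Pointwise scalar multiple of a function.
    return lambda x: c * f(x)

f = lambda x: x * x
g = lambda x: 2 * x + 1
h = add(scale(3, f), g)     # h(x) = 3x^2 + 2x + 1
print(h(2))                 # 17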
The cardinal dimension of a function space with no extra structure can be found by the Erdős–Kaplansky theorem.
== Examples ==
Function spaces appear in various areas of mathematics:
In set theory, the set of functions from X to Y may be denoted {X → Y} or Y^X.
As a special case, the power set of a set X may be identified with the set of all functions from X to {0, 1}, denoted 2^X.
The set of bijections from X to Y is denoted X ↔ Y. The factorial notation X! may be used for permutations of a single set X.
In functional analysis, the same is seen for continuous linear transformations, including topologies on the vector spaces in the above, and many of the major examples are function spaces carrying a topology; the best known examples include Hilbert spaces and Banach spaces.
In functional analysis, the set of all functions from the natural numbers to some set X is called a sequence space. It consists of the set of all possible sequences of elements of X.
In topology, one may attempt to put a topology on the space of continuous functions from a topological space X to another one Y, with utility depending on the nature of the spaces. A commonly used example is the compact-open topology, e.g. loop space. Also available is the product topology on the space of set-theoretic functions (i.e. not necessarily continuous functions) Y^X. In this context, this topology is also referred to as the topology of pointwise convergence.
In algebraic topology, the study of homotopy theory is essentially that of discrete invariants of function spaces;
In the theory of stochastic processes, the basic technical problem is how to construct a probability measure on a function space of paths of the process (functions of time);
In category theory, the function space is called an exponential object or map object. It appears in one way as the representation canonical bifunctor; but as a (single) functor, of type [X, −], it appears as an adjoint functor to a functor of type − × X on objects;
In functional programming and lambda calculus, function types are used to express the idea of higher-order functions
In programming more generally, many higher-order function concepts occur with or without explicit typing, such as closures.
In domain theory, the basic idea is to find constructions from partial orders that can model lambda calculus, by creating a well-behaved Cartesian closed category.
In the representation theory of finite groups, given two finite-dimensional representations V and W of a group G, one can form a representation of G over the vector space of linear maps Hom(V,W) called the Hom representation.
== Functional analysis ==
Functional analysis is organized around adequate techniques to bring function spaces as topological vector spaces within reach of the ideas that would apply to normed spaces of finite dimension. Here we use the real line as an example domain, but the spaces below exist on suitable open subsets Ω ⊆ ℝⁿ:
C(ℝ): continuous functions endowed with the uniform norm topology
C_c(ℝ): continuous functions with compact support
B(ℝ): bounded functions
C_0(ℝ): continuous functions which vanish at infinity
C^r(ℝ): continuous functions that have r continuous derivatives
C^∞(ℝ): smooth functions
C_c^∞(ℝ): smooth functions with compact support (i.e. the set of bump functions)
C^ω(ℝ): real analytic functions
L^p(ℝ), for 1 ≤ p ≤ ∞: the Lp space of measurable functions whose p-norm ‖f‖_p = (∫_ℝ |f|^p)^{1/p} is finite
S(ℝ): the Schwartz space of rapidly decreasing smooth functions, and its continuous dual S′(ℝ), the tempered distributions
D(ℝ): compact support in limit topology
W^{k,p}: the Sobolev space of functions whose weak derivatives up to order k are in L^p
O_U: holomorphic functions
linear functions
piecewise linear functions
continuous functions, compact open topology
all functions, space of pointwise convergence
Hardy space
Hölder space
Càdlàg functions, also known as the Skorokhod space
Lip_0(ℝ): the space of all Lipschitz functions on ℝ that vanish at zero
== Uniform norm ==
If y is an element of the function space C(a, b) of all continuous functions that are defined on a closed interval [a, b], the norm ‖y‖_∞ defined on C(a, b) is the maximum absolute value of y(x) for a ≤ x ≤ b,

\|y\|_{\infty} \equiv \max_{a\leq x\leq b} |y(x)| \qquad \text{where}\ \ y \in C(a,b)

and is called the uniform norm or supremum norm ('sup norm').
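Numerically, the uniform norm of a continuous function on [a, b] can be approximated by dense sampling; a minimal sketch follows (the sample count is arbitrary, and the value is exact only in the limit of fine sampling).

import math

def sup_norm(y, a, b, n=10001):
    # Approximate max |y(x)| over [a, b] by evaluating on a uniform grid.
    return max(abs(y(a + (b - a) * k / (n - 1))) for k in range(n))

print(sup_norm(math.sin, 0.0, math.pi))          # ~1.0
print(sup_norm(lambda x: x * x - x, 0.0, 1.0))   # ~0.25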
== Bibliography ==
Kolmogorov, A. N., & Fomin, S. V. (1967). Elements of the theory of functions and functional analysis. Courier Dover Publications.
Stein, Elias; Shakarchi, R. (2011). Functional Analysis: An Introduction to Further Topics in Analysis. Princeton University Press.
== See also ==
List of mathematical functions
Clifford algebra
Tensor field
Spectral theory
Functional determinant
== References ==
Rod calculus or rod calculation was the mechanical method of algorithmic computation with counting rods in China from the Warring States period to the Ming dynasty, before the counting rods were increasingly replaced by the more convenient and faster abacus. Rod calculus played a key role in the development of Chinese mathematics to its height in the Song and Yuan dynasties, culminating in the invention of polynomial equations of up to four unknowns in the work of Zhu Shijie.
== Hardware ==
The basic equipment for carrying out rod calculus is a bundle of counting rods and a counting board. The counting rods are usually made of bamboo sticks, about 12 cm to 15 cm in length and 2 mm to 4 mm in diameter, and sometimes of animal bone, or of ivory and jade (for well-heeled merchants). A counting board could be a table top, a wooden board with or without a grid, the floor, or sand.
In 1971 Chinese archaeologists unearthed a bundle of well-preserved animal bone counting rods stored in a silk pouch from a tomb in Qianyang County in Shaanxi Province, dating back to the first half of the Han dynasty (206 BC – 8 AD). In 1975 a bundle of bamboo counting rods was unearthed.
The use of counting rods for rod calculus flourished in the Warring States period, although no archaeological artefacts have been found earlier than the Western Han dynasty (the first half of the Han dynasty); however, archaeologists did unearth software artefacts of rod calculus dating back to the Warring States. Since the rod calculus software must have gone along with rod calculus hardware, there is no doubt that rod calculus was already flourishing during the Warring States more than 2,200 years ago.
== Software ==
The key software required for rod calculus was a simple 45-phrase positional decimal multiplication table used in China since antiquity, called the nine-nine table, which was learned by heart by pupils, merchants, government officials, and mathematicians alike.
== Rod numerals ==
=== Displaying numbers ===
Rod numerals are the only numeral system that uses different placement combinations of a single symbol to convey any number or fraction in the decimal system. For numbers in the units place, every vertical rod represents 1. Two vertical rods represent 2, and so on, until 5 vertical rods, which represent 5. For numbers from 6 to 9, a biquinary system is used, in which a horizontal bar on top of the vertical bars represents 5. The first row shows the numbers 1 to 9 in rod numerals, and the second row the same numbers in horizontal form.
For numbers larger than 9, a decimal system is used. Rods placed one place to the left of the units place represent 10 times that number. For the hundreds place, another set of rods is placed to the left representing 100 times that number, and so on. As shown in the adjacent image, the number 231 is represented in rod numerals in the top row, with one rod in the units place representing 1, three rods in the tens place representing 30, and two rods in the hundreds place representing 200, with a sum of 231.
When calculations were performed, there was usually no grid on the surface. If the rod numerals for two, three, and one are placed consecutively in vertical form, the combination may be mistaken for 51 or 24, as shown in the second and third row of the adjacent image. To avoid confusion, numbers in consecutive places are placed in alternating vertical and horizontal forms, with the units place in vertical form, as shown in the bottom row on the right.
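These placement rules are mechanical enough to state as code. A minimal sketch follows, assuming the Unicode Counting Rod Numerals block (U+1D360–U+1D371), in which the first nine code points are the unit (vertical) digit forms and the next nine the tens (horizontal) forms; zero is rendered as a blank space.

UNITS = [chr(0x1D360 + d) for d in range(9)]   # vertical forms, digits 1-9
TENS = [chr(0x1D369 + d) for d in range(9)]    # horizontal forms, digits 1-9

def to_rods(n):
    # Write a positive integer in rod numerals, alternating vertical
    # (units, hundreds, ...) and horizontal (tens, thousands, ...) forms.
    out = []
    for i, ch in enumerate(reversed(str(n))):  # i = 0 is the units place
        d = int(ch)
        if d == 0:
            out.append(' ')                    # zero: an empty position
        else:
            out.append(UNITS[d - 1] if i % 2 == 0 else TENS[d - 1])
    return ''.join(reversed(out))

print(to_rods(231), to_rods(107), to_rods(17))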
=== Displaying zeroes ===
In rod numerals, zeroes are represented by a space, which serves both as a number and a place-holder value. Unlike in Hindu-Arabic numerals, there is no specific symbol to represent zero. Before the introduction of a written zero, in addition to a space to indicate no units, the character in the subsequent unit column would be rotated by 90° to reduce the ambiguity of a single zero. For example, 107 (𝍠 𝍧) and 17 (𝍩𝍧) would be distinguished by rotation, in addition to the space, though multiple zero units could lead to ambiguity, e.g. 1007 (𝍩 𝍧) and 10007 (𝍠 𝍧). In the adjacent image, the number zero is merely represented with a space.
=== Negative and positive numbers ===
Song dynasty mathematicians used red rods to represent positive numbers and black rods for negative numbers. Another way is to add a diagonal slash to the last place to show that the number is negative.
=== Decimal fraction ===
The Mathematical Treatise of Sunzi used decimal fraction metrology. The unit of length was 1 chi,
1 chi = 10 cun, 1 cun = 10 fen, 1 fen = 10 li, 1 li = 10 hao, 1 hao = 10 shi, 1 shi = 10 hu.
1 chi 2 cun 3 fen 4 li 5 hao 6 shi 7 hu is laid out on counting board as
where the leading digit stands for the unit of measurement, chi.
Southern Song dynasty mathematician Qin Jiushao extended the use of decimal fractions beyond metrology. In his book Mathematical Treatise in Nine Sections, he formally expressed 1.1446154 days as
日
He marked the unit with a word “日” (day) underneath it.
== Addition ==
Rod calculus works on the principle of addition. Unlike with Arabic numerals, digits represented by counting rods have additive properties. The process of addition involves mechanically moving the rods without the need to memorise an addition table. This is the biggest difference from Arabic numerals, as one cannot mechanically put 1 and 2 together to form 3, or 2 and 3 together to form 5.
The adjacent image presents the steps in adding 3748 to 289:
Place the augend 3748 in the first row, and the addend 289 in the second.
Calculate from left to right, starting with the 2 of 289.
Take away the two rods at the bottom and add them to the 7 on top to make 9.
Move 2 rods from the 4 on top down to the 8 below to complete a ten; carry it forward to the 9, which becomes zero and carries on to the 3 to make 4. Remove the 8 from the bottom row.
Move one rod from the 8 on the top row to the 9 on the bottom to form a ten, which is carried to the next rank, adding one rod to the 2 rods on the top row to make 3 rods; the top row is left with 7 in the units place.
Result 3748+289=4037
The rods in the augend change throughout the addition, while the rods in the addend at the bottom "disappear".
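The mechanical, table-free nature of this process can be illustrated in modern code. Below is a minimal Python sketch, not a historical reconstruction: digits are held most significant first, like rods on a board, and each addend digit is merged left to right with carries moved immediately leftward. All names are illustrative.

```python
def rod_add(augend, addend):
    """Add two digit lists (most significant digit first), left to right."""
    width = max(len(augend), len(addend)) + 1       # room for a final carry
    board = [0] * (width - len(augend)) + list(augend)
    other = [0] * (width - len(addend)) + list(addend)
    for i in range(width):                          # highest place first
        board[i] += other[i]
        j = i
        while board[j] >= 10:                       # move a carry leftward
            board[j] -= 10
            board[j - 1] += 1
            j -= 1
    if board[0] == 0:
        board = board[1:]                           # drop the unused leading place
    return board

print(rod_add([3, 7, 4, 8], [2, 8, 9]))             # -> [4, 0, 3, 7], i.e. 4037
```

Working from the highest place mirrors the rod procedure above; a pencil-and-paper addition would instead start from the units.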
== Subtraction ==
=== Without borrowing ===
In situations in which no borrowing is needed, one only needs to take the number of rods in the subtrahend from the minuend. The result of the calculation is the difference. The adjacent image shows the steps in subtracting 23 from 54.
=== Borrowing ===
In situations in which borrowing is needed, such as 4231 − 789, one needs to use a more complicated procedure. The steps for this example are shown on the left.
Place the minuend 4231 on top, the subtrahend 789 on the bottom. Calculate from the left to the right.
Borrow 1 from the thousands place to make a ten in the hundreds place; this 10 minus the 7 in the row below gives a difference of 3, which is added to the 2 on top to form 5. The 7 on the bottom is subtracted, shown by the space.
Borrow 1 from the hundreds place, which leaves 4. The resulting 10 in the tens place minus the 8 below gives 2, which is added to the 3 above to form 5. The top row is now 3451, the bottom 9.
Borrow 1 from the 5 in the tens place on top, which leaves 4. The borrowed 1 is a 10 in the units place; subtracting 9 gives 1, which is added to the top to form 2. With all rods in the bottom row subtracted, the 3442 in the top row is the result of the calculation.
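A hedged sketch of the same calculation in Python follows. For simplicity it subtracts all places first and then resolves the borrows in a right-to-left pass, rather than borrowing in place as the board procedure does; the names are illustrative.

```python
def rod_subtract(minuend, subtrahend):
    """Subtract digit lists (most significant digit first) with borrowing."""
    width = max(len(minuend), len(subtrahend))
    top = [0] * (width - len(minuend)) + list(minuend)
    bottom = [0] * (width - len(subtrahend)) + list(subtrahend)
    for i in range(width):              # take the bottom rods from the top
        top[i] -= bottom[i]
    for i in range(width - 1, 0, -1):   # resolve each borrow from the next rank
        if top[i] < 0:
            top[i] += 10
            top[i - 1] -= 1
    while len(top) > 1 and top[0] == 0:
        top.pop(0)                      # drop empty leading places
    return top

print(rod_subtract([4, 2, 3, 1], [7, 8, 9]))   # -> [3, 4, 4, 2], i.e. 3442
```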
== Multiplication ==
Sunzi Suanjing described in detail the algorithm of multiplication. On the left are the steps to calculate 38×76:
Place the multiplicand on top, the multiplier on bottom. Line up the units place of the multiplier with the highest place of the multiplicand. Leave room in the middle for recording.
Start calculating from the highest place of the multiplicand (in the example, calculate 30×76 first, and then 8×76). Using the multiplication table, 3 times 7 is 21. Place 21 in rods in the middle, with the 1 aligned with the tens place of the multiplier (on top of the 7). Then, 3 times 6 is 18; place 18 as shown in the image. With the 3 in the multiplicand completely multiplied, take its rods off.
Move the multiplier one place to the right. Change 7 to horizontal form, 6 to vertical.
8×7 = 56, place 56 in the second row in the middle, with the units place aligned with the digits multiplied in the multiplier. Take 7 out of the multiplier since it has been multiplied.
8×6 = 48, 4 added to the 6 of the last step makes 10, carry 1 over. Take off 8 of the units place in the multiplicand, and take off 6 in the units place of the multiplier.
Sum the 2280 and 608 in the middle, which results in 2888: the product.
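In modern terms, the procedure accumulates one partial product per digit of the multiplicand, highest place first. A short Python sketch of that idea (using integers rather than a rod board; names are illustrative):

```python
def sunzi_multiply(multiplicand, multiplier):
    """Multiply two digit lists, one multiplicand digit at a time,
    highest place first, accumulating the partial products."""
    total = 0
    for k, digit in enumerate(multiplicand):
        shift = len(multiplicand) - 1 - k          # place value of this digit
        for j, d in enumerate(multiplier):
            mshift = len(multiplier) - 1 - j
            # each single-digit product is a nine-nine table lookup
            total += digit * d * 10 ** (shift + mshift)
    return total

print(sunzi_multiply([3, 8], [7, 6]))              # -> 2888
```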
== Division ==
The animation on the left shows the steps for calculating 309 ÷ 7 = 44 + 1/7.
Place the dividend, 309, in the middle row and the divisor, 7, in the bottom row. Leave space for the top row.
Move the divisor, 7, one place to the left, changing it to horizontal form.
Using the Chinese multiplication table and division, 30÷7 equals 4 remainder 2. Place the quotient, 4, in the top row and the remainder, 2, in the middle row.
Move the divisor one place to the right, changing it to vertical form. 29÷7 equals 4 remainder 1. Place the quotient, 4, on top, leaving the divisor in place. Place the remainder in the middle row in place of the dividend in this step. The result is a quotient of 44 with a remainder of 1.
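The same digit-at-a-time logic can be sketched in Python. This is a modern paraphrase, not the board layout itself: at each step the quotient digit goes "on top" and the remainder replaces the working dividend "in the middle".

```python
def sunzi_divide(dividend, divisor):
    """Divide digit by digit, highest place first, tracking the remainder."""
    quotient_digits = []
    remainder = 0
    for digit in map(int, str(dividend)):       # highest place first
        remainder = remainder * 10 + digit
        q = remainder // divisor                 # quotient digit placed on top
        quotient_digits.append(q)
        remainder -= q * divisor                 # what stays in the middle row
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(sunzi_divide(309, 7))                      # -> (44, 1), i.e. 44 + 1/7
```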
The Sunzi algorithm for division was transmitted in toto by al-Khwarizmi to the Islamic world from Indian sources in 825 AD. Al-Khwarizmi's book was translated into Latin in the 13th century, and the Sunzi division algorithm later evolved into galley division in Europe. The division algorithm in Abu'l-Hasan al-Uqlidisi's 925 AD book Kitab al-Fusul fi al-Hisab al-Hindi and in the 11th-century Kushyar ibn Labban's Principles of Hindu Reckoning was identical to Sunzi's division algorithm.
== Fractions ==
If there is a remainder in a place value decimal rod calculus division, both the remainder and the divisor must be left in place, one on top of the other. In Liu Hui's notes to Jiuzhang suanshu (2nd century BCE), the number on top is called "shi" (实), while the one at the bottom is called "fa" (法). In Sunzi Suanjing, the number on top is called "zi" (子) or "fenzi" (lit., son of the fraction), and the one on the bottom is called "mu" (母) or "fenmu" (lit., mother of the fraction). Fenzi and fenmu are also the modern Chinese names for numerator and denominator, respectively. As shown on the right, the remainder 1 is the numerator and the divisor 7 the denominator, forming the fraction 1/7. The quotient of the division 309 ÷ 7 is thus 44 + 1/7.
Liu Hui made extensive use of calculations with fractions in Haidao Suanjing.
This form of fraction, with the numerator on top and the denominator at the bottom without a horizontal bar in between, was transmitted to the Arab world via India in an 825 AD book by al-Khwarizmi, and was used by the 10th-century Abu'l-Hasan al-Uqlidisi and in the 15th-century Jamshīd al-Kāshī's work "Arithmetic Key".
=== Addition ===
1/3 + 2/5
Put the two numerators 1 and 2 on the left side of the counting board, and the two denominators 3 and 5 on the right-hand side.
Cross multiply 1 with 5 and 2 with 3 to get 5 and 6; replace the numerators with the corresponding cross products.
Multiply the two denominators: 3 × 5 = 15; put it at the bottom right.
Add the two numerators: 5 + 6 = 11; put it at the top right of the counting board.
Result: 1/3 + 2/5 = 11/15
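The cross-multiplication steps translate directly into code. A minimal sketch (illustrative names; no reduction to lowest terms, which the board procedure above also omits):

```python
def rod_fraction_add(n1, d1, n2, d2):
    """Add n1/d1 + n2/d2 by cross multiplication, as on the counting board."""
    left = n1 * d2              # cross product replacing the first numerator
    right = n2 * d1             # cross product replacing the second numerator
    denominator = d1 * d2       # product of the denominators, bottom right
    numerator = left + right    # sum of the new numerators, top right
    return numerator, denominator

print(rod_fraction_add(1, 3, 2, 5))   # -> (11, 15), i.e. 11/15
```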
=== Subtraction ===
8/9 − 1/5
Put down the rod numerals for the numerators 1 and 8 on the left-hand side of the counting board.
Put down the rods for the denominators 5 and 9 on the right-hand side.
Cross multiply: 1 × 9 = 9 and 5 × 8 = 40; replace the corresponding numerators.
Multiply the denominators: 5 × 9 = 45; put 45 at the bottom right of the counting board, replacing the denominator 5.
Subtract: 40 − 9 = 31; put it at the top right.
Result: 8/9 − 1/5 = 31/45
=== Multiplication ===
3 1/3 × 5 2/5
Arrange the counting rods for 3 1/3 and 5 2/5 on the counting board in the shang, shi, fa tabulation format.
shang times fa, added to shi: 3 × 3 + 1 = 10; 5 × 5 + 2 = 27
shi multiplied by shi: 10 × 27 = 270
fa multiplied by fa: 3 × 5 = 15
shi divided by fa: 270 ÷ 15 = 18, hence 3 1/3 × 5 2/5 = 18
=== Highest common factor and fraction reduction ===
The algorithm for finding the highest common factor of two numbers and the reduction of fractions was laid out in Jiuzhang suanshu. The highest common factor is found by successive division with remainders until the last two remainders are identical.
The animation on the right illustrates the algorithm for finding the highest common factor of the numerator and denominator of 32,450,625/59,056,400 and the reduction of the fraction.
In this case the hcf is 25.
Divide the numerator and denominator by 25. The reduced fraction is 1,298,025/2,362,256.
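Successive division with remainders is essentially the Euclidean algorithm, so the reduction can be checked with a few lines of Python (a sketch, not the rod layout):

```python
def hcf(a, b):
    """Highest common factor by successive division with remainders."""
    while b:
        a, b = b, a % b
    return a

num, den = 32450625, 59056400
g = hcf(num, den)
print(g, num // g, den // g)   # -> 25 1298025 2362256
```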
=== Interpolation ===
Calendarist and mathematician He Chengtian (何承天) used a fraction interpolation method, called "harmonisation of the divisor of the day" (调日法), to obtain a better approximate value than the old one by iteratively adding the numerators and denominators of a "weaker" fraction and a "stronger" fraction. Zu Chongzhi's legendary π = 355/113 could be obtained with He Chengtian's method.
== System of linear equations ==
Chapter Eight Rectangular Arrays of Jiuzhang suanshu provided an algorithm for solving System of linear equations by method of elimination:
Problem 8-1: Suppose we have 3 bundles of top-quality cereal, 2 bundles of medium-quality cereal, and 1 bundle of low-quality cereal, with a combined weight of 39 dou. We also have 2, 3 and 1 bundles of the respective cereals amounting to 34 dou, and 1, 2 and 3 bundles of the respective cereals totaling 26 dou.
Find the quantity of top-, medium-, and low-quality cereals.
In algebra, this problem can be expressed as a system of three equations with three unknowns:
{\displaystyle {\begin{cases}3x+2y+z=39\\2x+3y+z=34\\x+2y+3z=26\end{cases}}}
This problem was solved in Jiuzhang suanshu with counting rods laid out on a counting board in a tabular format similar to a 3×4 matrix:
Algorithm:
Multiply the center column by the top-quality number of the right column.
Repeatedly subtract the right column from the center column, until the top number of the center column is 0.
Multiply the left column by the value of the top row of the right column.
Repeatedly subtract the right column from the left column, until the top number of the left column is 0.
After applying the above elimination algorithm to the reduced center column and left column, the matrix is reduced to triangular shape.
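In modern notation the procedure is forward elimination followed by back substitution. The sketch below uses exact fractions and treats each counting-board column as a row; it is a modern paraphrase of the fangcheng rule, not a reconstruction of the board moves.

```python
from fractions import Fraction

# Each row is one condition [top, medium, low, total dou].
eqs = [[3, 2, 1, 39],
       [2, 3, 1, 34],
       [1, 2, 3, 26]]

# Forward elimination: cancel the leading entry of each later row.
for i in range(3):
    for j in range(i + 1, 3):
        factor = Fraction(eqs[j][i], eqs[i][i])
        eqs[j] = [a - factor * b for a, b in zip(eqs[j], eqs[i])]

# Back substitution on the triangular array.
z = Fraction(eqs[2][3], eqs[2][2])
y = (eqs[1][3] - eqs[1][2] * z) / eqs[1][1]
x = (eqs[0][3] - eqs[0][1] * y - eqs[0][2] * z) / eqs[0][0]
print(x, y, z)   # -> 37/4 17/4 11/4, i.e. 9 1/4, 4 1/4 and 2 3/4 dou
```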
The amount of one bundle of low quality cereal {\displaystyle ={\frac {99}{36}}=2{\frac {3}{4}}}
From which the amount of one bundle of top and medium quality cereals can be found easily:
One bundle of top quality cereals = 9 {\displaystyle {\frac {1}{4}}} dou
One bundle of medium quality cereal = 4 {\displaystyle {\frac {1}{4}}} dou
== Extraction of square root ==
The algorithm for the extraction of square roots was described in Jiuzhang suanshu and, with minor differences in terminology, in Sunzi Suanjing.
The animation shows the rod calculus algorithm for extracting an approximation of the square root
{\displaystyle {\sqrt {234567}}\approx 484{\tfrac {311}{968}}}
from the algorithm in chapter 2, problem 19 of Sunzi Suanjing:
Now there is a square area 234567, find one side of the square.
The algorithm is as follows:
Set up 234567 on the counting board, in the second row from the top, named shi.
Set up a marker 1 at the 10000 position in the 4th row, named xia fa.
Estimate the first digit of the square root to be the counting rod numeral 4, and put it on the top row (shang) in the hundreds position.
Multiply the shang 4 by the xia fa 1, and put the product 4 in the 3rd row, named fang fa.
Multiply the shang by the fang fa and deduct the product 4 × 4 = 16 from the shi: 23 − 16 = 7, so numeral 7 remains.
Double the fang fa 4 to become 8, shift it one position right, and change the vertical 8 into a horizontal 8 after the move.
Move the xia fa two positions right.
Estimate the second digit of the shang as 8: put numeral 8 in the tens position of the top row.
Multiply the xia fa by the new digit of the shang, and add the result to the fang fa.
8 times 8 is 64; subtract 64 from the top row numeral "74", leaving one rod at the most significant digit.
Double the last digit of the fang fa, 8, and add it to the 80 to make 96.
Move the fang fa 96 one position right, changing convention; move the xia fa "1" two positions right.
Estimate the 3rd digit of the shang to be 4.
Multiply the new digit of the shang, 4, by the xia fa 1, and combine it with the fang fa to make 964.
Subtract successively 4 × 9 = 36, 4 × 6 = 24, 4 × 4 = 16 from the shi, leaving 311.
Double the last digit of the fang fa, 4, into 8 and merge it with the fang fa, giving 968.
Result:
{\displaystyle {\sqrt {234567}}\approx 484{\tfrac {311}{968}}}
North Song dynasty mathematician Jia Xian developed an additive multiplicative algorithm for square root extraction, in which he replaced the traditional doubling of the fang fa by adding the shang digit to the fang fa digit, with the same effect.
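The digit-by-digit procedure can be condensed into a short Python sketch. It follows the same plan as above: consume two decimal ranks of the radicand per root digit, and choose each digit d so that (20·root + d)·d fits into the running remainder. This is a modern paraphrase, not the rod layout.

```python
def rod_sqrt(n):
    """Digit-by-digit square root with remainder: n == root**2 + remainder."""
    digits = str(n)
    if len(digits) % 2:
        digits = "0" + digits
    root, remainder = 0, 0
    for i in range(0, len(digits), 2):           # two ranks per root digit
        remainder = remainder * 100 + int(digits[i:i + 2])
        d = 9
        while (20 * root + d) * d > remainder:   # largest digit that fits
            d -= 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
    return root, remainder

r, rem = rod_sqrt(234567)
print(r, rem, 2 * r)   # -> 484 311 968, matching 484 + 311/968
```

The factor 20·root plays the role of the doubled fang fa, and the denominator 968 of the fractional part is the final fang fa, as in the result above.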
== Extraction of cubic root ==
Jiuzhang suanshu vol. iv, "Shaoguang", provided an algorithm for the extraction of cube roots.
〔一九〕今有積一百八十六萬八百六十七尺。問為立方幾何?答曰:一百二十三尺。
Problem 19: We have a volume of 1,860,867 cubic chi; what is the length of a side? Answer: 123 chi.
{\displaystyle {\sqrt[{3}]{1860867}}=123}
North Song dynasty mathematician Jia Xian invented a method similar to a simplified form of the Horner scheme for the extraction of cube roots.
The animation at right shows Jia Xian's algorithm for solving problem 19 in Jiuzhang suanshu vol 4.
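A cube root can be sketched the same way, consuming three decimal ranks per digit; each digit d is the largest with (10·root + d)³ − (10·root)³ fitting the remainder. This is a modern paraphrase of the shaoguang procedure, not Jia Xian's board layout.

```python
def rod_cbrt(n):
    """Digit-by-digit cube root with remainder: n == root**3 + remainder."""
    digits = str(n)
    digits = "0" * (-len(digits) % 3) + digits   # pad to whole groups of three
    root, remainder = 0, 0
    for i in range(0, len(digits), 3):           # three ranks per root digit
        remainder = remainder * 1000 + int(digits[i:i + 3])
        d = 9
        while (10 * root + d) ** 3 - (10 * root) ** 3 > remainder:
            d -= 1
        remainder -= (10 * root + d) ** 3 - (10 * root) ** 3
        root = 10 * root + d
    return root, remainder

print(rod_cbrt(1860867))   # -> (123, 0)
```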
== Polynomial equation ==
North Song dynasty mathematician Jia Xian invented the Horner scheme for solving simple 4th-order equations of the form
{\displaystyle x^{4}=a}
South Song dynasty mathematician Qin Jiushao improved Jia Xian's Horner method to solve polynomial equations up to the 10th order.
The following is the algorithm for solving
{\displaystyle -x^{4}+15245x^{2}-6262506.25=0}
in his Mathematical Treatise in Nine Sections vol. 6, problem 2.
This equation was arranged bottom up with counting rods on counting board in tabular form
Algorithm:
Arrange the coefficients in tabular form: the constant at shi, the coefficient of x² at shang lian, and the coefficient of {\displaystyle x^{4}} at yi yu; align the numbers at the unit rank.
Advance shang lian two ranks
Advance yi yu three ranks
Estimate shang = 20
Let xia lian = shang × yi yu
Let fu lian = shang × yi yu
Merge fu lian with shang lian
Let fang = shang × shang lian
Subtract shang × fang from shi
Add shang × yi yu to xia lian
Retract xia lian 3 ranks, retract yi yu 4 ranks
The second digit of shang is 0
Merge shang lian into fang
Merge yi yu into xia lian
Add yi yu to fu lian, subtract the result from fang, and let the result be the denominator
Find the highest common factor, 25, and simplify the fraction
{\displaystyle {\frac {32450625}{59056400}}}
Solution:
{\displaystyle x=20{\frac {1298025}{2362256}}}
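The heart of the procedure is a Horner-style shift of the polynomial by the current estimate of the root. The sketch below reproduces the numbers of this example in modern form; the closing fractional step follows the rule described above (the remaining shi over the sum of the shifted non-constant coefficients), which is Qin Jiushao's approximation rather than an exact root.

```python
def shift_poly(coeffs, a):
    """Horner (synthetic division) shift: coefficients of p(x + a),
    highest degree first."""
    c = list(coeffs)
    n = len(c)
    for i in range(1, n):                # repeated synthetic division by (x - a)
        for j in range(1, n - i + 1):
            c[j] += a * c[j - 1]
    return c

# -x^4 + 15245 x^2 - 6262506.25 = 0, with first estimate shang = 20
p = [-1, 0, 15245, 0, -6262506.25]
q = shift_poly(p, 20)
print(q)                 # -> [-1, -80, 12845, 577800, -324506.25]

# Closing step: remainder over the sum of the non-constant coefficients.
t = -q[-1] / sum(q[:-1])
print(20 + t)            # -> about 20.5495, i.e. 20 + 32450625/59056400
```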
== Tian Yuan shu ==
Yuan dynasty mathematician Li Zhi developed rod calculus into tian yuan shu.
Example: Li Zhi, Ceyuan haijing vol. II, problem 14, an equation of one unknown:
{\displaystyle -x^{2}-680x+96000=0}
元
== Polynomial equations of four unknowns ==
Mathematician Zhu Shijie further developed rod calculus to include polynomial equations of two to four unknowns.
For example, polynomials of three unknowns:
Equation 1:
{\displaystyle -y-z-y^{2}*x-x+xyz=0}
太
Equation 2:
{\displaystyle -y-z+x-x^{2}+xz=0}
Equation 3:
{\displaystyle y^{2}-z^{2}+x^{2}=0}
太
After successive elimination of two unknowns, the polynomial equations of three unknowns were reduced to a polynomial equation of one unknown:
{\displaystyle x^{4}-6x^{3}+4x^{2}+6x-5=0}
Solving gives x = 5. This ignores the three other roots, two of which are repeated.
== See also ==
Chinese mathematics
Counting rods
== References ==
Lam Lay Yong (蓝丽蓉), Ang Tian Se (洪天赐), Fleeting Footsteps, World Scientific, ISBN 981-02-3696-4
Jean Claude Martzloff, A History of Chinese Mathematics ISBN 978-3-540-33782-9 | Wikipedia/Rod_calculus |
The term "thermal energy" is often used ambiguously in physics and engineering. It can denote several different physical concepts, including:
Internal energy: The energy contained within a body of matter or radiation, excluding the potential energy of the whole system.
Heat: Energy in transfer between a system and its surroundings by mechanisms other than thermodynamic work and transfer of matter.
The characteristic energy kBT associated with a single microscopic degree of freedom, where T denotes temperature and kB denotes the Boltzmann constant.
Mark Zemansky (1970) has argued that the term "thermal energy" is best avoided due to its ambiguity. He suggests using more precise terms such as "internal energy" and "heat" to avoid confusion. The term is, however, used in some textbooks.
== Relation between heat and internal energy ==
In thermodynamics, heat is energy in transfer to or from a thermodynamic system by mechanisms other than thermodynamic work or transfer of matter, such as conduction, radiation, and friction. Heat refers to a quantity in transfer between systems, not to a property of any one system, or "contained" within it; on the other hand, internal energy and enthalpy are properties of a single system. Heat and work depend on the way in which an energy transfer occurs. In contrast, internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there.
== Macroscopic thermal energy ==
In addition to the microscopic kinetic energies of its molecules, the internal energy of a body includes chemical energy belonging to distinct molecules, and the global joint potential energy involved in the interactions between molecules and suchlike. Thermal energy may be viewed as contributing to internal energy or to enthalpy.
=== Chemical internal energy ===
The internal energy of a body can change in a process in which chemical potential energy is converted into non-chemical energy. In such a process, the thermodynamic system can change its internal energy by doing work on its surroundings, or by gaining or losing energy as heat. It is not quite accurate to say merely that "the converted chemical potential energy has simply become internal energy". It is, however, sometimes convenient to say that "the chemical potential energy has been converted into thermal energy". This is expressed in ordinary traditional language by talking of 'heat of reaction'.
=== Potential energy of internal interactions ===
In a body of material, especially in condensed matter, such as a liquid or a solid, in which the constituent particles, such as molecules or ions, interact strongly with one another, the energies of such interactions contribute strongly to the internal energy of the body. Still, they are not immediately apparent in the kinetic energies of molecules, as manifest in temperature. Such energies of interaction may be thought of as contributions to the global internal microscopic potential energies of the body.
== Microscopic thermal energy ==
In a statistical mechanical account of an ideal gas, in which the molecules move independently between instantaneous collisions, the internal energy is just the sum total of the gas's independent particles' kinetic energies, and it is this kinetic motion that is the source and the effect of the transfer of heat across a system's boundary. For a gas that does not have particle interactions except for instantaneous collisions, the term "thermal energy" is effectively synonymous with "internal energy".
In many statistical physics texts, "thermal energy" refers to {\displaystyle kT}, the product of the Boltzmann constant and the absolute temperature, also written as {\displaystyle k_{\text{B}}T}.
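For orientation, a two-line computation of this characteristic energy at an assumed room temperature of 300 K:

```python
k_B = 1.380649e-23           # Boltzmann constant in J/K (exact since 2019)
T = 300.0                    # an assumed room temperature in kelvin
E = k_B * T
print(E)                     # -> about 4.14e-21 J
print(E / 1.602176634e-19)   # -> about 0.026 eV, the familiar "~25 meV"
```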
== Thermal current density ==
When there is no accompanying flow of matter, the term "thermal energy" is also applied to the energy carried by a heat flow.
== See also ==
Geothermal energy
Geothermal heating
Geothermal power
Heat transfer
Ocean thermal energy conversion
Orders of magnitude (temperature)
Thermal energy storage
== References == | Wikipedia/Thermal_energy |
In mathematics, the composition operator takes two functions, {\displaystyle f} and {\displaystyle g}, and returns a new function {\displaystyle h(x):=(g\circ f)(x)=g(f(x))}. Thus, the function g is applied after applying f to x.
{\displaystyle (g\circ f)} is pronounced "the composition of g and f".
Reverse composition, sometimes denoted f ; g, applies the operation in the opposite order, applying {\displaystyle f} first and {\displaystyle g} second. Intuitively, reverse composition is a chaining process in which the output of function f feeds the input of function g.
The composition of functions is a special case of the composition of relations, sometimes also denoted by {\displaystyle \circ }. As a result, all properties of composition of relations are true of composition of functions, such as associativity.
== Examples ==
Composition of functions on a finite set: If f = {(1, 1), (2, 3), (3, 1), (4, 2)}, and g = {(1, 2), (2, 3), (3, 1), (4, 2)}, then g ∘ f = {(1, 2), (2, 1), (3, 2), (4, 3)}, as shown in the figure.
Composition of functions on an infinite set: If f: R → R (where R is the set of all real numbers) is given by f(x) = 2x + 4 and g: R → R is given by g(x) = x³, then (g ∘ f)(x) = g(f(x)) = (2x + 4)³, while (f ∘ g)(x) = f(g(x)) = 2x³ + 4 (see the sketch after this list).
If an airplane's altitude at time t is a(t), and the air pressure at altitude x is p(x), then (p ∘ a)(t) is the pressure around the plane at time t.
Functions defined on finite sets which change the order of their elements, such as permutations, can be composed on the same set, this being composition of permutations.
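The infinite-set example above can be checked directly in code. A minimal Python sketch (function names are illustrative):

```python
def f(x):
    return 2 * x + 4

def g(x):
    return x ** 3

def g_after_f(x):
    """(g o f)(x) = g(f(x)): apply f first, then g."""
    return g(f(x))

print(g_after_f(1))   # (2*1 + 4)**3 = 216
print(f(g(1)))        # (f o g)(1) = 2*1**3 + 4 = 6, so g o f and f o g differ
```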
== Properties ==
The composition of functions is always associative—a property inherited from the composition of relations. That is, if f, g, and h are composable, then f ∘ (g ∘ h) = (f ∘ g) ∘ h. Since the parentheses do not change the result, they are generally omitted.
In a strict sense, the composition g ∘ f is only meaningful if the codomain of f equals the domain of g; in a wider sense, it is sufficient that the former be a subset of the latter.
Moreover, it is often convenient to tacitly restrict the domain of f, such that f produces only values in the domain of g. For example, the composition g ∘ f of the functions f : R → (−∞,+9] defined by f(x) = 9 − x² and g : [0,+∞) → R defined by {\displaystyle g(x)={\sqrt {x}}} can be defined on the interval [−3,+3].
The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, |x| + 3 = |x + 3| only when x ≥ 0. The picture shows another example.
The composition of one-to-one (injective) functions is always one-to-one. Similarly, the composition of onto (surjective) functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition (assumed invertible) has the property that (f ∘ g)−1 = g−1∘ f−1.
Derivatives of compositions involving differentiable functions can be found using the chain rule. Higher derivatives of such functions are given by Faà di Bruno's formula.
Composition of functions is sometimes described as a kind of multiplication on a function space, but has very different properties from pointwise multiplication of functions (e.g. composition is not commutative).
== Composition monoids ==
Suppose one has two (or more) functions f: X → X, g: X → X having the same domain and codomain; these are often called transformations. Then one can form chains of transformations composed together, such as f ∘ f ∘ g ∘ f. Such chains have the algebraic structure of a monoid, called a transformation monoid or (much more seldom) a composition monoid. In general, transformation monoids can have remarkably complicated structure. One particular notable example is the de Rham curve. The set of all functions f: X → X is called the full transformation semigroup or symmetric semigroup on X. (One can actually define two semigroups depending how one defines the semigroup operation as the left or right composition of functions.)
If the given transformations are bijective (and thus invertible), then the set of all possible combinations of these functions forms a transformation group (also known as a permutation group); and one says that the group is generated by these functions.
The set of all bijective functions f: X → X (called permutations) forms a group with respect to function composition. This is the symmetric group, also sometimes called the composition group. A fundamental result in group theory, Cayley's theorem, essentially says that any group is in fact just a subgroup of a symmetric group (up to isomorphism).
In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse) because the symmetric semigroup is a regular semigroup.
== Functional powers ==
If Y ⊆ X, then {\displaystyle f:X\to Y} may compose with itself; this is sometimes denoted as {\displaystyle f^{2}}. That is: {\displaystyle (f\circ f)(x)=f(f(x))=f^{2}(x).}
More generally, for any natural number n ≥ 2, the nth functional power can be defined inductively by f n = f ∘ f n−1 = f n−1 ∘ f, a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel. Repeated composition of such a function with itself is called function iteration.
By convention, f 0 is defined as the identity map on f 's domain, idX.
If Y = X and f: X → X admits an inverse function f −1, negative functional powers f −n are defined for n > 0 as the negated power of the inverse function: f −n = (f −1)n.
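A small sketch of non-negative functional powers, with f⁰ as the identity (illustrative names; negative powers would require an inverse function as described above):

```python
def power(f, n):
    """Return the n-th functional power f o f o ... o f (n >= 0)."""
    def iterate(x):
        for _ in range(n):
            x = f(x)
        return x
    return iterate

double = lambda x: 2 * x
print(power(double, 3)(5))   # double(double(double(5))) = 40
print(power(double, 0)(5))   # the identity map: 5
```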
Note: If f takes its values in a ring (in particular for real or complex-valued f ), there is a risk of confusion, as f n could also stand for the n-fold product of f, e.g. f 2(x) = f(x) · f(x). For trigonometric functions, usually the latter is meant, at least for positive exponents. For example, in trigonometry, this superscript notation represents standard exponentiation when used with trigonometric functions:
sin2(x) = sin(x) · sin(x).
However, for negative exponents (especially −1), it nevertheless usually refers to the inverse function, e.g., tan−1 = arctan ≠ 1/tan.
In some cases, when, for a given function f, the equation g ∘ g = f has a unique solution g, that function can be defined as the functional square root of f, then written as g = f 1/2.
More generally, when gn = f has a unique solution for some natural number n > 0, then f m/n can be defined as gm.
Under additional restrictions, this idea can be generalized so that the iteration count becomes a continuous parameter; in this case, such a system is called a flow, specified through solutions of Schröder's equation. Iterated functions and flows occur naturally in the study of fractals and dynamical systems.
To avoid ambiguity, some mathematicians choose to use ∘ to denote the compositional meaning, writing f∘n(x) for the n-th iterate of the function f(x), as in, for example, f∘3(x) meaning f(f(f(x))). For the same purpose, f[n](x) was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested nf(x) instead.
== Alternative notations ==
Many mathematicians, particularly in group theory, omit the composition symbol, writing gf for g ∘ f.
During the mid-20th century, some mathematicians adopted postfix notation, writing xf for f(x) and (xf)g for g(f(x)). This can be more natural than prefix notation in many cases, such as in linear algebra when x is a row vector and f and g denote matrices and the composition is by matrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence.
Mathematicians who use postfix notation may write "fg", meaning first apply f and then apply g, in keeping with the order the symbols occur in postfix notation, thus making the notation "fg" ambiguous. Computer scientists may write "f ; g" for this, thereby disambiguating the order of composition. To distinguish the left composition operator from a text semicolon, in the Z notation the ⨾ character is used for left relation composition. Since all functions are binary relations, it is correct to use the [fat] semicolon for function composition as well (see the article on composition of relations for further details on this notation).
== Composition operator ==
Given a function g, the composition operator Cg is defined as that operator which maps functions to functions as {\displaystyle C_{g}f=f\circ g.}
Composition operators are studied in the field of operator theory.
== In programming languages ==
Function composition appears in one form or another in numerous programming languages.
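As one illustration, not tied to any particular language's standard library, a variadic compose can be written in a few lines of Python:

```python
from functools import reduce

def compose(*funcs):
    """compose(f, g, h)(x) == f(g(h(x))): right-to-left, like f o g o h."""
    return reduce(lambda f, g: lambda x: f(g(x)), funcs, lambda x: x)

inc = lambda x: x + 1
square = lambda x: x * x
print(compose(square, inc)(3))   # square(inc(3)) = 16
print(compose(inc, square)(3))   # inc(square(3)) = 10
```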
== Multivariate functions ==
Partial composition is possible for multivariate functions. The function resulting when some argument xi of the function f is replaced by the function g is called a composition of f and g in some computer engineering contexts, and is denoted {\displaystyle f|_{x_{i}=g}=f(x_{1},\ldots ,x_{i-1},g(x_{1},x_{2},\ldots ,x_{n}),x_{i+1},\ldots ,x_{n}).}
When g is a simple constant b, composition degenerates into a (partial) valuation, whose result is also known as restriction or co-factor: {\displaystyle f|_{x_{i}=b}=f(x_{1},\ldots ,x_{i-1},b,x_{i+1},\ldots ,x_{n}).}
In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition of primitive recursive function. Given f, an n-ary function, and n m-ary functions g1, ..., gn, the composition of f with g1, ..., gn is the m-ary function {\displaystyle h(x_{1},\ldots ,x_{m})=f(g_{1}(x_{1},\ldots ,x_{m}),\ldots ,g_{n}(x_{1},\ldots ,x_{m})).}
This is sometimes called the generalized composite or superposition of f with g1, ..., gn. The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosen projection functions. Here g1, ..., gn can be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition.
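A concrete sketch of generalized composition with n = m = 2: f is binary, and so are g1 and g2, giving h(x, y) = f(g1(x, y), g2(x, y)). All functions here are arbitrary illustrative choices.

```python
def f(u, v):
    return u + v          # the outer binary function

def g1(x, y):
    return x * y          # first argument function

def g2(x, y):
    return x - y          # second argument function

def h(x, y):
    """The superposition f(g1(x, y), g2(x, y))."""
    return f(g1(x, y), g2(x, y))

print(h(4, 2))   # 4*2 + (4 - 2) = 10
```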
A set of finitary operations on some base set X is called a clone if it contains all projections and is closed under generalized composition. A clone generally contains operations of various arities. The notion of commutation also finds an interesting generalization in the multivariate case; a function f of arity n is said to commute with a function g of arity m if f is a homomorphism preserving g, and vice versa, that is:
{\displaystyle f(g(a_{11},\ldots ,a_{1m}),\ldots ,g(a_{n1},\ldots ,a_{nm}))=g(f(a_{11},\ldots ,a_{n1}),\ldots ,f(a_{1m},\ldots ,a_{nm})).}
A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is called medial or entropic.
== Generalizations ==
Composition can be generalized to arbitrary binary relations.
If R ⊆ X × Y and S ⊆ Y × Z are two binary relations, then their composition amounts to {\displaystyle R\circ S=\{(x,z)\in X\times Z:(\exists y\in Y)((x,y)\in R\,\land \,(y,z)\in S)\}}.
Considering a function as a special case of a binary relation (namely functional relations), function composition satisfies the definition for relation composition. A small circle R∘S has been used for the infix notation of composition of relations, as well as functions. When used to represent composition of functions {\displaystyle (g\circ f)(x)=g(f(x))}, however, the text sequence is reversed to illustrate the different operation sequences accordingly.
The composition is defined in the same way for partial functions and Cayley's theorem has its analogue called the Wagner–Preston theorem.
The category of sets with functions as morphisms is the prototypical category. The axioms of a category are in fact inspired from the properties (and also the definition) of function composition. The structures given by composition are axiomatized and generalized in category theory with the concept of morphism as the category-theoretical replacement of functions. The reversed order of composition in the formula (f ∘ g)−1 = (g−1 ∘ f −1) applies for composition of relations using converse relations, and thus in group theory. These structures form dagger categories.

The standard "foundation" for mathematics starts with sets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions.
. . . the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions. Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms (like functions) form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics.
- Saunders Mac Lane, Mathematics: Form and Function
== Typography ==
The composition symbol ∘ is encoded as U+2218 ∘ RING OPERATOR; see the Degree symbol article for similar-appearing Unicode characters. In TeX, it is written \circ.
== See also ==
Cobweb plot – a graphical technique for functional composition
Combinatory logic
Composition ring, a formal axiomatization of the composition operation
Flow (mathematics)
Function composition (computer science)
Function of random variable, distribution of a function of a random variable
Functional decomposition
Functional square root
Functional equation
Higher-order function
Infinite compositions of analytic functions
Iterated function
Lambda calculus
== Notes ==
== References ==
== External links ==
"Composite function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Composition of Functions" by Bruce Atwood, the Wolfram Demonstrations Project, 2007. | Wikipedia/Composite_function |
In mathematical logic, abstract algebraic logic is the study of the algebraization of deductive systems arising as an abstraction of the well-known Lindenbaum–Tarski algebra, and of how the resulting algebras are related to logical systems.
== History ==
The archetypal association of this kind, one fundamental to the historical origins of algebraic logic and lying at the heart of all subsequently developed subtheories, is the association between the class of Boolean algebras and classical propositional calculus. This association was discovered by George Boole in the 1850s, and then further developed and refined by others, especially C. S. Peirce and Ernst Schröder, from the 1870s to the 1890s. This work culminated in Lindenbaum–Tarski algebras, devised by Alfred Tarski and his student Adolf Lindenbaum in the 1930s. Later, Tarski and his American students (whose ranks include Don Pigozzi) went on to discover cylindric algebra, whose representable instances algebraize all of classical first-order logic, and revived relation algebra, whose models include all well-known axiomatic set theories.
Classical algebraic logic, which comprises all work in algebraic logic until about 1960, studied the properties of specific classes of algebras used to "algebraize" specific logical systems of particular interest to specific logical investigations. Generally, the algebra associated with a logical system was found to be a type of lattice, possibly enriched with one or more unary operations other than lattice complementation.
Abstract algebraic logic is a modern subarea of algebraic logic that emerged in Poland during the 1950s and 60s with the work of Helena Rasiowa, Roman Sikorski, Jerzy Łoś, and Roman Suszko (to name but a few). It reached maturity in the 1980s with the seminal publications of the Polish logician Janusz Czelakowski, the Dutch logician Wim Blok and the American logician Don Pigozzi. The focus of abstract algebraic logic shifted from the study of specific classes of algebras associated with specific logical systems (the focus of classical algebraic logic), to the study of:
Classes of algebras associated with classes of logical systems whose members all satisfy certain abstract logical properties;
The process by which a class of algebras becomes the "algebraic counterpart" of a given logical system;
The relation between metalogical properties satisfied by a class of logical systems, and the corresponding algebraic properties satisfied by their algebraic counterparts.
The passage from classical algebraic logic to abstract algebraic logic may be compared to the passage from "modern" or abstract algebra (i.e., the study of groups, rings, modules, fields, etc.) to universal algebra (the study of classes of algebras of arbitrary similarity types (algebraic signatures) satisfying specific abstract properties).
The two main motivations for the development of abstract algebraic logic are closely connected to (1) and (3) above. With respect to (1), a critical step in the transition was initiated by the work of Rasiowa. Her goal was to abstract results and methods known to hold for the classical propositional calculus and Boolean algebras and some other closely related logical systems, in such a way that these results and methods could be applied to a much wider variety of propositional logics.
(3) owes much to the joint work of Blok and Pigozzi exploring the different forms that the well-known deduction theorem of classical propositional calculus and first-order logic takes on in a wide variety of logical systems. They related these various forms of the deduction theorem to the properties of the algebraic counterparts of these logical systems.
Abstract algebraic logic has become a well established subfield of algebraic logic, with many deep and interesting results. These results explain many properties of different classes of logical systems previously explained only on a case-by-case basis or shrouded in mystery. Perhaps the most important achievement of abstract algebraic logic has been the classification of propositional logics in a hierarchy, called the abstract algebraic hierarchy or Leibniz hierarchy, whose different levels roughly reflect the strength of the ties between a logic at a particular level and its associated class of algebras. The position of a logic in this hierarchy determines the extent to which that logic may be studied using known algebraic methods and techniques. Once a logic is assigned to a level of this hierarchy, one may draw on the powerful arsenal of results, accumulated over the past 30-odd years, governing the algebras situated at the same level of the hierarchy.
The similar terms 'general algebraic logic' and 'universal algebraic logic' refer to the approach of the Hungarian school, including Hajnal Andréka, István Németi, and others.
== Examples ==
== See also ==
Abstract algebra
Algebraic logic
Abstract model theory
Hierarchy (mathematics)
Model theory
Variety (universal algebra)
Universal logic
== Notes ==
== References ==
Blok, W., Pigozzi, D, 1989. Algebraizable logics. Memoirs of the AMS, 77(396). Also available for download from Pigozzi's home page
Czelakowski, J., 2001. Protoalgebraic Logics. Kluwer. ISBN 0-7923-6940-8. Considered "an excellent and very readable introduction to the area of abstract algebraic logic" by Mathematical Reviews
Czelakowski, J. (editor), 2018, Don Pigozzi on Abstract Algebraic Logic, Universal Algebra, and Computer Science, Outstanding Contributions to Logic Volume 16, Springer International Publishing, ISBN 978-3-319-74772-9
Font, J. M., 2003. An Abstract Algebraic Logic view of some multiple-valued logics. In M. Fitting & E. Orlowska (eds.), Beyond two: theory and applications of multiple-valued logic, Springer-Verlag, pp. 25–57.
Font, J. M., Jansana, R., 1996. A General Algebraic Semantics for Sentential Logics. Lecture Notes in Logic 7, Springer-Verlag. (2nd edition published by ASL in 2009) Also open access at Project Euclid
Font, J. M., and Pigozzi, D., 2003, A survey of abstract algebraic logic, Studia Logica 74: 13–79.
Ryszard Wójcicki (1988). Theory of logical calculi: basic theory of consequence operations. Springer. ISBN 978-90-277-2785-5.
Andréka, H., Németi, I.: General algebraic logic: A perspective on "what is logic", in D. Gabbay (ed.): What is a logical system?, Clarendon Press, 1994, pp. 485–569.
D. Pigozzi (2001). "Abstract algebraic logic". In M. Hazewinkel (ed.). Encyclopaedia of Mathematics: Supplement Volume III. Springer. pp. 2–13. ISBN 1-4020-0198-3. online at "Abstract algebraic logic", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
== External links ==
Stanford Encyclopedia of Philosophy: "Algebraic Propositional Logic"—by Ramon Jansana. | Wikipedia/Abstract_algebraic_logic |
In propositional calculus, a propositional function or a predicate is a sentence expressed in a way that would assume the value of true or false, except that within the sentence there is a variable (x) that is not defined or specified (thus being a free variable), which leaves the statement undetermined. The sentence may contain several such variables (e.g. n variables, in which case the function takes n arguments).
== Overview ==
As a mathematical function, A(x) or A(x1, x2, ..., xn), the propositional function is abstracted from predicates or propositional forms. As an example, consider the predicate scheme, "x is hot". The substitution of any entity for x will produce a specific proposition that can be described as either true or false, even though "x is hot" on its own has no value as either a true or false statement. However, when a value is assigned to x, such as lava, the function then has the value true; while one assigns to x a value like ice, the function then has the value false.
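In programming terms, such a predicate is simply a boolean-valued function; once the free variable receives a value, the result is a definite truth value. A small Python sketch (the numeric threshold is an illustrative assumption, not part of the source):

```python
HOT_THRESHOLD_C = 100.0   # hypothetical cut-off for "hot", in Celsius

def is_hot(temperature_c: float) -> bool:
    """The propositional function A(x): 'x is hot'."""
    return temperature_c > HOT_THRESHOLD_C

print(is_hot(700.0))   # a lava-like temperature -> True
print(is_hot(-5.0))    # an ice-like temperature -> False
```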
Propositional functions are useful in set theory for the formation of sets. For example, in 1903 Bertrand Russell wrote in The Principles of Mathematics (page 106):
"...it has become necessary to take propositional function as a primitive notion.
Later Russell examined the problem of whether propositional functions were predicative or not, and he proposed two theories to try to get at this question: the zig-zag theory and the ramified theory of types.
A propositional function, or predicate, in a variable x is an open formula p(x) involving x that becomes a proposition when one gives x a definite value from the set of values it can take.
According to Clarence Lewis, "A proposition is any expression which is either true or false; a propositional function is an expression, containing one or more variables, which becomes a proposition when each of the variables is replaced by some one of its values from a discourse domain of individuals." Lewis used the notion of propositional functions to introduce relations, for example, a propositional function of n variables is a relation of arity n. The case of n = 2 corresponds to binary relations, of which there are homogeneous relations (both variables from the same set) and heterogeneous relations.
== See also ==
Propositional formula
Boolean-valued function
Formula (logic)
Sentence (logic)
Truth function
Open sentence
== References == | Wikipedia/Propositional_function |
The Socratic method (also known as the method of Elenchus or Socratic debate) is a form of argumentative dialogue between individuals based on asking and answering questions. Socratic dialogues feature in many of the works of the ancient Greek philosopher Plato, where his teacher Socrates debates various philosophical issues with an "interlocutor" or "partner".
In Plato's dialogue "Theaetetus", Socrates describes his method as a form of "midwifery" because it is employed to help his interlocutors develop their understanding in a way analogous to a child developing in the womb. The Socratic method begins with commonly held beliefs and scrutinizes them by way of questioning to determine their internal consistency and their coherence with other beliefs and so to bring everyone closer to the truth.
In modified forms, it is employed today in a variety of pedagogical contexts.
== Development ==
In the second half of the 5th century BC, sophists were teachers who specialized in using the tools of philosophy and rhetoric to entertain, impress, or persuade an audience to accept the speaker's point of view. Socrates promoted an alternative method of teaching, which came to be called the Socratic method.
Socrates began to engage in such discussions with his fellow Athenians after his friend from youth, Chaerephon, visited the Oracle of Delphi, which asserted that no man in Greece was wiser than Socrates. Socrates saw this as a paradox, and began using the Socratic method to answer his conundrum. Diogenes Laërtius, however, wrote that Protagoras invented the "Socratic" method.
Plato famously formalized the Socratic elenctic style in prose—presenting Socrates as the curious questioner of some prominent Athenian interlocutor—in some of his early dialogues, such as Euthyphro and Ion, and the method is most commonly found within the so-called "Socratic dialogues", which generally portray Socrates engaging in the method and questioning his fellow citizens about moral and epistemological issues. But in his later dialogues, such as Theaetetus or Sophist, Plato used a different method in philosophical discussions, namely dialectic.
== Method ==
Elenchus (Ancient Greek: ἔλεγχος, romanized: elenkhos, lit. 'argument of disproof or refutation; cross-examining, testing, scrutiny esp. for purposes of refutation') is the central technique of the Socratic method. The Latin form elenchus (plural elenchi) is used in English as the technical philosophical term. The most common adjectival form in English is elenctic; elenchic and elenchtic are also current. This was also very important in Plato's early dialogues.
Socrates (as depicted by Plato) generally applied his method of examination to concepts such as the virtues of piety, wisdom, temperance, courage, and justice. Such an examination challenged the implicit moral beliefs of the interlocutors, bringing out inadequacies and inconsistencies in their beliefs, and usually resulting in aporia. In view of such inadequacies, Socrates himself professed ignorance. Socrates said that his awareness of his ignorance made him wiser than those who, though ignorant, still claimed knowledge. This claim was based on a reported Delphic oracular pronouncement that no man was wiser than Socrates. While this belief seems paradoxical at first glance, in fact it allowed Socrates to discover his own errors.
Socrates used this claim of wisdom as the basis of moral exhortation. He claimed that the chief goodness consists in the caring of the soul concerned with moral truth and moral understanding, that "wealth does not bring goodness, but goodness brings wealth and every other blessing, both to the individual and to the state", and that "life without examination [dialogue] is not worth living".
Socrates rarely used the method to actually develop consistent theories, and he even made frequent use of creative myths and allegories. The Parmenides dialogue shows Parmenides using the Socratic method to point out the flaws in the Platonic theory of forms, as presented by Socrates; it is not the only dialogue in which theories normally expounded by Plato's Socrates are broken down through dialectic. Instead of arriving at answers, the method breaks down the theories we hold, to go "beyond" the axioms and postulates we take for granted. Therefore, myth and the Socratic method are not meant by Plato to be incompatible; they have different purposes, and are often described as the "left hand" and "right hand" paths to good and wisdom.
=== Scholarly debate ===
In Plato's early dialogues, the elenchus is the technique Socrates uses to investigate, for example, the nature or definition of ethical concepts such as justice or virtue. According to Gregory Vlastos, it has the following steps:
Socrates' interlocutor asserts a thesis, for example "Courage is endurance of the soul".
Socrates decides whether the thesis is false and targets for refutation.
Socrates secures his interlocutor's agreement to further premises, for example "Courage is a fine thing" and "Ignorant endurance is not a fine thing".
Socrates then argues, and the interlocutor agrees, these further premises imply the contrary of the original thesis; in this case, it leads to: "courage is not endurance of the soul".
Socrates then claims he has shown his interlocutor's thesis is false and its negation is true.
One elenctic examination can lead to a new, more refined, examination of the concept being considered, in this case it invites an examination of the claim: "Courage is wise endurance of the soul". Most Socratic inquiries consist of a series of elenchi and typically end in puzzlement known as aporia.
Michael Frede points out Vlastos' conclusion in step No. 5 above makes nonsense of the aporetic nature of the early dialogues. Having shown a proposed thesis is false is insufficient to conclude some other competing thesis must be true. Rather, the interlocutors have reached aporia, an improved state of still not knowing what to say about the subject under discussion.
The exact nature of the elenchus is subject to a great deal of debate, in particular concerning whether it is a positive method, leading to knowledge, or a negative method used solely to refute false claims to knowledge. Some qualitative research shows that the use of the Socratic method within a traditional Yeshiva education setting helps students succeed in law school, although it remains an open question as to whether that relationship is causal or merely correlative.
Yet, W. K. C. Guthrie in The Greek Philosophers sees it as an error to regard the Socratic method as a means by which one seeks the answer to a problem, or knowledge. Guthrie claims that the Socratic method actually aims to demonstrate one's ignorance. Socrates, unlike the Sophists, did believe that knowledge was possible, but believed that the first step to knowledge was recognition of one's ignorance. Guthrie writes, "[Socrates] was accustomed to say that he did not himself know anything, and that the only way in which he was wiser than other men was that he was conscious of his own ignorance, while they were not. The essence of the Socratic method is to convince the interlocutor that whereas he thought he knew something, in fact he does not."
== Modern applications ==
=== Socratic seminar ===
A Socratic seminar (also known as a Socratic circle) is a pedagogical approach based on the Socratic method and uses a dialogic approach to understand information in a text. Its systematic procedure is used to examine a text through questions and answers founded on the beliefs that all new knowledge is connected to prior knowledge, that all thinking comes from asking questions, and that asking one question should lead to asking further questions. A Socratic seminar is not a debate. The goal of this activity is to have participants work together to construct meaning and arrive at an answer, not for one student or one group to "win the argument".
This approach is based on the belief that participants seek and gain deeper understanding of concepts in the text through thoughtful dialogue rather than memorizing information that has been provided for them. While Socratic seminars can differ in structure, and even in name, they typically involve a passage of text that students must read beforehand and facilitate dialogue. Sometimes, a facilitator will structure two concentric circles of students: an outer circle and an inner circle. The inner circle focuses on exploring and analysing the text through the act of questioning and answering. During this phase, the outer circle remains silent. Students in the outer circle are much like scientific observers watching and listening to the conversation of the inner circle. When the text has been fully discussed and the inner circle is finished talking, the outer circle provides feedback on the dialogue that took place. This process alternates with the inner circle students going to the outer circle for the next meeting and vice versa. The length of this process varies depending on the text used for the discussion. The teacher may decide to alternate groups within one meeting, or they may alternate at each separate meeting.
The most significant difference between this activity and most typical classroom activities involves the role of the teacher. In Socratic seminar, the students lead the discussion and questioning. The teacher's role is to ensure the discussion advances regardless of the particular direction the discussion takes.
==== Various approaches to Socratic seminar ====
Teachers use Socratic seminar in different ways. The structure it takes may look different in each classroom. While this is not an exhaustive list, teachers may use one of the following structures to administer Socratic seminar:
Inner/outer circle or fishbowl: Students need to be arranged in inner and outer circles. The inner circle engages in discussion about the text. The outer circle observes the inner circle, while taking notes. The outer circle shares their observations and questions the inner circle with guidance from the teacher/facilitator. Students use constructive criticism as opposed to making judgements. The students on the outside keep track of topics they would like to discuss as part of the debrief. Participants in the outer circle can use an observation checklist or notes form to monitor the participants in the inner circle. These tools will provide structure for listening and give the outside members specific details to discuss later in the seminar. The teacher may also sit in the circle but at the same height as the students.
Triad: Students are arranged so that each participant (called a "pilot") in the inner circle has two "co-pilots" sitting behind them on either side. Pilots are the speakers because they are in the inner circle; co-pilots are in the outer circle and only speak during consultation. The seminar proceeds as any other seminar. At a point in the seminar, the facilitator pauses the discussion and instructs the triad to talk to each other. Conversation will be about topics that need more in-depth discussion or a question posed by the leader. Sometimes triads will be asked by the facilitator to come up with a new question. Any time during a triad conversation, group members can switch seats and one of the co-pilots can sit in the pilot's seat. Only during that time is the switching of seats allowed. This structure allows for students to speak, who may not yet have the confidence to speak in the large group. This type of seminar involves all students instead of just the students in the inner and outer circles.
Simultaneous seminars: Students are arranged in multiple small groups and placed as far as possible from each other. Following the guidelines of the Socratic seminar, students engage in small group discussions. Simultaneous seminars are typically done with experienced students who need little guidance and can engage in a discussion without assistance from a teacher/facilitator. According to the literature, this type of seminar is beneficial for teachers who want students to explore a variety of texts around a main issue or topic. Each small group may have a different text to read/view and discuss. A larger Socratic seminar can then occur as a discussion about how each text corresponds with the others. Simultaneous seminars can also be used for a particularly difficult text. Students can work through different issues and key passages from the text.
No matter what structure the teacher employs, the basic premise of the seminar/circles is to turn partial control and direction of the classroom over to the students. The seminars encourage students to work together, creating meaning from the text and to stay away from trying to find a correct interpretation. The emphasis is on critical and creative thinking.
==== Text selection ====
===== Socratic seminar texts =====
A Socratic seminar text is a tangible document that creates a thought-provoking discussion.
The text ought to be appropriate for the participants' current level of intellectual and social development. It provides the anchor for dialogue whereby the facilitator can bring the participants back to the text if they begin to digress. Furthermore, the seminar text enables the participants to create a level playing field – ensuring that the dialogical tone within the classroom remains consistent and pure to the subject or topic at hand. Some practitioners argue that "texts" do not have to be confined to printed texts, but can include artifacts such as objects, physical spaces, and the like.
===== Pertinent elements of an effective Socratic text =====
Socratic seminar texts are able to challenge participants' thinking skills by having these characteristics:
Ideas and values: The text must introduce ideas and values that are complex and difficult to summarize. Powerful discussions arise from personal connections to abstract ideas and from implications to personal values.
Complexity and challenge: The text must be rich in ideas and complexity and open to interpretation. Ideally it should require multiple readings, but should be neither far above the participants' intellectual level nor very long.
Relevance to participants' curriculum: An effective text has identifiable themes that are recognizable and pertinent to the lives of the participants. Themes in the text should relate to the curriculum.
Ambiguity: The text must be approachable from a variety of different perspectives, including perspectives that seem mutually exclusive, thus provoking critical thinking and raising important questions. The absence of right and wrong answers promotes a variety of discussion and encourages individual contributions.
===== Two different ways to select a text =====
Socratic texts can be divided into two main categories:
Print texts (e.g., short stories, poems, and essays) and non-print texts (e.g. photographs, sculptures, and maps); and
Subject area, which can draw from print or non-print artifacts. As examples, language arts can be approached through poems, history through written or oral historical speeches, science through policies on environmental issues, math through mathematical proofs, health through nutrition labels, and physical education through fitness guidelines.
==== Questioning methods ====
Socratic seminars are based upon the interaction of peers. The focus is to explore multiple perspectives on a given issue or topic. Socratic questioning is used to help students apply the activity to their learning. The pedagogy of Socratic questions is open-ended, focusing on broad, general ideas rather than specific, factual information. The questioning technique emphasizes a level of questioning and thinking where there is no single right answer.
Socratic seminars generally start with an open-ended question proposed either by the leader or by another participant. There is no designated first speaker; as individuals participate in Socratic dialogue, they gain experience that enables them to be effective in this role of initial questioner.
The leader keeps the topic focused by asking a variety of questions about the text itself, as well as questions to help clarify positions when arguments become confused. The leader also seeks to coax reluctant participants into the discussion, and to limit contributions from those who tend to dominate. She or he prompts participants to elaborate on their responses and to build on what others have said. The leader guides participants to deepen, clarify, and paraphrase, and to synthesize a variety of different views.
The participants share the responsibility with the leader to maintain the quality of the Socratic circle. They listen actively to respond effectively to what others have contributed. This teaches the participants to think and speak persuasively using the discussion to support their position. Participants must demonstrate respect for different ideas, thoughts and values, and must not interrupt each other.
Questions can be created individually or in small groups. All participants are given the opportunity to take part in the discussion. Socratic circles specify three types of questions to prepare:
Opening questions generate discussion at the beginning of the seminar in order to elicit dominant themes.
Guiding questions help deepen and elaborate the discussion, keeping contributions on topic and encouraging a positive atmosphere and consideration for others.
Closing questions lead participants to summarize their thoughts and learning and personalize what they've discussed.
=== Challenges and disadvantages ===
Scholars such as Peter Boghossian suggest that although the method improves creative and critical thinking, there is a flip side to the method. He states that the teachers who use this method wait for the students to make mistakes, thus creating negative feelings in the class, exposing the student to possible ridicule and humiliation.
Some have countered this thought by stating that the humiliation and ridicule are caused not by the method but by the student's lack of knowledge. Boghossian notes that even though the questions may be perplexing, they are not originally intended to humiliate; rather, such questions provoke the students and can be met with counterexamples.
== Psychotherapy ==
The Socratic method, in the form of Socratic questioning, has been adapted for psychotherapy, most prominently in classical Adlerian psychotherapy, logotherapy, rational emotive behavior therapy, cognitive therapy and reality therapy. It can be used to clarify meaning, feeling, and consequences, as well as to gradually unfold insight, or explore alternative actions.
The Socratic method has also recently inspired a new form of applied philosophy: Socratic dialogue, also called philosophical counseling. In Europe Gerd B. Achenbach is probably the best known practitioner, and Michel Weber has also proposed another variant of the practice.
== See also ==
Devil's advocate
Harkness table – a teaching method based on the Socratic method
Marva Collins
Pedagogy
The Paper Chase – 1973 film based on a 1971 novel of the same name, dramatizing the use of the Socratic method in law school classes
Socrates Cafe
Socratic questioning
Socratic irony
== References ==
== External links ==
Robinson, Richard, Plato's Earlier Dialectic, 2nd edition (Clarendon Press, Oxford, 1953).
Ch. 2: Elenchus;
Ch. 3: Elenchus: Direct and Indirect
Philosopher.org – 'Tips on Starting your own Socrates Cafe', Christopher Phillips, Cecilia Phillips
Socraticmethod.net Socratic Method Research Portal
How to Use the Socratic Method
UChicago.edu – 'The Socratic Method' by Elizabeth Garrett (1998)
Teaching by Asking Instead of by Telling, an example from Rick Garlikov
Project Gutenberg: Works by Plato
Project Gutenberg: Works by Xenophon (includes some Socratic works)
Project Gutenberg: Works by Cicero (includes some works in the "Socratic dialogue" format)
The Socratic Club
Socratic and Scientific Method | Wikipedia/Socratic_method |
In mathematical logic, the Lindenbaum–Tarski algebra (or Lindenbaum algebra) of a logical theory T consists of the equivalence classes of sentences of the theory (i.e., the quotient, under the equivalence relation ~ defined such that p ~ q exactly when p and q are provably equivalent in T). That is, two sentences are equivalent if the theory T proves that each implies the other. The Lindenbaum–Tarski algebra is thus the quotient algebra obtained by factoring the algebra of formulas by this congruence relation.
The algebra is named for logicians Adolf Lindenbaum and Alfred Tarski.
Starting in the academic year 1926–1927, Lindenbaum pioneered his method in Jan Łukasiewicz's mathematical logic seminar, and the method was popularized and generalized in subsequent decades through work by Tarski.
The Lindenbaum–Tarski algebra is considered the origin of modern algebraic logic.
== Operations ==
The operations in a Lindenbaum–Tarski algebra A are inherited from those in the underlying theory T. These typically include conjunction and disjunction, which are well-defined on the equivalence classes. When negation is also present in T, then A is a Boolean algebra, provided the logic is classical. If the theory T consists of the propositional tautologies, the Lindenbaum–Tarski algebra is the free Boolean algebra generated by the propositional variables.
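The quotient construction can be made concrete for classical propositional logic, where p ~ q holds exactly when p and q have the same truth table. The following Python sketch is our own illustration (the formula strings are invented); it groups formulas into their ~-classes, which are the elements of the Lindenbaum–Tarski algebra:

```python
from itertools import product

def truth_table(formula):
    """Evaluate `formula` (a Python boolean expression in the variables p
    and q) at every valuation; the resulting tuple of truth values
    identifies the formula's equivalence class under ~."""
    return tuple(
        bool(eval(formula, {"p": p, "q": q}))
        for p, q in product([False, True], repeat=2)
    )

formulas = ["p and q", "not (not p or not q)", "p or not p", "q or not q", "p"]

classes = {}  # truth table -> formulas provably equivalent in classical logic
for f in formulas:
    classes.setdefault(truth_table(f), []).append(f)

for table, members in classes.items():
    print(table, "<-", members)
# "p and q" and its De Morgan rewriting land in one class; the two
# tautologies land in the class that serves as the top element 1.
```

With n variables there are 2^(2^n) distinct truth tables, recovering the free Boolean algebra on n generators mentioned above.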
If T is closed under deduction, then T/~ (the set of equivalence classes of its theorems) is a filter in A. Moreover, an ultrafilter in A corresponds to a complete consistent theory, establishing the equivalence between Lindenbaum's lemma and the ultrafilter lemma.
== Related algebras ==
Heyting algebras and interior algebras are the Lindenbaum–Tarski algebras for intuitionistic logic and the modal logic S4, respectively.
A logic for which Tarski's method is applicable is called algebraizable. There are, however, a number of logics where this is not the case, for instance the modal logics S1, S2, or S3, which lack the rule of necessitation (⊢φ implying ⊢□φ), so ~ (defined above) is not a congruence (because ⊢φ→ψ does not imply ⊢□φ→□ψ). Another type of logic where Tarski's method is inapplicable is relevance logics, because given two theorems an implication from one to the other may not itself be a theorem in a relevance logic. The study of the algebraization process (and notion) as a topic of interest in itself, not necessarily by Tarski's method, has led to the development of abstract algebraic logic.
== See also ==
Algebraic semantics (mathematical logic)
Leibniz operator
List of Boolean algebra topics
== References ==
| Wikipedia/Lindenbaum–Tarski_algebra
In abstract algebra, a monadic Boolean algebra is an algebraic structure A with signature
⟨·, +, ', 0, 1, ∃⟩ of type ⟨2,2,1,0,0,1⟩,
where ⟨A, ·, +, ', 0, 1⟩ is a Boolean algebra.
The monadic/unary operator ∃ denotes the existential quantifier, which satisfies the identities (using the received prefix notation for ∃):
∃0 = 0
∃x ≥ x
∃(x + y) = ∃x + ∃y
∃x∃y = ∃(x∃y).
∃x is the existential closure of x. Dual to ∃ is the unary operator ∀, the universal quantifier, defined as ∀x := (∃x′)′.
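As an illustration (our own, not part of the standard exposition), the identities above can be machine-checked in the simplest functional model: the Boolean algebra of all subsets of a small set X, with ∃a taken to be X when a is nonempty and the empty set otherwise. The set X and the function names are arbitrary choices:

```python
from itertools import combinations

X = frozenset({0, 1, 2})

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

B = powerset(X)  # the Boolean algebra of all subsets of X

def E(a):  # existential quantifier: the whole of X if a holds somewhere
    return X if a else frozenset()

assert E(frozenset()) == frozenset()              # ∃0 = 0
for x in B:
    assert x <= E(x)                              # ∃x ≥ x (the order is ⊆ here)
    for y in B:
        assert E(x | y) == E(x) | E(y)            # ∃(x + y) = ∃x + ∃y
        assert E(x) & E(y) == E(x & E(y))         # ∃x∃y = ∃(x∃y)

def A(a):  # derived universal quantifier, ∀x = (∃x')'
    return X - E(X - a)

assert all(A(x) <= x for x in B)                  # ∀x ≤ x
print("all identities verified on the power set of", set(X))
```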
A monadic Boolean algebra has a dual definition and notation that take ∀ as primitive and ∃ as defined, so that ∃x := (∀x′)′. (Compare this with the definition of the dual Boolean algebra.) Hence, with this notation, an algebra A has signature ⟨·, +, ', 0, 1, ∀⟩, with ⟨A, ·, +, ', 0, 1⟩ a Boolean algebra, as before. Moreover, ∀ satisfies the following dualized version of the above identities:
∀1 = 1
∀x ≤ x
∀(xy) = ∀x∀y
∀x + ∀y = ∀(x + ∀y).
∀x is the universal closure of x.
== Discussion ==
Monadic Boolean algebras have an important connection to topology. If ∀ is interpreted as the interior operator of topology, (1)–(3) above plus the axiom ∀(∀x) = ∀x make up the axioms for an interior algebra. But ∀(∀x) = ∀x can be proved from (1)–(4). Moreover, an alternative axiomatization of monadic Boolean algebras consists of the (reinterpreted) axioms for an interior algebra, plus ∀(∀x)' = (∀x)' (Halmos 1962: 22). Hence monadic Boolean algebras are the semisimple interior/closure algebras such that:
The universal (dually, existential) quantifier interprets the interior (closure) operator;
All open (or closed) elements are also clopen.
A more concise axiomatization of monadic Boolean algebra is (1) and (2) above, plus ∀(x∨∀y) = ∀x∨∀y (Halmos 1962: 21). This axiomatization obscures the connection to topology.
Monadic Boolean algebras form a variety. They are to monadic predicate logic what Boolean algebras are to propositional logic, and what polyadic algebras are to first-order logic. Paul Halmos discovered monadic Boolean algebras while working on polyadic algebras; Halmos (1962) reprints the relevant papers. Halmos and Givant (1998) includes an undergraduate treatment of monadic Boolean algebra.
Monadic Boolean algebras also have an important connection to modal logic. The modal logic S5, viewed as a theory in S4, is a model of monadic Boolean algebras in the same way that S4 is a model of interior algebra. Likewise, monadic Boolean algebras supply the algebraic semantics for S5. Hence S5-algebra is a synonym for monadic Boolean algebra.
== See also ==
Clopen set
Cylindric algebra
Interior algebra
Kuratowski closure axioms
Łukasiewicz–Moisil algebra
Modal logic
Monadic logic
== References ==
Paul Halmos, 1962. Algebraic Logic. New York: Chelsea.
------ and Steven Givant, 1998. Logic as Algebra. Mathematical Association of America. | Wikipedia/Monadic_Boolean_algebra |
In mathematical logic, a Boolean-valued model is a generalization of the ordinary Tarskian notion of structure from model theory. In a Boolean-valued model, the truth values of propositions are not limited to "true" and "false", but instead take values in some fixed complete Boolean algebra.
Boolean-valued models were introduced by Dana Scott, Robert M. Solovay, and Petr Vopěnka in the 1960s in order to help understand Paul Cohen's method of forcing. They are also related to Heyting algebra semantics in intuitionistic logic.
== Definition ==
Fix a complete Boolean algebra B and a first-order language L; the signature of L will consist of a collection of constant symbols, function symbols, and relation symbols.
A Boolean-valued model for the language L consists of a universe M, which is a set of elements (or names), together with interpretations for the symbols. Specifically, the model must assign to each constant symbol of L an element of M, and to each n-ary function symbol f of L and each n-tuple ⟨a0,...,an-1⟩ of elements of M, the model must assign an element of M to the term f(a0,...,an-1).
Interpretation of the atomic formulas of L is more complicated. To each pair a and b of elements of M, the model must assign a truth value ‖a = b‖ to the expression a = b; this truth value is taken from the Boolean algebra B. Similarly, for each n-ary relation symbol R of L and each n-tuple ⟨a0,...,an-1⟩ of elements of M, the model must assign an element of B to be the truth value ‖R(a0,...,an-1)‖.
== Interpretation of other formulas and sentences ==
The truth values of the atomic formulas can be used to reconstruct the truth values of more complicated formulas, using the structure of the Boolean algebra. For propositional connectives, this is easy; one simply applies the corresponding Boolean operators to the truth values of the subformulae. For example, if φ(x) and ψ(y,z) are formulas with one and two free variables, respectively, and if a, b, c are elements of the model's universe to be substituted for x, y, and z, then the truth value of
φ(a) ∧ ψ(b, c)
is simply
‖φ(a) ∧ ψ(b, c)‖ = ‖φ(a)‖ ∧ ‖ψ(b, c)‖
The completeness of the Boolean algebra is required to define truth values for quantified formulas. If φ(x) is a formula with free variable x (and possibly other free variables that are suppressed), then
‖∃x φ(x)‖ = ⋁_{a∈M} ‖φ(a)‖,
where the right-hand side is to be understood as the supremum in B of the set of all truth values ||φ(a)|| as a ranges over M.
The truth value of a formula is an element of the complete Boolean algebra B.
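The following Python sketch transcribes these clauses directly; it is our illustration, with the set of "states", the universe M, and the predicate values all invented. B is taken to be the power set of a finite set, which is a complete Boolean algebra under union, intersection, and complement:

```python
from functools import reduce

STATES = frozenset({"s1", "s2", "s3"})
TOP, BOT = STATES, frozenset()

M = ["a", "b", "c"]          # the universe of the model

# An arbitrary B-valued unary predicate ||P(x)|| : M -> B.
P = {
    "a": frozenset({"s1"}),
    "b": frozenset({"s2", "s3"}),
    "c": BOT,
}

def NOT(u):    return STATES - u
def AND(u, v): return u & v
def OR(u, v):  return u | v

def EXISTS(phi):  # ||∃x φ(x)|| = join over all a in M of ||φ(a)||
    return reduce(OR, (phi(a) for a in M), BOT)

def FORALL(phi):  # ||∀x φ(x)|| = meet over all a in M of ||φ(a)||
    return reduce(AND, (phi(a) for a in M), TOP)

print(EXISTS(lambda a: P[a]))      # all three states: P holds somewhere
print(FORALL(lambda a: P[a]))      # frozenset(): P does not hold everywhere
print(OR(P["a"], NOT(P["a"])))     # excluded middle receives value 1 (= TOP)
```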
== Boolean-valued models of set theory ==
Given a complete Boolean algebra B there is a Boolean-valued model denoted by VB, which is the Boolean-valued analogue of the von Neumann universe V. (Strictly speaking, VB is a proper class, so we need to reinterpret what it means to be a model appropriately.) Informally, the elements of VB are "Boolean-valued sets". Given an ordinary set A, every set either is or is not a member of A; but given a Boolean-valued set A, every set has a certain, fixed membership degree in A, taken from the Boolean algebra B.
The elements of the Boolean-valued set, in turn, are also Boolean-valued sets, whose elements are also Boolean-valued sets, and so on. In order to obtain a non-circular definition of Boolean-valued set, they are defined inductively in a hierarchy similar to the cumulative hierarchy. For each ordinal α of V, the set VB_α is defined as follows.
VB_0 is the empty set.
VB_(α+1) is the set of all functions from VB_α to B. (Such a function represents a subset of VB_α; if f is such a function, then for any x ∈ VB_α, the value f(x) is the membership degree of x in the set.)
If α is a limit ordinal, VB_α is the union of VB_β for β < α.
The class VB is defined to be the union of all sets VB_α.
It is also possible to relativize this entire construction to some transitive model M of ZF (or sometimes a fragment thereof). The Boolean-valued model MB is obtained by applying the above construction inside M. The restriction to transitive models is not serious, as the Mostowski collapsing theorem implies that every "reasonable" (well-founded, extensional) model is isomorphic to a transitive one. (If the model M is not transitive things get messier, as M's interpretation of what it means to be a "function" or an "ordinal" may differ from the "external" interpretation.)
Once the elements of VB have been defined as above, it is necessary to define B-valued relations of equality and membership on VB. Here a B-valued relation on VB is a function from VB × VB to B. To avoid confusion with the usual equality and membership, these are denoted by ‖x = y‖ and ‖x ∈ y‖ for x and y in VB. They are defined as follows:
‖x ∈ y‖ is defined to be Σt ∈ Dom(y) ‖x = t‖ ∧ y(t) ("x is in y if it is equal to something in y").
‖x = y‖ is defined to be ‖x ⊆ y‖∧‖y ⊆ x‖ ("x equals y if x and y are both subsets of each other"), where
‖x ⊆ y‖ is defined to be Πt ∈ Dom(x) x(t) ⇒ ‖t ∈ y‖ ("x is a subset of y if all elements of x are in y")
The symbols Σ and Π denote the least upper bound and greatest lower bound operations, respectively, in the complete Boolean algebra B. At first sight the definitions above appear to be circular: ‖∈‖ depends on ‖=‖, which depends on ‖⊆‖, which depends on ‖∈‖. However, a close examination shows that the definition of ‖∈‖ only depends on ‖∈‖ for elements of smaller rank, so ‖∈‖ and ‖=‖ are well defined functions from VB×VB to B.
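To see the recursion at work, the following sketch (our own encoding; the names EMPTY and HALF are invented) implements the three clauses with B the power set of a two-element set, representing each element of VB as a frozenset of (member, truth-value) pairs:

```python
STATES = frozenset({0, 1})
TOP, BOT = STATES, frozenset()

def implies(u, v):  # Boolean implication u ⇒ v, i.e. u' ∨ v
    return (STATES - u) | v

def subset(x, y):   # ||x ⊆ y|| = meet over t in Dom(x) of x(t) ⇒ ||t ∈ y||
    val = TOP
    for t, xt in x:
        val &= implies(xt, member(t, y))
    return val

def equal(x, y):    # ||x = y|| = ||x ⊆ y|| ∧ ||y ⊆ x||
    return subset(x, y) & subset(y, x)

def member(x, y):   # ||x ∈ y|| = join over t in Dom(y) of ||x = t|| ∧ y(t)
    val = BOT
    for t, yt in y:
        val |= equal(x, t) & yt
    return val

EMPTY = frozenset()                            # the name for the empty set
HALF = frozenset({(EMPTY, frozenset({0}))})    # contains EMPTY with value {0}

print(member(EMPTY, HALF))  # frozenset({0}): EMPTY is in HALF to degree {0}
print(equal(EMPTY, HALF))   # frozenset({1}): they agree exactly where HALF is empty
```

The recursion terminates because, as noted above, the value of ‖∈‖ only depends on ‖∈‖ for elements of smaller rank.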
It can be shown that the B-valued relations ‖∈‖ and ‖=‖ on VB make VB into a Boolean-valued model of set theory. Each sentence of first-order set theory with no free variables has a truth value in B; it must be shown that the axioms for equality and all the axioms of ZF set theory (written without free variables) have truth value 1 (the largest element of B). This proof is straightforward, but it is long because there are many different axioms that need to be checked.
== Relationship to forcing ==
Set theorists use a technique called forcing to obtain independence results and to construct models of set theory for other purposes. The method was originally developed by Paul Cohen but has been greatly extended since then. In one form, forcing "adds to the universe" a generic subset of a poset, the poset being designed to impose interesting properties on the newly added object. The wrinkle is that (for interesting posets) it can be proved that there simply is no such generic subset of the poset. There are three usual ways of dealing with this:
syntactic forcing: A forcing relation p ⊩ φ is defined between elements p of the poset and formulas φ of the forcing language. This relation is defined syntactically and has no semantics; that is, no model is ever produced. Rather, starting with the assumption that ZFC (or some other axiomatization of set theory) proves the independent statement, one shows that ZFC must also be able to prove a contradiction. However, the forcing is "over V"; that is, it is not necessary to start with a countable transitive model. See Kunen (1980) for an exposition of this method.
countable transitive models: One starts with a countable transitive model M of as much of set theory as is needed for the desired purpose, and that contains the poset. Then there do exist filters on the poset that are generic over M; that is, that meet all dense open subsets of the poset that happen also to be elements of M.
fictional generic objects: Commonly, set theorists will simply pretend that the poset has a subset that is generic over all of V. This generic object, in nontrivial cases, cannot be an element of V, and therefore "does not really exist". (Of course, it is a point of philosophical contention whether any sets "really exist", but that is outside the scope of the current discussion.) With a little practice this method is useful and reliable, but it can be philosophically unsatisfying.
=== Boolean-valued models and syntactic forcing ===
Boolean-valued models can be used to give semantics to syntactic forcing; the price paid is that the semantics is not 2-valued ("true or false"), but assigns truth values from some complete Boolean algebra. Given a forcing poset P, there is a corresponding complete Boolean algebra B, often obtained as the collection of regular open subsets of P, where the topology on P is defined by declaring all lower sets open (and all upper sets closed). (Other approaches to constructing B are discussed below.)
Now the order on B (after removing the zero element) can replace P for forcing purposes, and the forcing relation can be interpreted semantically by saying that, for p an element of B and φ a formula of the forcing language,
p ⊩ φ ⟺ p ≤ ||φ||
where ||φ|| is the truth value of φ in VB.
This approach succeeds in assigning a semantics to forcing over V without resorting to fictional generic objects. The disadvantages are that the semantics is not 2-valued, and that the combinatorics of B are often more complicated than those of the underlying poset P.
=== Boolean-valued models and generic objects over countable transitive models ===
One interpretation of forcing starts with a countable transitive model M of ZF set theory, a partially ordered set P, and a "generic" subset G of P, and constructs a new model of ZF set theory from these objects. (The conditions that the model be countable and transitive simplify some technical problems, but are not essential.) Cohen's construction can be carried out using Boolean-valued models as follows.
Construct a complete Boolean algebra B as the complete Boolean algebra "generated by" the poset P.
Construct an ultrafilter U on B (or equivalently a homomorphism from B to the Boolean algebra {true, false}) from the generic subset G of P.
Use the homomorphism from B to {true, false} to turn the Boolean-valued model MB of the section above into an ordinary model of ZF.
We now explain these steps in more detail.
For any poset P there is a complete Boolean algebra B and a map e from P to B+ (the non-zero elements of B) such that the image is dense, e(p)≤e(q) whenever p≤q, and e(p)e(q)=0 whenever p and q are incompatible. This Boolean algebra is unique up to isomorphism. It can be constructed as the algebra of regular open sets in the topological space of P (with underlying set P, and a base given by the sets Up of elements q with q≤p).
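The following Python sketch illustrates this construction on an invented three-element poset (not an example from the literature). Taking lower sets as the open sets, it computes the regular open sets U, those satisfying U = int(cl(U)), which form the complete Boolean algebra B:

```python
from itertools import combinations

P = {"p", "q", "r"}
le = {("p", "p"), ("q", "q"), ("r", "r"), ("q", "p"), ("r", "p")}  # q ≤ p, r ≤ p

def down(s):  # downward closure: everything below some element of s
    return {x for x in P if any((x, y) in le for y in s)}

def up(s):    # upward closure = topological closure (up-sets are closed)
    return {x for x in P if any((y, x) in le for y in s)}

def interior(s):  # largest lower set contained in s
    return {x for x in P if down({x}) <= s}

def regular_opens():
    subsets = [set(c) for r in range(len(P) + 1) for c in combinations(P, r)]
    return [s for s in subsets if interior(up(s)) == s]

for u in regular_opens():
    print(sorted(u))
# For this poset the regular opens are {}, {q}, {r}, and {p, q, r}:
# a four-element Boolean algebra.
```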
The map from the poset P to the complete Boolean algebra B is not injective in general. The map is injective if and only if P has the following property: if every r≤p is compatible with q, then p≤q.
The ultrafilter U on B is defined to be the set of elements b of B that are greater than some element of (the image of) G. Given an ultrafilter U on a Boolean algebra, we get a homomorphism to {true, false}
by mapping U to true and its complement to false. Conversely, given such a homomorphism, the inverse image of true is an ultrafilter, so ultrafilters are essentially the same as homomorphisms to {true, false}. (Algebraists might prefer to use maximal ideals instead of ultrafilters: the complement of an ultrafilter is a maximal ideal, and conversely the complement of a maximal ideal is an ultrafilter.)
If g is a homomorphism from a Boolean algebra B to a Boolean algebra C and MB is any
B-valued model of ZF (or of any other theory for that matter) we can turn MB into a C-valued model by applying the homomorphism g to the value of all formulas. In particular if C is {true, false} we get a {true, false}-valued model. This is almost the same as an ordinary model: in fact we get an ordinary model on the set of equivalence classes under || = || of a {true, false}-valued model. So we get an ordinary model of ZF set theory by starting from M, a Boolean algebra B, and an ultrafilter U on B.
(The model of ZF constructed like this is not transitive. In practice one applies the Mostowski collapsing theorem to turn this into a transitive model.)
We have seen that forcing can be done using Boolean-valued models, by constructing a Boolean algebra with ultrafilter from a poset with a generic subset. It is also possible to go back the other way: given a Boolean algebra B, we can form a poset P of all the nonzero elements of B, and a generic ultrafilter on B restricts to a generic set on P. So the techniques of forcing and Boolean-valued models are essentially equivalent.
== Notes ==
== References ==
Bell, J. L. (1985) Boolean-Valued Models and Independence Proofs in Set Theory, Oxford. ISBN 0-19-853241-5
Grishin, V.N. (2001) [1994], "Boolean-valued model", Encyclopedia of Mathematics, EMS Press
Jech, Thomas (2002). Set theory, third millennium edition (revised and expanded). Springer. ISBN 3-540-44085-2. OCLC 174929965.
Kunen, Kenneth (1980). Set Theory: An Introduction to Independence Proofs. North-Holland. ISBN 0-444-85401-0. OCLC 12808956.
Kusraev, A. G. and S. S. Kutateladze (1999). Boolean Valued Analysis. Kluwer Academic Publishers. ISBN 0-7923-5921-6. OCLC 41967176. Contains an account of Boolean-valued models and applications to Riesz spaces, Banach spaces and algebras.
Manin, Yu. I. (1977). A Course in Mathematical Logic. Springer. ISBN 0-387-90243-0. OCLC 2797938. Contains an account of forcing and Boolean-valued models written for mathematicians who are not set theorists.
Rosser, J. Barkley (1969). Simplified Independence Proofs, Boolean valued models of set theory. Academic Press. | Wikipedia/Boolean-valued_model |
In mathematics and abstract algebra, a relation algebra is a residuated Boolean algebra expanded with an involution called converse, a unary operation. The motivating example of a relation algebra is the algebra 2^(X²) of all binary relations on a set X, that is, subsets of the cartesian square X², with R•S interpreted as the usual composition of binary relations R and S, and with the converse of R as the converse relation.
Relation algebra emerged in the 19th-century work of Augustus De Morgan and Charles Peirce, which culminated in the algebraic logic of Ernst Schröder. The equational form of relation algebra treated here was developed by Alfred Tarski and his students, starting in the 1940s. Tarski and Givant (1987) applied relation algebra to a variable-free treatment of axiomatic set theory, with the implication that mathematics founded on set theory could itself be conducted without variables.
== Definition ==
A relation algebra (L, ∧, ∨, −, 0, 1, •, I, ˘) is an algebraic structure equipped with the Boolean operations of conjunction x∧y, disjunction x∨y, and negation x−, the Boolean constants 0 and 1, the relational operations of composition x•y and converse x˘, and the relational constant I, such that these operations and constants satisfy certain equations constituting an axiomatization of a calculus of relations. Roughly, a relation algebra is to a system of binary relations on a set containing the empty (0), universal (1), and identity (I) relations and closed under these five operations as a group is to a system of permutations of a set containing the identity permutation and closed under composition and inverse. However, the first-order theory of relation algebras is not complete for such systems of binary relations.
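For orientation, here is a short Python sketch of the motivating algebra of binary relations (our illustration; the set X, the relations, and the function names are arbitrary choices). It implements composition, converse, and the identity relation, and spot-checks two of the axioms listed below:

```python
X = {0, 1, 2}

def compose(R, S):  # R • S = {(x, z) : there is y with (x, y) in R, (y, z) in S}
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):    # R˘ = {(y, x) : (x, y) in R}
    return {(y, x) for (x, y) in R}

I = {(x, x) for x in X}  # the identity relation
R = {(0, 1), (1, 2)}
S = {(1, 1), (2, 0)}

print(compose(R, S))                                                  # {(0, 1), (1, 0)}
print(compose(R, I) == R)                                             # axiom B5: True
print(converse(compose(R, S)) == compose(converse(S), converse(R)))   # axiom B7: True
```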
Following Jónsson and Tsinakis (1993) it is convenient to define additional operations x ◁ y = x • y˘, and, dually, x ▷ y = x˘ • y. Jónsson and Tsinakis showed that I ◁ x = x ▷ I, and that both were equal to x˘. Hence a relation algebra can equally well be defined as an algebraic structure (L, ∧, ∨, −, 0, 1, •, I, ◁, ▷). The advantage of this signature over the usual one is that a relation algebra can then be defined in full simply as a residuated Boolean algebra for which I ◁ x is an involution, that is, I ◁ (I ◁ x) = x. The latter condition can be thought of as the relational counterpart of the equation 1/(1/x) = x for ordinary arithmetic reciprocal, and some authors use reciprocal as a synonym for converse.
Since residuated Boolean algebras are axiomatized with finitely many identities, so are relation algebras. Hence the latter form a variety, the variety RA of relation algebras. Expanding the above definition as equations yields the following finite axiomatization.
=== Axioms ===
The axioms B1-B10 below are adapted from Givant (2006: 283), and were first set out by Tarski in 1948.
L is a Boolean algebra under binary disjunction, ∨, and unary complementation ()−:
B1: A ∨ B = B ∨ A
B2: A ∨ (B ∨ C) = (A ∨ B) ∨ C
B3: (A− ∨ B)− ∨ (A− ∨ B−)− = A
This axiomatization of Boolean algebra is due to Huntington (1933). Note that the meet of the implied Boolean algebra is not the • operator (even though it distributes over ∨ like a meet does), nor is the 1 of the Boolean algebra the I constant.
L is a monoid under binary composition (•) and nullary identity I:
B4: A • (B • C) = (A • B) • C
B5: A • I = A
Unary converse ()˘ is an involution with respect to composition:
B6: A˘˘ = A
B7: (A • B)˘ = B˘ • A˘
Axiom B6 defines conversion as an involution, whereas B7 expresses the antidistributive property of conversion relative to composition.
Converse and composition distribute over disjunction:
B8: (A ∨ B)˘ = A˘ ∨ B˘
B9: (A ∨ B) • C = (A • C) ∨ (B • C)
B10 is Tarski's equational form of the fact, discovered by Augustus De Morgan, that A • B ≤ C− ↔ A˘ • C ≤ B− ↔ C • B˘ ≤ A−.
B10: (A˘ • (A • B)−) ∨ B− = B−
These axioms are ZFC theorems; for the purely Boolean B1–B3, this fact is trivial. For each of the remaining axioms, the corresponding theorem in Chapter 3 of Suppes (1960), an exposition of ZFC, is: B4, theorem 27; B5, 45; B6, 14; B7, 26; B8, 16; B9, 23.
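Since the axioms hold in every algebra of binary relations, they can also be verified mechanically in a small instance. The sketch below (our own; it exhaustively checks B4–B10 over the 16 relations on a two-element set) is an illustration, not a proof:

```python
from itertools import product, combinations

X = {0, 1}
pairs = [(a, b) for a in X for b in X]
relations = [frozenset(c) for r in range(5) for c in combinations(pairs, r)]
I = frozenset((x, x) for x in X)
one = frozenset(pairs)  # the universal relation, the Boolean 1

def compose(R, S):
    return frozenset((x, z) for (x, y1) in R for (y2, z) in S if y1 == y2)

def conv(R):
    return frozenset((y, x) for (x, y) in R)

def neg(R):
    return one - R

for A, B_, C in product(relations, repeat=3):
    assert compose(A, compose(B_, C)) == compose(compose(A, B_), C)   # B4
    assert compose(A, I) == A                                          # B5
    assert conv(conv(A)) == A                                          # B6
    assert conv(compose(A, B_)) == compose(conv(B_), conv(A))          # B7
    assert conv(A | B_) == conv(A) | conv(B_)                          # B8
    assert compose(A | B_, C) == compose(A, C) | compose(B_, C)        # B9
    assert compose(conv(A), neg(compose(A, B_))) | neg(B_) == neg(B_)  # B10
print("B4–B10 hold for all", len(relations), "relations on a 2-element set")
```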
== Expressing properties of binary relations in RA ==
The following table shows how many of the usual properties of binary relations can be expressed as succinct RA equalities or inequalities. Below, an inequality of the form A ≤ B is shorthand for the Boolean equation A∨B = B.
The most complete set of results of this nature is Chapter C of Carnap (1958), where the notation is rather distant from that of this entry. Chapter 3.2 of Suppes (1960) contains fewer results, presented as ZFC theorems and using a notation that more resembles that of this entry. Neither Carnap nor Suppes formulated their results using the RA of this entry, or in an equational manner.
== Expressive power ==
The metamathematics of RA are discussed at length in Tarski and Givant (1987), and more briefly in Givant (2006).
RA consists entirely of equations manipulated using nothing more than uniform replacement and the substitution of equals for equals. Both rules are wholly familiar from school mathematics and from abstract algebra generally. Hence RA proofs are carried out in a manner familiar to all mathematicians, unlike the case in mathematical logic generally.
RA can express any (and up to logical equivalence, exactly the) first-order logic (FOL) formulas containing no more than three variables. (A given variable can be quantified multiple times and hence quantifiers can be nested arbitrarily deeply by "reusing" variables.) Surprisingly, this fragment of FOL suffices to express Peano arithmetic and almost all axiomatic set theories ever proposed. Hence RA is, in effect, a way of algebraizing nearly all mathematics, while dispensing with FOL and its connectives, quantifiers, turnstiles, and modus ponens. Because RA can express Peano arithmetic and set theory, Gödel's incompleteness theorems apply to it; RA is incomplete, incompletable, and undecidable. (N.B. The Boolean algebra fragment of RA is complete and decidable.)
The representable relation algebras, forming the class RRA, are those relation algebras isomorphic to some relation algebra consisting of binary relations on some set, and closed under the intended interpretation of the RA operations. It is easily shown, e.g. using the method of pseudoelementary classes, that RRA is a quasivariety, that is, axiomatizable by a universal Horn theory. In 1950, Roger Lyndon proved the existence of equations holding in RRA that did not hold in RA. Hence the variety generated by RRA is a proper subvariety of the variety RA. In 1955, Alfred Tarski showed that RRA is itself a variety. In 1964, Donald Monk showed that RRA has no finite axiomatization, unlike RA which is finitely axiomatized by definition.
=== Q-relation algebras ===
An RA is a Q-relation algebra (QRA) if, in addition to B1-B10, there exist some A and B such that (Tarski and Givant 1987: §8.4):
Q0: A˘ • A ≤ I
Q1: B˘ • B ≤ I
Q2: A˘ • B = 1
Essentially these axioms imply that the universe has a (non-surjective) pairing relation whose projections are A and B. It is a theorem that every QRA is an RRA (proof by Maddux, see Tarski & Givant 1987: 8.4(iii)).
Every QRA is representable (Tarski and Givant 1987). That not every relation algebra is representable is a fundamental way RA differs from QRA and Boolean algebras, which, by Stone's representation theorem for Boolean algebras, are always representable as sets of subsets of some set, closed under union, intersection, and complement.
== Examples ==
Any Boolean algebra can be turned into a RA by interpreting conjunction as composition (the monoid multiplication •), i.e. x • y is defined as x∧y. This interpretation requires that converse interpret identity (ў = y), and that both residuals y\x and x/y interpret the conditional y → x (i.e., ¬y ∨ x).
The motivating example of a relation algebra depends on the definition of a binary relation R on a set X as any subset R ⊆ X², where X² is the cartesian square of X. The power set 2^(X²) consisting of all binary relations on X is a Boolean algebra. While 2^(X²) can be made a relation algebra by taking R • S = R ∧ S, as per example (1) above, the standard interpretation of • is instead x(R • S)z = ∃y : xRy ∧ ySz. That is, the ordered pair (x, z) belongs to the relation R • S just when there exists y in X such that (x, y) ∈ R and (y, z) ∈ S. This interpretation uniquely determines R\S as consisting of all pairs (y, z) such that for all x ∈ X, if xRy then xSz. Dually, S/R consists of all pairs (x, y) such that for all z in X, if yRz then xSz. The translation ў = ¬(y\¬I) then establishes the converse R˘ of R as consisting of all pairs (y, x) such that (x, y) ∈ R.
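The residuals and the converse translation just described transcribe directly into Python (an illustrative sketch; the set X, the relation R, and the function names are our choices):

```python
X = {0, 1, 2}
one = {(a, b) for a in X for b in X}
I = {(x, x) for x in X}

def right_residual(R, S):  # R \ S: pairs (y, z) with xRy implying xSz for all x
    return {(y, z) for y in X for z in X
            if all((x, z) in S for x in X if (x, y) in R)}

def left_residual(S, R):   # S / R: pairs (x, y) with yRz implying xSz for all z
    return {(x, y) for x in X for y in X
            if all((x, z) in S for z in X if (y, z) in R)}

def neg(R):
    return one - R

def converse_via_residual(R):  # the translation ў = ¬(y \ ¬I)
    return neg(right_residual(R, neg(I)))

R = {(0, 1), (1, 2), (2, 2)}
print(converse_via_residual(R) == {(y, x) for (x, y) in R})  # True
```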
An important generalization of the previous example is the power set 2^E, where E ⊆ X² is any equivalence relation on the set X. This is a generalization because X² is itself an equivalence relation, namely the complete relation consisting of all pairs. While 2^E is not a subalgebra of 2^(X²) when E ≠ X² (since in that case it does not contain the relation X², the top element 1 being E instead of X²), it is nevertheless turned into a relation algebra using the same definitions of the operations. Its importance resides in the definition of a representable relation algebra as any relation algebra isomorphic to a subalgebra of the relation algebra 2^E for some equivalence relation E on some set. The previous section says more about the relevant metamathematics.
Let G be a group. Then the power set 2^G is a relation algebra with the obvious Boolean algebra operations, composition given by the product of group subsets, the converse by the inverse subset (A⁻¹ = {a⁻¹ : a ∈ A}), and the identity by the singleton subset {e}. There is a relation algebra homomorphism embedding 2^G in 2^(G×G) which sends each subset A ⊂ G to the relation R_A = {(g, h) ∈ G × G : h ∈ Ag}. The image of this homomorphism is the set of all right-invariant relations on G.
If group sum or product interprets composition, group inverse interprets converse, group identity interprets I, and if R is a one-to-one correspondence, so that R˘ • R = R • R˘ = I, then L is a group as well as a monoid. B4-B7 become well-known theorems of group theory, so that RA becomes a proper extension of group theory as well as of Boolean algebra.
== Historical remarks ==
De Morgan founded RA in 1860, but C. S. Peirce took it much further and became fascinated with its philosophical power. The work of De Morgan and Peirce came to be known mainly in the extended and definitive form Ernst Schröder gave it in Vol. 3 of his Vorlesungen (1890–1905). Principia Mathematica drew strongly on Schröder's RA, but acknowledged him only as the inventor of the notation. In 1912, Alwin Korselt proved that a particular formula in which the quantifiers were nested four deep had no RA equivalent. This fact led to a loss of interest in RA until Tarski (1941) began writing about it. His students have continued to develop RA down to the present day. Tarski returned to RA in the 1970s with the help of Steven Givant; this collaboration resulted in the monograph by Tarski and Givant (1987), the definitive reference for this subject. For more on the history of RA, see Maddux (1991, 2006).
== Software ==
RelMICS / Relational Methods in Computer Science maintained by Wolfram Kahl
Carsten Sinz: ARA / An Automatic Theorem Prover for Relation Algebras
Stef Joosten, Relation Algebra as programming language using the Ampersand compiler, Journal of Logical and Algebraic Methods in Programming, Volume 100, April 2018, Pages 113–129. (see also https://ampersandtarski.github.io/)
== See also ==
== Footnotes ==
== References ==
Carnap, Rudolf (1958). Introduction to Symbolic Logic and its Applications. Dover Publications.
Givant, Steven (2006). "The calculus of relations as a foundation for mathematics". Journal of Automated Reasoning. 37 (4): 277–322. doi:10.1007/s10817-006-9062-x. S2CID 26324546.
Halmos, P. R. (1960). Naive Set Theory. Van Nostrand.
Henkin, Leon; Tarski, Alfred; Monk, J. D. (1971). Cylindric Algebras, Part 1. North Holland.
Henkin, Leon; Tarski, Alfred; Monk, J. D. (1985). Cylindric Algebras, Part 2. North Holland.
Hirsch, R.; Hodkinson, I. (2002). Relation Algebra by Games. Studies in Logic and the Foundations of Mathematics. Vol. 147. Elsevier Science.
Jónsson, Bjarni; Tsinakis, Constantine (1993). "Relation algebras as residuated Boolean algebras". Algebra Universalis. 30 (4): 469–78. doi:10.1007/BF01195378. S2CID 120642402.
Maddux, Roger (1991). "The Origin of Relation Algebras in the Development and Axiomatization of the Calculus of Relations" (PDF). Studia Logica. 50 (3–4): 421–455. CiteSeerX 10.1.1.146.5668. doi:10.1007/BF00370681. S2CID 12165812.
Maddux, Roger (2006). Relation Algebras. Studies in Logic and the Foundations of Mathematics. Vol. 150. Elsevier Science. ISBN 9780444520135.
Schein, Boris M. (1970) "Relation algebras and function semigroups", Semigroup Forum 1: 1–62
Schmidt, Gunther (2010). Relational Mathematics. Cambridge University Press.
Suppes, Patrick (1972) [1960]. "Chapter 3". Axiomatic Set Theory (Dover reprint ed.). Van Nostrand.
Tarski, Alfred (1941). "On the calculus of relations". Journal of Symbolic Logic. 6 (3): 73–89. doi:10.2307/2268577. JSTOR 2268577. S2CID 11899579.
Tarski, Alfred; Givant, Steven (1987). A Formalization of Set Theory without Variables. Providence RI: American Mathematical Society. ISBN 9780821810415.
== External links ==
Yohji AKAMA, Yasuo Kawahara, and Hitoshi Furusawa, "Constructing Allegory from Relation Algebra and Representation Theorems."
Richard Bird, Oege de Moor, Paul Hoogendijk, "Generic Programming with Relations and Functors."
R.P. de Freitas and Viana, "A Completeness Result for Relation Algebra with Binders."
Peter Jipsen:
Relation algebras
"Foundations of Relations and Kleene Algebra."
"Computer Aided Investigations of Relation Algebras."
"A Gentzen System And Decidability For Residuated Lattices."
Vaughan Pratt:
"Origins of the Calculus of Binary Relations." A historical treatment.
"The Second Calculus of Binary Relations."
Priss, Uta:
"An FCA interpretation of Relation Algebra."
"Relation Algebra and FCA" Links to publications and software
Kahl, Wolfram and Gunther Schmidt: Exploring (Finite) Relation Algebras Using Tools Written in Haskell. and Relation Algebra Tools with Haskell from McMaster University. | Wikipedia/Relation_algebra |
Metaphysics is the branch of philosophy that examines the basic structure of reality. It is traditionally seen as the study of mind-independent features of the world, but some theorists view it as an inquiry into the conceptual framework of human understanding. Some philosophers, including Aristotle, designate metaphysics as first philosophy to suggest that it is more fundamental than other forms of philosophical inquiry.
Metaphysics encompasses a wide range of general and abstract topics. It investigates the nature of existence, the features all entities have in common, and their division into categories of being. An influential division is between particulars and universals. Particulars are individual unique entities, like a specific apple. Universals are general features that different particulars have in common, like the color red. Modal metaphysics examines what it means for something to be possible or necessary. Metaphysicians also explore the concepts of space, time, and change, and their connection to causality and the laws of nature. Other topics include how mind and matter are related, whether everything in the world is predetermined, and whether there is free will.
Metaphysicians use various methods to conduct their inquiry. Traditionally, they rely on rational intuitions and abstract reasoning but have recently included empirical approaches associated with scientific theories. Due to the abstract nature of its topic, metaphysics has received criticisms questioning the reliability of its methods and the meaningfulness of its theories. Metaphysics is relevant to many fields of inquiry that often implicitly rely on metaphysical concepts and assumptions.
The roots of metaphysics lie in antiquity with speculations about the nature and origin of the universe, like those found in the Upanishads in ancient India, Daoism in ancient China, and pre-Socratic philosophy in ancient Greece. During the subsequent medieval period in the West, discussions about the nature of universals were influenced by the philosophies of Plato and Aristotle. The modern period saw the emergence of various comprehensive systems of metaphysics, many of which embraced idealism. In the 20th century, traditional metaphysics in general and idealism in particular faced various criticisms, which prompted new approaches to metaphysical inquiry.
== Definition ==
Metaphysics is the study of the most general features of reality, including existence, objects and their properties, possibility and necessity, space and time, change, causation, and the relation between matter and mind. It is one of the oldest branches of philosophy.
The precise nature of metaphysics is disputed and its characterization has changed in the course of history. Some approaches see metaphysics as a unified field and give a wide-sweeping definition by understanding it as the study of "fundamental questions about the nature of reality" or as an inquiry into the essences of things. Another approach doubts that the different areas of metaphysics share a set of underlying features and provides instead a fine-grained characterization by listing all the main topics investigated by metaphysicians. Some definitions are descriptive by providing an account of what metaphysicians do while others are normative and prescribe what metaphysicians ought to do.
Two historically influential definitions in ancient and medieval philosophy understand metaphysics as the science of the first causes and as the study of being qua being, that is, the topic of what all beings have in common and to what fundamental categories they belong. In the modern period, the scope of metaphysics expanded to include topics such as the distinction between mind and body and free will. Some philosophers follow Aristotle in describing metaphysics as "first philosophy", suggesting that it is the most basic inquiry upon which all other branches of philosophy depend in some way.
Metaphysics is traditionally understood as a study of mind-independent features of reality. Starting with Immanuel Kant's critical philosophy, an alternative conception gained prominence that focuses on conceptual schemes rather than external reality. Kant distinguishes transcendent metaphysics, which aims to describe the objective features of reality beyond sense experience, from the critical perspective on metaphysics, which outlines the aspects and principles underlying all human thought and experience. Philosopher P. F. Strawson further explored the role of conceptual schemes, contrasting descriptive metaphysics, which articulates conceptual schemes commonly used to understand the world, with revisionary metaphysics, which aims to produce better conceptual schemes.
Metaphysics differs from the individual sciences by studying the most general and abstract aspects of reality. The individual sciences, by contrast, examine more specific and concrete features and restrict themselves to certain classes of entities, such as the focus on physical things in physics, living entities in biology, and cultures in anthropology. It is disputed to what extent this contrast is a strict dichotomy rather than a gradual continuum.
=== Etymology ===
The word metaphysics has its origin in the ancient Greek words metá (μετά, meaning 'after', 'above', and 'beyond') and phusiká (φυσικά), as a short form of ta metá ta phusiká, meaning 'what comes after the physics'. This is often interpreted to mean that metaphysics discusses topics that, due to their generality and comprehensiveness, lie beyond the realm of physics and its focus on empirical observation. Metaphysics may have received its name by a historical accident when Aristotle's book on this subject was published. Aristotle did not use the term metaphysics but his editor (likely Andronicus of Rhodes) may have coined it for its title to indicate that this book should be studied after Aristotle's book published on physics: literally 'after physics'. The term entered the English language through the Latin word metaphysica.
=== Branches ===
The nature of metaphysics can also be characterized in relation to its main branches. An influential division from early modern philosophy distinguishes between general and special or specific metaphysics. General metaphysics, also called ontology, takes the widest perspective and studies the most fundamental aspects of being. It investigates the features that all entities share and how entities can be divided into different categories. Categories are the most general kinds, such as substance, property, relation, and fact. Ontologists research which categories there are, how they depend on one another, and how they form a system of categories that provides a comprehensive classification of all entities.
Special metaphysics considers being from more narrow perspectives and is divided into subdisciplines based on the perspective they take. Metaphysical cosmology examines changeable things and investigates how they are connected to form a world as a totality extending through space and time. Rational psychology focuses on metaphysical foundations and problems concerning the mind, such as its relation to matter and the freedom of the will. Natural theology studies the divine and its role as the first cause. The scope of special metaphysics overlaps with other philosophical disciplines, making it unclear whether a topic belongs to it or to areas like philosophy of mind and theology.
Starting in the second half of the 20th century, applied metaphysics was conceived as the area of applied philosophy examining the implications and uses of metaphysics, both within philosophy and other fields of inquiry. In areas like ethics and philosophy of religion, it addresses topics like the ontological foundations of moral claims and religious doctrines. Beyond philosophy, its applications include the use of ontologies in artificial intelligence, economics, and sociology to classify entities. In psychiatry and medicine, it examines the metaphysical status of diseases.
Meta-metaphysics is the metatheory of metaphysics and investigates the nature and methods of metaphysics. It examines how metaphysics differs from other philosophical and scientific disciplines and assesses its relevance to them. Even though discussions of these topics have a long history in metaphysics, meta-metaphysics has only recently developed into a systematic field of inquiry.
== Topics ==
=== Existence and categories of being ===
Metaphysicians often regard existence or being as one of the most basic and general concepts. To exist means to be part of reality, distinguishing real entities from imaginary ones. According to a traditionally influential view, existence is a property of properties: if an entity exists then its properties are instantiated. A different position states that existence is a property of individuals, meaning that it is similar to other properties, such as shape or size. It is controversial whether all entities have this property. According to philosopher Alexius Meinong, there are nonexistent objects, including merely possible objects like Santa Claus and Pegasus. A related question is whether existence is the same for all entities or whether there are different modes or degrees of existence. For instance, Plato held that Platonic forms, which are perfect and immutable ideas, have a higher degree of existence than matter, which can only imperfectly reflect Platonic forms.
Another key concern in metaphysics is the division of entities into distinct groups based on underlying features they share. Theories of categories provide a system of the most fundamental kinds or the highest genera of being by establishing a comprehensive inventory of everything. One of the earliest theories of categories was proposed by Aristotle, who outlined a system of 10 categories. He argued that substances (e.g., man and horse), are the most important category since all other categories like quantity (e.g., four), quality (e.g., white), and place (e.g., in Athens) are said of substances and depend on them. Kant understood categories as fundamental principles underlying human understanding and developed a system of 12 categories, divided into the four classes: quantity, quality, relation, and modality. More recent theories of categories were proposed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. Many philosophers rely on the contrast between concrete and abstract objects. According to a common view, concrete objects, like rocks, trees, and human beings, exist in space and time, undergo changes, and impact each other as cause and effect. They contrast with abstract objects, like numbers and sets, which do not exist in space and time, are immutable, and do not engage in causal relations.
=== Particulars ===
Particulars are individual entities and include both concrete objects, like Aristotle, the Eiffel Tower, or a specific apple, and abstract objects, like the number 2 or a specific set in mathematics. They are unique, non-repeatable entities and contrast with universals, like the color red, which can at the same time exist in several places and characterize several particulars. A widely held view is that particulars instantiate universals but are not themselves instantiated by something else, meaning that they exist in themselves while universals exist in something else. Substratum theory, associated with John Locke's philosophy, analyzes each particular as a substratum, also called bare particular, together with various properties. The substratum confers individuality to the particular while the properties express its qualitative features or what it is like. This approach is rejected by bundle theorists. Inspired by David Hume's philosophy, they state that particulars are only bundles of properties without an underlying substratum. Some bundle theorists include in the bundle an individual essence, called haecceity following scholastic terminology, to ensure that each bundle is unique. Another proposal for concrete particulars is that they are individuated by their space-time location.
Concrete particulars encountered in everyday life, like rocks, tables, and organisms, are complex entities composed of various parts. For example, a table consists of a tabletop and legs, each of which is itself made up of countless particles. The relation between parts and wholes is studied by mereology. The problem of the many is a philosophical question about the conditions under which several individual things compose a larger whole. For example, a cloud comprises many droplets without a clear boundary, raising the question of which droplets form part of the cloud. According to mereological universalists, every collection of entities forms a whole. This means that what seems to be a single cloud is an overlay of countless clouds, one for each cloud-like collection of water droplets. Mereological moderatists hold that certain conditions must be met for a group of entities to compose a whole, for example, that the entities touch one another. Mereological nihilists reject the idea of wholes altogether, claiming that there are no clouds or tables but only particles that are arranged cloud-wise or table-wise. A related mereological problem is whether there are simple entities that have no parts, as atomists claim, or whether everything can be endlessly subdivided into smaller parts, as continuum theorists contend.
=== Universals ===
Universals are general entities, encompassing both properties and relations, that express what particulars are like and how they resemble one another. They are repeatable, meaning that they are not limited to a unique existent but can be instantiated by different particulars at the same time. For example, the particulars Nelson Mandela and Mahatma Gandhi instantiate the universal humanity, similar to how a strawberry and a ruby instantiate the universal red.
A topic discussed since ancient philosophy, the problem of universals consists in the challenge of characterizing the ontological status of universals. Realists argue that universals are real, mind-independent entities that exist in addition to particulars. According to Platonic realists, universals exist independently of particulars, which implies that the universal red would continue to exist even if there were no red things. A more moderate form of realism, inspired by Aristotle, states that universals depend on particulars, meaning that they are only real if they are instantiated. Nominalists reject the idea that universals exist in either form. For them, the world is composed exclusively of particulars. Conceptualists offer an intermediate position, stating that universals exist, but only as concepts in the mind used to order experience by classifying entities.
Natural and social kinds are often understood as special types of universals. Entities belonging to the same natural kind share certain fundamental features characteristic of the structure of the natural world. In this regard, natural kinds are not an artificially constructed classification but are discovered, usually by the natural sciences, and include kinds like electrons, H2O, and tigers. Scientific realists and anti-realists disagree about whether natural kinds exist. Social kinds, like money and baseball, are studied by social metaphysics and characterized as useful social constructions that, while not purely fictional, do not reflect the fundamental structure of mind-independent reality.
=== Possibility and necessity ===
The concepts of possibility and necessity convey what can or must be the case, expressed in modal statements like "it is possible to find a cure for cancer" and "it is necessary that two plus two equals four". Modal metaphysics studies metaphysical problems surrounding possibility and necessity, for instance, why some modal statements are true while others are false. Some metaphysicians hold that modality is a fundamental aspect of reality, meaning that besides facts about what is the case, there are additional facts about what could or must be the case. A different view argues that modal truths are not about an independent aspect of reality but can be reduced to non-modal characteristics, for example, to facts about what properties or linguistic descriptions are compatible with each other or to fictional statements.
Borrowing a term from German philosopher Gottfried Wilhelm Leibniz's theodicy, many metaphysicians use the concept of possible worlds to analyze the meaning and ontological ramifications of modal statements. A possible world is a complete and consistent way the totality of things could have been. For example, the dinosaurs were wiped out in the actual world but there are possible worlds in which they are still alive. According to possible world semantics, a statement is possibly true if it is true in at least one possible world, whereas it is necessarily true if it is true in all possible worlds. Modal realists argue that possible worlds exist as concrete entities in the same sense as the actual world, with the main difference being that the actual world is the world we live in while other possible worlds are inhabited by counterparts. This view is controversial and various alternatives have been suggested, for example, that possible worlds only exist as abstract objects or are similar to stories told in works of fiction.
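The evaluation rule of possible world semantics lends itself to a compact executable statement. The following sketch is purely illustrative (the worlds and the proposition are invented for the example): it treats possibility as existential and necessity as universal quantification over a finite set of worlds.

```python
# Each world assigns a truth value to every proposition.
worlds = [
    {"dinosaurs_extinct": True},    # the actual world
    {"dinosaurs_extinct": False},   # a possible world where they survived
]

def possibly(p):
    """True if p holds in at least one possible world."""
    return any(w[p] for w in worlds)

def necessarily(p):
    """True if p holds in every possible world."""
    return all(w[p] for w in worlds)

assert possibly("dinosaurs_extinct")
assert not necessarily("dinosaurs_extinct")
```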
=== Space, time, and change ===
Space and time are dimensions that entities occupy. Spacetime realists state that space and time are fundamental aspects of reality and exist independently of the human mind. Spacetime idealists, by contrast, hold that space and time are constructs of the human mind, created to organize and make sense of reality. Spacetime absolutism or substantivalism understands spacetime as a distinct object, with some metaphysicians conceptualizing it as a container that holds all other entities within it. Spacetime relationism sees spacetime not as an object but as a network of relations between objects, such as the spatial relation of being next to and the temporal relation of coming before.
In the metaphysics of time, an important contrast is between the A-series and the B-series. According to the A-series theory, the flow of time is real, meaning that events are categorized into the past, present, and future. The present continually moves forward in time and events that are in the present now will eventually change their status and lie in the past. From the perspective of the B-series theory, time is static, and events are ordered by the temporal relations earlier-than and later-than without any essential difference between past, present, and future. Eternalism holds that past, present, and future are equally real, whereas presentism asserts that only entities in the present exist.
Material objects persist through time and change in the process, like a tree that grows or loses leaves. The main ways of conceptualizing persistence through time are endurantism and perdurantism. According to endurantism, material objects are three-dimensional entities that are wholly present at each moment. As they change, they gain or lose properties but otherwise remain the same. Perdurantists see material objects as four-dimensional entities that extend through time and are made up of different temporal parts. At each moment, only one part of the object is present, not the object as a whole. Change means that an earlier part is qualitatively different from a later part. For example, when a banana ripens, there is an unripe part followed by a ripe part.
=== Causality ===
Causality is the relation between cause and effect whereby one entity produces or alters another entity. For instance, if a person bumps a glass and spills its contents then the bump is the cause and the spill is the effect. Besides the single-case causation between particulars in this example, there is also general-case causation expressed in statements such as "smoking causes cancer". The term agent causation is used when people and their actions cause something. Causation is usually interpreted deterministically, meaning that a cause always brings about its effect. However, some philosophers such as G. E. M. Anscombe have provided counterexamples to this idea. Such counterexamples have inspired the development of probabilistic theories, which claim that the cause merely increases the probability that the effect occurs. This view can explain that smoking causes cancer even though this does not happen in every single case.
The regularity theory of causation, inspired by David Hume's philosophy, states that causation is nothing but a constant conjunction in which the mind apprehends that one phenomenon, like putting one's hand in a fire, is always followed by another phenomenon, like a feeling of pain. According to nomic regularity theories, regularities manifest as laws of nature studied by science. Counterfactual theories focus not on regularities but on how effects depend on their causes. They state that effects owe their existence to their causes and would not occur without them. According to primitivism, causation is a basic concept that cannot be analyzed in terms of non-causal concepts, such as regularities or dependence relations. One form of primitivism identifies causal powers inherent in entities as the underlying mechanism. Eliminativists reject the above theories by holding that there is no causation.
=== Mind and free will ===
Mind encompasses phenomena like thinking, perceiving, feeling, and desiring as well as the underlying faculties responsible for these phenomena. The mind–body problem is the challenge of clarifying the relation between physical and mental phenomena. According to Cartesian dualism, minds and bodies are distinct substances. They causally interact with each other in various ways but can, at least in principle, exist on their own. This view is rejected by monists, who argue that reality is made up of only one kind of thing. According to metaphysical idealism, everything is mental or dependent on the mind, including physical objects, which may be understood as ideas or perceptions of conscious minds. Materialists, by contrast, state that all reality is at its core material. Some deny that mind exists, but the more common approach is to explain mind in terms of certain aspects of matter, such as brain states, behavioral dispositions, or functional roles. Neutral monists argue that reality is fundamentally neither material nor mental and suggest that matter and mind are both derivative phenomena. A key aspect of the mind–body problem is the hard problem of consciousness or how to explain that physical systems like brains can produce phenomenal consciousness.
The status of free will as the ability of a person to choose their actions is a central aspect of the mind–body problem. Metaphysicians are interested in the relation between free will and causal determinism—the view that everything in the universe, including human behavior, is determined by preceding events and laws of nature. It is controversial whether causal determinism is true, and, if so, whether this would imply that there is no free will. According to incompatibilism, free will cannot exist in a deterministic world since there is no true choice or control if everything is determined. Hard determinists infer from this that there is no free will, whereas libertarians conclude that determinism must be false. Compatibilists offer a third perspective, arguing that determinism and free will do not exclude each other, for instance, because a person can still act in tune with their motivation and choices even if they are determined by other forces. Free will plays a key role in ethics regarding the moral responsibility people have for what they do.
=== Others ===
Identity is a relation that every entity has to itself as a form of sameness. It refers to numerical identity when the same entity is involved, as in the statement "the morning star is the evening star" (both are the planet Venus). In a slightly different sense, it encompasses qualitative identity, also called exact similarity and indiscernibility, which occurs when two distinct entities are exactly alike, such as perfect identical twins. The principle of the indiscernibility of identicals is widely accepted and holds that numerically identical entities exactly resemble one another. The converse principle, known as the identity of indiscernibles or Leibniz's Law, is more controversial and states that two entities are numerically identical if they exactly resemble one another. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time, whereas diachronic identity is about the same entity at different times, as in statements like "the table I bought last year is the same as the table in my dining room now". Personal identity is a related topic in metaphysics that uses the term identity in a slightly different sense and concerns questions like what personhood is or what makes someone a person.
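Stated schematically in second-order notation (a standard formulation, given here only to make the two principles and their converse relationship explicit):

```latex
\forall x\,\forall y\,\bigl(x = y \rightarrow \forall F\,(Fx \leftrightarrow Fy)\bigr)
\quad\text{(indiscernibility of identicals)}
\\[4pt]
\forall x\,\forall y\,\bigl(\forall F\,(Fx \leftrightarrow Fy) \rightarrow x = y\bigr)
\quad\text{(identity of indiscernibles)}
```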
Various contemporary metaphysicians rely on the concepts of truth, truth-bearer, and truthmaker to conduct their inquiry. Truth is a property of being in accord with reality. Truth-bearers are entities that can be true or false, such as linguistic statements and mental representations. A truthmaker of a statement is the entity whose existence makes the statement true. For example, the fact that a tomato exists and that it is red acts as a truthmaker for the statement "a tomato is red". Based on this observation, it is possible to pursue metaphysical research by asking what the truthmakers of statements are, with different areas of metaphysics being dedicated to different types of statements. According to this view, modal metaphysics asks what makes statements about what is possible and necessary true while the metaphysics of time is interested in the truthmakers of temporal statements about the past, present, and future. A closely related topic concerns the nature of truth. Theories of truth aim to determine this nature and include correspondence, coherence, pragmatic, semantic, and deflationary theories.
== Methodology ==
Metaphysicians employ a variety of methods to develop metaphysical theories and formulate arguments for and against them. Traditionally, a priori methods have been the dominant approach. They rely on rational intuition and abstract reasoning from general principles rather than sensory experience. A posteriori approaches, by contrast, ground metaphysical theories in empirical observations and scientific theories. Some metaphysicians incorporate perspectives from fields such as physics, psychology, linguistics, and history into their inquiry. The two approaches are not mutually exclusive: it is possible to combine elements from both. The method a metaphysician chooses often depends on their understanding of the nature of metaphysics, for example, whether they see it as an inquiry into the mind-independent structure of reality, as metaphysical realists claim, or the principles underlying thought and experience, as some metaphysical anti-realists contend.
A priori approaches often rely on intuitions—non-inferential impressions about the correctness of specific claims or general principles. For example, arguments for the A-theory of time, which states that time flows from the past through the present and into the future, often rely on pre-theoretical intuitions associated with the sense of the passage of time. Some approaches use intuitions to establish a small set of self-evident fundamental principles, known as axioms, and employ deductive reasoning to build complex metaphysical systems by drawing conclusions from these axioms. Intuition-based approaches can be combined with thought experiments, which help evoke and clarify intuitions by linking them to imagined situations. They use counterfactual thinking to assess the possible consequences of these situations. For example, to explore the relation between matter and consciousness, some theorists compare humans to philosophical zombies—hypothetical creatures identical to humans but without conscious experience. A related method relies on commonly accepted beliefs instead of intuitions to formulate arguments and theories. The common-sense approach is often used to criticize metaphysical theories that deviate significantly from how the average person thinks about an issue. For example, common-sense philosophers have argued that mereological nihilism is false since it implies that commonly accepted things, like tables, do not exist.
Conceptual analysis, a method particularly prominent in analytic philosophy, aims to decompose metaphysical concepts into component parts to clarify their meaning and identify essential relations. In phenomenology, the method of eidetic variation is used to investigate essential structures underlying phenomena. This method involves imagining an object and varying its features to determine which ones are essential and cannot be changed. The transcendental method is a further approach and examines the metaphysical structure of reality by observing what entities there are and studying the conditions of possibility without which these entities could not exist.
Some approaches give less importance to a priori reasoning and view metaphysics as a practice continuous with the empirical sciences that generalizes their insights while making their underlying assumptions explicit. This approach is known as naturalized metaphysics and is closely associated with the work of Willard Van Orman Quine. He relies on the idea that true sentences from the sciences and other fields have ontological commitments, that is, they imply that certain entities exist. For example, if the sentence "some electrons are bonded to protons" is true then it can be used to justify that electrons and protons exist. Quine used this insight to argue that one can learn about metaphysics by closely analyzing scientific claims to understand what kind of metaphysical picture of the world they presuppose.
In addition to methods of conducting metaphysical inquiry, there are various methodological principles used to decide between competing theories by comparing their theoretical virtues. Ockham's Razor is a well-known principle that gives preference to simple theories, in particular, those that assume that few entities exist. Other principles consider explanatory power, theoretical usefulness, and proximity to established beliefs.
== Criticism ==
Despite its status as one of the main branches of philosophy, metaphysics has received numerous criticisms questioning its legitimacy as a field of inquiry. One criticism argues that metaphysical inquiry is impossible because humans lack the cognitive capacities needed to access the ultimate nature of reality. This line of thought leads to skepticism about the possibility of metaphysical knowledge. Empiricists often follow this idea, like Hume, who asserts that there is no good source of metaphysical knowledge since metaphysics lies outside the field of empirical knowledge and relies on dubious intuitions about the realm beyond sensory experience. Arguing that the mind actively structures experience, Kant criticizes traditional metaphysics for its attempt to gain insight into the mind-independent nature of reality. He asserts that knowledge is limited to the realm of possible experience, meaning that humans are not able to decide questions like whether the world has a beginning in time or is infinite. A related argument favoring the unreliability of metaphysical theorizing points to the deep and lasting disagreements about metaphysical issues, suggesting a lack of overall progress.
Another criticism holds that the problem lies not with human cognitive abilities but with metaphysical statements themselves, which some claim are neither true nor false but meaningless. According to logical positivists, for instance, the meaning of a statement is given by the procedure used to verify it, usually through the observations that would confirm it. Based on this controversial assumption, they argue that metaphysical statements are meaningless since they make no testable predictions about experience.
A slightly weaker position allows metaphysical statements to have meaning while holding that metaphysical disagreements are merely verbal disputes about different ways to describe the world. According to this view, the disagreement in the metaphysics of composition about whether there are tables or only particles arranged table-wise is a trivial debate about linguistic preferences without any substantive consequences for the nature of reality. The position that metaphysical disputes have no meaning or no significant point is called metaphysical or ontological deflationism. This view is opposed by so-called serious metaphysicians, who contend that metaphysical disputes are about substantial features of the underlying structure of reality. A closely related debate between ontological realists and anti-realists concerns the question of whether there are any objective facts that determine which metaphysical theories are true. A different criticism, formulated by pragmatists, sees the fault of metaphysics not in its cognitive ambitions or the meaninglessness of its statements, but in its practical irrelevance and lack of usefulness.
Martin Heidegger criticized traditional metaphysics, saying that it fails to distinguish between individual entities and being as their ontological ground. His attempt to reveal the underlying assumptions and limitations in the history of metaphysics to "overcome metaphysics" influenced Jacques Derrida's method of deconstruction. Derrida employed this approach to criticize metaphysical texts for relying on opposing terms, like presence and absence, which he thought were inherently unstable and contradictory.
There is no consensus about the validity of these criticisms and whether they affect metaphysics as a whole or only certain issues or approaches in it. For example, it could be the case that certain metaphysical disputes are merely verbal while others are substantive.
== Relation to other disciplines ==
Metaphysics is related to many fields of inquiry by investigating their basic concepts and relation to the fundamental structure of reality. For example, the natural sciences rely on concepts such as law of nature, causation, necessity, and spacetime to formulate their theories and predict or explain the outcomes of experiments. While scientists primarily focus on applying these concepts to specific situations, metaphysics examines their general nature and how they depend on each other. For instance, physicists formulate laws of nature, like laws of gravitation and thermodynamics, to describe how physical systems behave under various conditions. Metaphysicians, by contrast, examine what all laws of nature have in common, asking whether they merely describe contingent regularities or express necessary relations. New scientific discoveries have also influenced existing metaphysical theories and inspired new ones. Einstein's theory of relativity, for instance, prompted various metaphysicians to conceive space and time as a unified dimension rather than as independent dimensions. Empirically focused metaphysicians often rely on scientific theories to ground their theories about the nature of reality in empirical observations.
Similar issues arise in the social sciences where metaphysicians investigate their basic concepts and analyze their metaphysical implications. This includes questions like whether social facts emerge from non-social facts, whether social groups and institutions have mind-independent existence, and how they persist through time. Metaphysical assumptions and topics in psychology and psychiatry include the questions about the relation between body and mind, whether the nature of the human mind is historically fixed, and what the metaphysical status of diseases is.
Metaphysics is similar to both physical cosmology and theology in its exploration of the first causes and the universe as a whole. Key differences are that metaphysics relies on rational inquiry while physical cosmology gives more weight to empirical observations and theology incorporates divine revelation and other faith-based doctrines. Historically, cosmology and theology were considered subfields of metaphysics.
Computer scientists rely on metaphysics in the form of ontology to represent and classify objects. They develop conceptual frameworks, called ontologies, for limited domains, such as a database with categories like person, company, address, and name to represent information about clients and employees. Ontologies provide standards for encoding and storing information in a structured way, allowing computational processes to use the information for various purposes. Upper ontologies, such as Suggested Upper Merged Ontology and Basic Formal Ontology, define concepts at a more abstract level, making it possible to integrate information belonging to different domains.
Logic as the study of correct reasoning is often used by metaphysicians to engage in their inquiry and express insights through precise logical formulas. Another relation between the two fields concerns the metaphysical assumptions associated with logical systems. Many logical systems like first-order logic rely on existential quantifiers to express existential statements. For instance, in the logical formula ∃x Horse(x), the existential quantifier ∃ is applied to the predicate Horse to express that there are horses. Following Quine, various metaphysicians assume that existential quantifiers carry ontological commitments, meaning that existential statements imply that the entities over which one quantifies are part of reality.
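Semantically, such a formula is true over a domain exactly when at least one object in the domain satisfies the predicate. A brief illustrative check, with an invented domain and predicate:

```python
domain = ["Trigger", "Silver", "a teapot"]

def is_horse(x):
    """A stand-in predicate, purely for illustration."""
    return x in {"Trigger", "Silver"}

# ∃x Horse(x): some element of the domain satisfies the predicate.
assert any(is_horse(x) for x in domain)
```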
== History ==
Metaphysics originated in the ancient period from speculations about the nature and origin of the cosmos. In ancient India, starting in the 7th century BCE, the Upanishads were written as religious and philosophical texts that examine how ultimate reality constitutes the ground of all being. They further explore the nature of the self and how it can reach liberation by understanding ultimate reality. This period also saw the emergence of Buddhism in the 6th century BCE, which denies the existence of an independent self and understands the world as a cyclic process. At about the same time in ancient China, the school of Daoism was formed and explored the natural order of the universe, known as Dao, and how it is characterized by the interplay of yin and yang as two correlated forces.
In ancient Greece, metaphysics emerged in the 6th century BCE with the pre-Socratic philosophers, who gave rational explanations of the cosmos as a whole by examining the first principles from which everything arises. Building on their work, Plato (427–347 BCE) formulated his theory of forms, which states that eternal forms or ideas possess the highest kind of reality while the material world is only an imperfect reflection of them. Aristotle (384–322 BCE) accepted Plato's idea that there are universal forms but held that they cannot exist on their own but depend on matter. He also proposed a system of categories and developed a comprehensive framework of the natural world through his theory of the four causes. Starting in the 4th century BCE, Hellenistic philosophy explored the rational order underlying the cosmos and the laws governing it. Neoplatonism emerged towards the end of the ancient period in the 3rd century CE and introduced the idea of "the One" as the transcendent and ineffable source of all creation.
Meanwhile, in Indian Buddhism, the Madhyamaka school developed the idea that all phenomena are inherently empty without a permanent essence. The consciousness-only doctrine of the Yogācāra school stated that experienced objects are mere transformations of consciousness and do not reflect external reality. The Hindu school of Samkhya philosophy introduced a metaphysical dualism with pure consciousness and matter as its fundamental categories. In China, the school of Xuanxue explored metaphysical problems such as the contrast between being and non-being.
Medieval Western philosophy was profoundly shaped by ancient Greek thought as philosophers integrated these ideas with Christian philosophical teachings. Boethius (477–524 CE) sought to reconcile Plato's and Aristotle's theories of universals, proposing that universals can exist both in matter and mind. His theory inspired the development of nominalism and conceptualism, as in the thought of Peter Abelard (1079–1142 CE). Thomas Aquinas (1224–1274 CE) understood metaphysics as the discipline investigating different meanings of being, such as the contrast between substance and accident, and principles applying to all beings, such as the principle of identity. William of Ockham (1285–1347 CE) developed a methodological principle, known as Ockham's razor, to choose between competing metaphysical theories. Arabic–Persian philosophy flourished from the early 9th century CE to the late 12th century CE, integrating ancient Greek philosophies to interpret and clarify the teachings of the Quran. Avicenna (980–1037 CE) developed a comprehensive philosophical system that examined the contrast between existence and essence and distinguished between contingent and necessary existence. Medieval India saw the emergence of the monist school of Advaita Vedanta in the 8th century CE, which holds that everything is one and that the idea of many entities existing independently is an illusion. In China, Neo-Confucianism arose in the 9th century CE and explored the concept of li as the rational principle that is the ground of being and reflects the order of the universe.
In the early modern period and following renewed interest in Platonism during the Renaissance, René Descartes (1596–1650) developed a substance dualism according to which body and mind exist as independent entities that causally interact. This idea was rejected by Baruch Spinoza (1632–1677), who formulated a monist philosophy suggesting that there is only one substance with both physical and mental attributes that develop side-by-side without interacting. Gottfried Wilhelm Leibniz (1646–1716) introduced the concept of possible worlds and articulated a metaphysical system known as monadology, which views the universe as a collection of simple substances synchronized without causal interaction. Christian Wolff (1679–1754), conceptualized the scope of metaphysics by distinguishing between general and special metaphysics. According to the idealism of George Berkeley (1685–1753), everything is mental, including material objects, which are ideas perceived by the mind. David Hume (1711–1776) made various contributions to metaphysics, including the regularity theory of causation and the idea that there are no necessary connections between distinct entities. Inspired by the empiricism of Francis Bacon (1561–1626) and John Locke (1632–1704), Hume criticized metaphysical theories that seek ultimate principles inaccessible to sensory experience. This critical outlook was embraced by Immanuel Kant (1724–1804), who tried to reconceptualize metaphysics as an inquiry into the basic principles and categories of thought and understanding rather than seeing it as an attempt to comprehend mind-independent reality.
Many developments in the later modern period were shaped by Kant's philosophy. German idealists adopted his idealistic outlook in their attempt to find a unifying principle as the foundation of all reality. Georg Wilhelm Friedrich Hegel's (1770–1831) idealistic contention is that reality is conceptual all the way down, and being itself is rational. He inspired the British idealism of Francis Herbert Bradley (1846–1924), who interpreted Hegel's concept of absolute spirit as the all-inclusive totality of being. Arthur Schopenhauer (1788–1860) was a strong critic of German idealism and articulated a different metaphysical vision, positing a blind and irrational will as the underlying principle of reality. Pragmatists like C. S. Peirce (1839–1914) and John Dewey (1859–1952) conceived metaphysics as an observational science of the most general features of reality and experience.
At the turn of the 20th century in analytic philosophy, philosophers such as Bertrand Russell (1872–1970) and G. E. Moore (1873–1958) led a "revolt against idealism", arguing for the existence of a mind-independent world aligned with common sense and empirical science. Logical atomists, like Russell and the early Ludwig Wittgenstein (1889–1951), conceived the world as a multitude of atomic facts, which later inspired metaphysicians such as D. M. Armstrong (1926–2014). Alfred North Whitehead (1861–1947) developed process metaphysics as an attempt to provide a holistic description of both the objective and the subjective realms.
Rudolf Carnap (1891–1970) and other logical positivists formulated a wide-ranging criticism of metaphysical statements, arguing that they are meaningless because there is no way to verify them. Other criticisms of traditional metaphysics identified misunderstandings of ordinary language as the source of many traditional metaphysical problems or challenged complex metaphysical deductions by appealing to common sense.
The decline of logical positivism led to a revival of metaphysical theorizing. Willard Van Orman Quine (1908–2000) tried to naturalize metaphysics by connecting it to the empirical sciences. His student David Lewis (1941–2001) employed the concept of possible worlds to formulate his modal realism. Saul Kripke (1940–2022) helped revive discussions of identity and essentialism, distinguishing necessity as a metaphysical notion from the epistemic notion of a priori.
In continental philosophy, Edmund Husserl (1859–1938) engaged in ontology through a phenomenological description of experience, while his student Martin Heidegger (1889–1976) developed fundamental ontology to clarify the meaning of being. Heidegger's philosophy inspired Jacques Derrida's (1930–2004) criticism of metaphysics. Gilles Deleuze's (1925–1995) approach to metaphysics challenged traditionally influential concepts like substance, essence, and identity by reconceptualizing the field through alternative notions such as multiplicity, event, and difference.
== See also ==
Computational metaphysics
Doctor of Metaphysics
Enrico Berti's classification of metaphysics
Feminist metaphysics
Fundamental question of metaphysics
List of metaphysicians
Metaphysical grounding
== External links ==
Metaphysics at PhilPapers
Metaphysics at the Indiana Philosophy Ontology Project
"Metaphysics". Internet Encyclopedia of Philosophy.
Metaphysics at Encyclopædia Britannica
Metaphysics public domain audiobook at LibriVox | Wikipedia/Metaphysics |
In abstract algebra, an interior algebra is a certain type of algebraic structure that encodes the idea of the topological interior of a set. Interior algebras are to topology and the modal logic S4 what Boolean algebras are to set theory and ordinary propositional logic. Interior algebras form a variety of modal algebras.
== Definition ==
An interior algebra is an algebraic structure with the signature
⟨S, ·, +, ′, 0, 1, I⟩
where
⟨S, ·, +, ′, 0, 1⟩
is a Boolean algebra and postfix I designates a unary operator, the interior operator, satisfying the identities:
xI ≤ x
xII = xI
(xy)I = xIyI
1I = 1
xI is called the interior of x.
The dual of the interior operator is the closure operator C defined by xC = ((x′)I)′. xC is called the closure of x. By the principle of duality, the closure operator satisfies the identities:
xC ≥ x
xCC = xC
(x + y)C = xC + yC
0C = 0
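These identities can be verified mechanically on a small finite example. The following sketch is illustrative only (the four-point space and all names are invented): it builds the interior operator of a power-set algebra from a topology, derives the closure operator by duality, and checks each identity above.

```python
from itertools import combinations

X = frozenset({1, 2, 3, 4})
# A topology on X: closed under unions and finite intersections.
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def interior(s):
    """Largest open subset of s: the join of all opens below s."""
    return frozenset().union(*(o for o in opens if o <= s))

def closure(s):
    """The dual operator xC = ((x')I)', complement taken in X."""
    return X - interior(X - s)

elements = [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

for x in elements:
    assert interior(x) <= x                      # xI <= x
    assert interior(interior(x)) == interior(x)  # xII = xI
    assert closure(x) >= x                       # xC >= x
    assert closure(closure(x)) == closure(x)     # xCC = xC
    for y in elements:
        assert interior(x & y) == interior(x) & interior(y)  # (xy)I = xIyI
        assert closure(x | y) == closure(x) | closure(y)     # (x+y)C = xC+yC
assert interior(X) == X                          # 1I = 1
assert closure(frozenset()) == frozenset()       # 0C = 0
```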
If the closure operator is taken as primitive, the interior operator can be defined as xI = ((x′)C)′. Thus the theory of interior algebras may be formulated using the closure operator instead of the interior operator, in which case one considers closure algebras of the form ⟨S, ·, +, ′, 0, 1, C⟩, where ⟨S, ·, +, ′, 0, 1⟩ is again a Boolean algebra and C satisfies the above identities for the closure operator. Closure and interior algebras form dual pairs, and are paradigmatic instances of "Boolean algebras with operators." The early literature on this subject (mainly Polish topology) invoked closure operators, but the interior operator formulation eventually became the norm following the work of Wim Blok.
== Open and closed elements ==
Elements of an interior algebra satisfying the condition xI = x are called open. The complements of open elements are called closed and are characterized by the condition xC = x. An interior of an element is always open and the closure of an element is always closed. Interiors of closed elements are called regular open and closures of open elements are called regular closed. Elements that are both open and closed are called clopen. 0 and 1 are clopen.
An interior algebra is called Boolean if all its elements are open (and hence clopen). Boolean interior algebras can be identified with ordinary Boolean algebras as their interior and closure operators provide no meaningful additional structure. A special case is the class of trivial interior algebras, which are the single element interior algebras characterized by the identity 0 = 1.
== Morphisms of interior algebras ==
=== Homomorphisms ===
Interior algebras, by virtue of being algebraic structures, have homomorphisms. Given two interior algebras A and B, a map f : A → B is an interior algebra homomorphism if and only if f is a homomorphism between the underlying Boolean algebras of A and B, that also preserves interiors and closures. Hence:
f(xI) = f(x)I;
f(xC) = f(x)C.
=== Topomorphisms ===
Topomorphisms are another important, and more general, class of morphisms between interior algebras. A map f : A → B is a topomorphism if and only if f is a homomorphism between the Boolean algebras underlying A and B, that also preserves the open and closed elements of A. Hence:
If x is open in A, then f(x) is open in B;
If x is closed in A, then f(x) is closed in B.
(Such morphisms have also been called stable homomorphisms and closure algebra semi-homomorphisms.) Every interior algebra homomorphism is a topomorphism, but not every topomorphism is an interior algebra homomorphism.
=== Boolean homomorphisms ===
Early research often considered mappings between interior algebras that were homomorphisms of the underlying Boolean algebras but that did not necessarily preserve the interior or closure operator. Such mappings were called Boolean homomorphisms. (The terms closure homomorphism or topological homomorphism were used in the case where these were preserved, but this terminology is now redundant as the standard definition of a homomorphism in universal algebra requires that it preserves all operations.) Applications involving countably complete interior algebras (in which countable meets and joins always exist, also called σ-complete) typically made use of countably complete Boolean homomorphisms also called Boolean σ-homomorphisms—these preserve countable meets and joins.
=== Continuous morphisms ===
The earliest generalization of continuity to interior algebras was Sikorski's, based on the inverse image map of a continuous map. This is a Boolean homomorphism, preserves unions of sequences and includes the closure of an inverse image in the inverse image of the closure. Sikorski thus defined a continuous homomorphism as a Boolean σ-homomorphism f between two σ-complete interior algebras such that f(x)C ≤ f(xC). This definition had several difficulties: The construction acts contravariantly producing a dual of a continuous map rather than a generalization. On the one hand σ-completeness is too weak to characterize inverse image maps (completeness is required), on the other hand it is too restrictive for a generalization. (Sikorski remarked on using non-σ-complete homomorphisms but included σ-completeness in his axioms for closure algebras.) Later J. Schmid defined a continuous homomorphism or continuous morphism for interior algebras as a Boolean homomorphism f between two interior algebras satisfying f(xC) ≤ f(x)C. This generalizes the forward image map of a continuous map—the image of a closure is contained in the closure of the image. This construction is covariant but not suitable for category theoretic applications as it only allows construction of continuous morphisms from continuous maps in the case of bijections. (C. Naturman returned to Sikorski's approach while dropping σ-completeness to produce topomorphisms as defined above. In this terminology, Sikorski's original "continuous homomorphisms" are σ-complete topomorphisms between σ-complete interior algebras.)
== Relationships to other areas of mathematics ==
=== Topology ===
Given a topological space X = ⟨X, T⟩ one can form the power set Boolean algebra of X:
⟨P(X), ∩, ∪, ′, ø, X⟩
and extend it to an interior algebra
A(X) = ⟨P(X), ∩, ∪, ′, ø, X, I⟩,
where I is the usual topological interior operator. For all S ⊆ X it is defined by
SI = ∪ {O | O ⊆ S and O is open in X}
For all S ⊆ X the corresponding closure operator is given by
SC = ∩ {C | S ⊆ C and C is closed in X}
SI is the largest open subset of S and SC is the smallest closed superset of S in X. The open, closed, regular open, regular closed and clopen elements of the interior algebra A(X) are just the open, closed, regular open, regular closed and clopen subsets of X respectively in the usual topological sense.
Every complete atomic interior algebra is isomorphic to an interior algebra of the form A(X) for some topological space X. Moreover, every interior algebra can be embedded in such an interior algebra giving a representation of an interior algebra as a topological field of sets. The properties of the structure A(X) are the very motivation for the definition of interior algebras. Because of this intimate connection with topology, interior algebras have also been called topo-Boolean algebras or topological Boolean algebras.
Given a continuous map between two topological spaces
f : X → Y
we can define a complete topomorphism
A(f) : A(Y) → A(X)
by
A(f)(S) = f−1[S]
for all subsets S of Y. Every complete topomorphism between two complete atomic interior algebras can be derived in this way. If Top is the category of topological spaces and continuous maps and Cit is the category of complete atomic interior algebras and complete topomorphisms then Top and Cit are dually isomorphic and A : Top → Cit is a contravariant functor that is a dual isomorphism of categories. A(f) is a homomorphism if and only if f is a continuous open map.
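A minimal sketch of this construction on finite spaces (all names invented): continuity of f is exactly what makes the inverse image map send open elements of A(Y) to open elements of A(X), i.e. what makes A(f) a topomorphism.

```python
X = {0, 1}
Y = {"a", "b"}
opens_X = [set(), {0}, {0, 1}]        # a topology on X
opens_Y = [set(), {"a"}, {"a", "b"}]  # a topology on Y

f = {0: "a", 1: "b"}                  # a continuous map f : X -> Y

def A_f(S):
    """A(f): the inverse image map S |-> f^-1[S]."""
    return {x for x in X if f[x] in S}

# Continuity says precisely that preimages of opens are open, so
# A(f) sends open elements of A(Y) to open elements of A(X):
assert all(A_f(S) in opens_X for S in opens_Y)
# A(f) is also a Boolean homomorphism; for instance it preserves complements:
assert A_f(Y - {"a"}) == X - A_f({"a"})
```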
Under this dual isomorphism of categories many natural topological properties correspond to algebraic properties, in particular connectedness properties correspond to irreducibility properties:
X is empty if and only if A(X) is trivial
X is indiscrete if and only if A(X) is simple
X is discrete if and only if A(X) is Boolean
X is almost discrete if and only if A(X) is semisimple
X is finitely generated (Alexandrov) if and only if A(X) is operator complete i.e. its interior and closure operators distribute over arbitrary meets and joins respectively
X is connected if and only if A(X) is directly indecomposable
X is ultraconnected if and only if A(X) is finitely subdirectly irreducible
X is compact ultraconnected if and only if A(X) is subdirectly irreducible
==== Generalized topology ====
The modern formulation of topological spaces in terms of topologies of open subsets, motivates an alternative formulation of interior algebras: A generalized topological space is an algebraic structure of the form
⟨B, ·, +, ′, 0, 1, T⟩
where ⟨B, ·, +, ′, 0, 1⟩ is a Boolean algebra as usual, and T is a unary relation on B (subset of B) such that:
0,1 ∈ T
T is closed under arbitrary joins (i.e. if a join of an arbitrary subset of T exists then it will be in T)
T is closed under finite meets
For every element b of B, the join Σ{a ∈ T | a ≤ b} exists
T is said to be a generalized topology in the Boolean algebra.
Given an interior algebra its open elements form a generalized topology. Conversely given a generalized topological space
⟨B, ·, +, ′, 0, 1, T⟩
we can define an interior operator on B by bI = Σ{a ∈ T | a ≤ b} thereby producing an interior algebra whose open elements are precisely T. Thus generalized topological spaces are equivalent to interior algebras.
Considering interior algebras to be generalized topological spaces, topomorphisms are then the standard homomorphisms of Boolean algebras with added relations, so that standard results from universal algebra apply.
==== Neighbourhood functions and neighbourhood lattices ====
The topological concept of neighbourhoods can be generalized to interior algebras: An element y of an interior algebra is said to be a neighbourhood of an element x if x ≤ yI. The set of neighbourhoods of x is denoted by N(x) and forms a filter. This leads to another formulation of interior algebras:
A neighbourhood function on a Boolean algebra is a mapping N from its underlying set B to its set of filters, such that:
For all x ∈ B, max{y ∈ B | x ∈ N(y)} exists
For all x,y ∈ B, x ∈ N(y) if and only if there is a z ∈ B such that y ≤ z ≤ x and z ∈ N(z).
The mapping N of elements of an interior algebra to their filters of neighbourhoods is a neighbourhood function on the underlying Boolean algebra of the interior algebra. Moreover, given a neighbourhood function N on a Boolean algebra with underlying set B, we can define an interior operator by xI = max{y ∈ B | x ∈ N(y)} thereby obtaining an interior algebra.
N(x) will then be precisely the filter of neighbourhoods of x in this interior algebra. Thus interior algebras are equivalent to Boolean algebras with specified neighbourhood functions.
In terms of neighbourhood functions, the open elements are precisely those elements x such that x ∈ N(x). In terms of open elements x ∈ N(y) if and only if there is an open element z such that y ≤ z ≤ x.
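The round trip between interior operators and neighbourhood functions can be checked on a small example. The sketch below is illustrative (a three-point topology and ad hoc names): it derives N from an interior operator, then recovers the interior as the stated maximum and confirms the open-element characterization.

```python
from itertools import combinations

X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), X]    # a small topology on X
elements = [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def interior(s):
    """Join of all open elements below s."""
    return frozenset().union(*(o for o in opens if o <= s))

def N(x):
    """Filter of neighbourhoods of x: every y with x <= yI."""
    return {y for y in elements if x <= interior(y)}

for x in elements:
    assert (x in N(x)) == (interior(x) == x)   # x is open iff x is in N(x)
    biggest = max((y for y in elements if x in N(y)), key=len)
    assert biggest == interior(x)              # xI = max{y | x in N(y)}
```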
Neighbourhood functions may be defined more generally on (meet)-semilattices producing the structures known as neighbourhood (semi)lattices. Interior algebras may thus be viewed as precisely the Boolean neighbourhood lattices i.e. those neighbourhood lattices whose underlying semilattice forms a Boolean algebra.
=== Modal logic ===
Given a theory (set of formal sentences) M in the modal logic S4, we can form its Lindenbaum–Tarski algebra:
L(M) = ⟨M / ~, ∧, ∨, ¬, F, T, □⟩
where ~ is the equivalence relation on sentences in M given by p ~ q if and only if p and q are logically equivalent in M, and M / ~ is the set of equivalence classes under this relation. Then L(M) is an interior algebra. The interior operator in this case corresponds to the modal operator □ (necessarily), while the closure operator corresponds to ◊ (possibly). This construction is a special case of a more general result for modal algebras and modal logic.
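The correspondence can be displayed axiom by axiom: under this construction, the standard S4 axiom schemes line up with the interior identities (shown schematically, using the article's postfix notation):

```latex
\begin{aligned}
\Box p \to p &\quad\text{corresponds to}\quad x^{I} \le x\\
\Box p \to \Box\Box p &\quad\text{corresponds to}\quad x^{II} = x^{I}\\
\Box(p \wedge q) \leftrightarrow (\Box p \wedge \Box q) &\quad\text{corresponds to}\quad (xy)^{I} = x^{I}y^{I}\\
\Box\top &\quad\text{corresponds to}\quad 1^{I} = 1
\end{aligned}
```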
The open elements of L(M) correspond to sentences that are only true if they are necessarily true, while the closed elements correspond to those that are only false if they are necessarily false.
Because of their relation to S4, interior algebras are sometimes called S4 algebras or Lewis algebras, after the logician C. I. Lewis, who first proposed the modal logics S4 and S5.
=== Preorders ===
Since interior algebras are (normal) Boolean algebras with operators, they can be represented by fields of sets on appropriate relational structures. In particular, since they are modal algebras, they can be represented as fields of sets on a set with a single binary relation, called a Kripke frame. The Kripke frames corresponding to interior algebras are precisely the preordered sets. Preordered sets (also called S4-frames) provide the Kripke semantics of the modal logic S4, and the connection between interior algebras and preorders is deeply related to their connection with modal logic.
Given a preordered set X = ⟨X, «⟩ we can construct an interior algebra
B(X) = ⟨P(X), ∩, ∪, ′, ø, X, I⟩
from the power set Boolean algebra of X where the interior operator I is given by
SI = {x ∈ X | for all y ∈ X, x « y implies y ∈ S} for all S ⊆ X.
The corresponding closure operator is given by
SC = {x ∈ X | there exists a y ∈ S with y « x} for all S ⊆ X.
SI is the set of all worlds inaccessible from worlds outside S, and SC is the set of all worlds accessible from some world in S. Every interior algebra can be embedded in an interior algebra of the form B(X) for some preordered set X giving the above-mentioned representation as a field of sets (a preorder field).
This construction and representation theorem is a special case of the more general result for modal algebras and Kripke frames. In this regard, interior algebras are particularly interesting because of their connection to topology. The construction provides the preordered set X with a topology, the Alexandrov topology, producing a topological space T(X) whose open sets are:
{O ⊆ X | for all x ∈ O and all y ∈ X, x « y implies y ∈ O}.
The corresponding closed sets are:
{C ⊆ X | for all x ∈ C and all y ∈ X, y « x implies y ∈ C}.
In other words, the open sets are the ones whose worlds are inaccessible from outside (the up-sets), and the closed sets are the ones for which every outside world is inaccessible from inside (the down-sets). Moreover, B(X) = A(T(X)).
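The equality B(X) = A(T(X)) can be checked computationally on a small preorder. The sketch below is illustrative (an invented three-world frame): it computes the interior operator once from the accessibility relation and once from the Alexandrov topology it generates, and confirms that they agree.

```python
from itertools import combinations

X = [0, 1, 2]
# A preorder « on X: reflexive, transitive, with 0 « 1.
rel = {(0, 0), (1, 1), (2, 2), (0, 1)}

subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(X, r)]

def I_preorder(S):
    """SI: worlds all of whose «-successors stay inside S."""
    return frozenset(x for x in X
                     if all(y in S for y in X if (x, y) in rel))

# The Alexandrov topology T(X): the up-sets of the preorder.
opens = [S for S in subsets
         if all(y in S for x in S for y in X if (x, y) in rel)]

def I_topology(S):
    """Topological interior: join of all open sets below S."""
    return frozenset().union(*(o for o in opens if o <= S))

for S in subsets:
    assert I_preorder(S) == I_topology(S)   # B(X) = A(T(X))
```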
=== Monadic Boolean algebras ===
Any monadic Boolean algebra can be considered to be an interior algebra where the interior operator is the universal quantifier and the closure operator is the existential quantifier. The monadic Boolean algebras are then precisely the variety of interior algebras satisfying the identity xIC = xI. In other words, they are precisely the interior algebras in which every open element is closed or equivalently, in which every closed element is open. Moreover, such interior algebras are precisely the semisimple interior algebras. They are also the interior algebras corresponding to the modal logic S5, and so have also been called S5 algebras.
In the relationship between preordered sets and interior algebras they correspond to the case where the preorder is an equivalence relation, reflecting the fact that such preordered sets provide the Kripke semantics for S5. This also reflects the relationship between the monadic logic of quantification (for which monadic Boolean algebras provide an algebraic description) and S5 where the modal operators □ (necessarily) and ◊ (possibly) can be interpreted in the Kripke semantics using monadic universal and existential quantification, respectively, without reference to an accessibility relation.
=== Heyting algebras ===
The open elements of an interior algebra form a Heyting algebra and the closed elements form a dual Heyting algebra. The regular open elements and regular closed elements correspond to the pseudo-complemented elements and dual pseudo-complemented elements of these algebras respectively and thus form Boolean algebras. The clopen elements correspond to the complemented elements and form a common subalgebra of these Boolean algebras as well as of the interior algebra itself. Every Heyting algebra can be represented as the open elements of an interior algebra and the latter may be chosen to be an interior algebra generated by its open elements—such interior algebras correspond one-to-one with Heyting algebras (up to isomorphism) being the free Boolean extensions of the latter.
Heyting algebras play the same role for intuitionistic logic that interior algebras play for the modal logic S4 and Boolean algebras play for propositional logic. The relation between Heyting algebras and interior algebras reflects the relationship between intuitionistic logic and S4, in which one can interpret theories of intuitionistic logic as S4 theories closed under necessity. The one-to-one correspondence between Heyting algebras and interior algebras generated by their open elements reflects the correspondence between extensions of intuitionistic logic and normal extensions of the modal logic S4.Grz.
=== Derivative algebras ===
Given an interior algebra A, the closure operator obeys the axioms of the derivative operator, D. Hence we can form a derivative algebra D(A) with the same underlying Boolean algebra as A by using the closure operator as a derivative operator.
Thus interior algebras are derivative algebras. From this perspective, they are precisely the variety of derivative algebras satisfying the identity xD ≥ x. Derivative algebras provide the appropriate algebraic semantics for the modal logic wK4. Hence derivative algebras stand to topological derived sets and wK4 as interior/closure algebras stand to topological interiors/closures and S4.
Given a derivative algebra V with derivative operator D, we can form an interior algebra I(V) with the same underlying Boolean algebra as V, with interior and closure operators defined by xI = x·((x′)D)′ and xC = x + xD, respectively. Thus every derivative algebra can be regarded as an interior algebra. Moreover, given an interior algebra A, we have I(D(A)) = A. However, D(I(V)) = V does not necessarily hold for every derivative algebra V.
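The identity I(D(A)) = A follows from a short calculation, sketched here in the article's postfix notation with D instantiated as the closure operator C of A; the final steps use xI ≤ x, xC ≥ x, and the duality xI = ((x′)C)′:

```latex
x^{I_{\mathrm{new}}} = x \cdot \bigl((x')^{D}\bigr)' = x \cdot \bigl((x')^{C}\bigr)' = x \cdot x^{I} = x^{I},
\qquad
x^{C_{\mathrm{new}}} = x + x^{D} = x + x^{C} = x^{C}.
```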
== Stone duality and representation for interior algebras ==
Stone duality provides a category theoretic duality between Boolean algebras and a class of topological spaces known as Boolean spaces. Building on nascent ideas of relational semantics (later formalized by Kripke) and a result of R. S. Pierce, Jónsson, Tarski and G. Hansoul extended Stone duality to Boolean algebras with operators by equipping Boolean spaces with relations that correspond to the operators via a power set construction. In the case of interior algebras the interior (or closure) operator corresponds to a pre-order on the Boolean space. Homomorphisms between interior algebras correspond to a class of continuous maps between the Boolean spaces known as pseudo-epimorphisms or p-morphisms for short. This generalization of Stone duality to interior algebras based on the Jónsson–Tarski representation was investigated by Leo Esakia and is also known as the Esakia duality for S4-algebras (interior algebras) and is closely related to the Esakia duality for Heyting algebras.
Whereas the Jónsson–Tarski generalization of Stone duality applies to Boolean algebras with operators in general, the connection between interior algebras and topology allows for another method of generalizing Stone duality that is unique to interior algebras. An intermediate step in the development of Stone duality is Stone's representation theorem, which represents a Boolean algebra as a field of sets. The Stone topology of the corresponding Boolean space is then generated using the field of sets as a topological basis. Building on the topological semantics introduced by Tang Tsao-Chen for Lewis's modal logic, McKinsey and Tarski showed that by generating a topology equivalent to using only the complexes that correspond to open elements as a basis, a representation of an interior algebra is obtained as a topological field of sets—a field of sets on a topological space that is closed with respect to taking interiors or closures. By equipping topological fields of sets with appropriate morphisms known as field maps, C. Naturman showed that this approach can be formalized as a category theoretic Stone duality in which the usual Stone duality for Boolean algebras corresponds to the case of interior algebras having redundant interior operator (Boolean interior algebras).
The pre-order obtained in the Jónsson–Tarski approach corresponds to the accessibility relation in the Kripke semantics for an S4 theory, while the intermediate field of sets corresponds to a representation of the Lindenbaum–Tarski algebra for the theory using the sets of possible worlds in the Kripke semantics in which sentences of the theory hold. Moving from the field of sets to a Boolean space somewhat obfuscates this connection. By treating fields of sets on pre-orders as a category in its own right, this deep connection can be formulated as a category theoretic duality that generalizes Stone representation without topology. R. Goldblatt showed that with restrictions to appropriate homomorphisms such a duality can be formulated for arbitrary modal algebras and Kripke frames. Naturman showed that in the case of interior algebras this duality applies to more general topomorphisms and can be factored via a category theoretic functor through the duality with topological fields of sets. The latter represent the Lindenbaum–Tarski algebra using sets of points satisfying sentences of the S4 theory in the topological semantics. The pre-order can be obtained as the specialization pre-order of the McKinsey–Tarski topology. The Esakia duality can be recovered via a functor that replaces the field of sets with the Boolean space it generates. Via a functor that instead replaces the pre-order with its corresponding Alexandrov topology, an alternative representation of the interior algebra as a field of sets is obtained where the topology is the Alexandrov bico-reflection of the McKinsey–Tarski topology. The approach of formulating a topological duality for interior algebras using both the Stone topology of the Jónsson–Tarski approach and the Alexandrov topology of the pre-order to form a bi-topological space has been investigated by G. Bezhanishvili, R. Mines, and P.J. Morandi. The McKinsey–Tarski topology of an interior algebra is the intersection of the former two topologies.
== Metamathematics ==
Grzegorczyk proved the first-order theory of closure algebras undecidable. Naturman demonstrated that the theory is hereditarily undecidable (all its subtheories are undecidable) and demonstrated an infinite chain of elementary classes of interior algebras with hereditarily undecidable theories.
== References ==
Blok, W.A., 1976, Varieties of interior algebras, Ph.D. thesis, University of Amsterdam.
Esakia, L., 2004, "Intuitionistic logic and modality via topology," Annals of Pure and Applied Logic 127: 155-70.
McKinsey, J.C.C. and Alfred Tarski, 1944, "The Algebra of Topology," Annals of Mathematics 45: 141-91.
Naturman, C.A., 1991, Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of Mathematics.
Bezhanishvili, G., Mines, R. and Morandi, P.J., 2008, Topo-canonical completions of closure algebras and Heyting algebras, Algebra Universalis 58: 1-34.
Schmid, J., 1973, On the compactification of closure algebras, Fundamenta Mathematicae 79: 33-48.
Sikorski, R., 1955, Closure homomorphisms and interior mappings, Fundamenta Mathematicae 41: 12-20. | Wikipedia/Interior_algebra
In mathematics, a set A is a subset of a set B if all elements of A are also elements of B; B is then a superset of A. It is possible for A and B to be equal; if they are unequal, then A is a proper subset of B. The relationship of one set being a subset of another is called inclusion (or sometimes containment). A is a subset of B may also be expressed as B includes (or contains) A or A is included (or contained) in B. A k-subset is a subset with k elements.
When quantified, A ⊆ B is represented as ∀x (x ∈ A ⇒ x ∈ B).
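For finite sets the quantified condition can be checked directly, and it agrees with, for example, Python's built-in subset test (a small illustrative check):

```python
A = {1, 2}
B = {1, 2, 3}

# ∀x (x ∈ A ⇒ x ∈ B), with the quantifier ranging over A:
assert all(x in B for x in A)
assert A <= B   # Python's built-in inclusion test agrees
```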
One can prove the statement A ⊆ B by applying a proof technique known as the element argument: let sets A and B be given; to prove that A ⊆ B, suppose that a is a particular but arbitrarily chosen element of A, then show that a is an element of B.
The validity of this technique can be seen as a consequence of universal generalization: the technique shows (c ∈ A) ⇒ (c ∈ B) for an arbitrarily chosen element c. Universal generalization then implies ∀x (x ∈ A ⇒ x ∈ B), which is equivalent to A ⊆ B, as stated above.
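The element argument is also how subset proofs are written in a proof assistant. Below is a minimal sketch in Lean 4, assuming Mathlib's Set library (the particular statement is chosen only for illustration): one introduces an arbitrary element and tracks its membership.

```lean
import Mathlib.Data.Set.Basic

-- Element argument: to show A ∩ B ⊆ A, take an arbitrary x in A ∩ B.
example {α : Type} (A B : Set α) : A ∩ B ⊆ A := by
  intro x hx    -- hx : x ∈ A ∩ B, i.e. x ∈ A ∧ x ∈ B
  exact hx.1    -- its left component is the required x ∈ A
```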
== Definition ==
If A and B are sets and every element of A is also an element of B, then:
A is a subset of B, denoted by A ⊆ B, or equivalently,
B is a superset of A, denoted by B ⊇ A.
If A is a subset of B, but A is not equal to B (i.e. there exists at least one element of B which is not an element of A), then:
A is a proper (or strict) subset of B, denoted by A ⊊ B, or equivalently,
B is a proper (or strict) superset of A, denoted by B ⊋ A.
The empty set, written {} or ∅, has no elements, and therefore is vacuously a subset of any set X.
== Basic properties ==
Reflexivity: Given any set A, A ⊆ A.
Transitivity: If A ⊆ B and B ⊆ C, then A ⊆ C.
Antisymmetry: If A ⊆ B and B ⊆ A, then A = B.
=== Proper subset ===
Irreflexivity: Given any set $A$, $A\subsetneq A$ is false.
Transitivity: If $A\subsetneq B$ and $B\subsetneq C$, then $A\subsetneq C$.
Asymmetry: If $A\subsetneq B$ then $B\subsetneq A$ is false.
== ⊂ and ⊃ symbols ==
Some authors use the symbols $\subset$ and $\supset$ to indicate subset and superset respectively; that is, with the same meaning as, and instead of, the symbols $\subseteq$ and $\supseteq.$ For example, for these authors, it is true of every set A that $A\subset A$ (a reflexive relation).
Other authors prefer to use the symbols $\subset$ and $\supset$ to indicate proper (also called strict) subset and proper superset respectively; that is, with the same meaning as, and instead of, the symbols $\subsetneq$ and $\supsetneq.$ This usage makes $\subseteq$ and $\subset$ analogous to the inequality symbols $\leq$ and $<.$ For example, if $x\leq y,$ then x may or may not equal y, but if $x<y,$ then x definitely does not equal y, and is less than y (an irreflexive relation). Similarly, using the convention that $\subset$ is proper subset, if $A\subseteq B,$ then A may or may not equal B, but if $A\subset B,$ then A definitely does not equal B.
== Examples of subsets ==
The set A = {1, 2} is a proper subset of B = {1, 2, 3}, thus both expressions $A\subseteq B$ and $A\subsetneq B$ are true.
The set D = {1, 2, 3} is a subset (but not a proper subset) of E = {1, 2, 3}, thus $D\subseteq E$ is true, and $D\subsetneq E$ is not true (false).
The set {x: x is a prime number greater than 10} is a proper subset of {x: x is an odd number greater than 10}
The set of natural numbers is a proper subset of the set of rational numbers; likewise, the set of points in a line segment is a proper subset of the set of points in a line. These are two examples in which both the subset and the whole set are infinite, and the subset has the same cardinality (the concept that corresponds to size, that is, the number of elements, of a finite set) as the whole; such cases can run counter to one's initial intuition.
The set of rational numbers is a proper subset of the set of real numbers. In this example, both sets are infinite, but the latter set has a larger cardinality (or power) than the former set.
== Power set ==
The set of all subsets of $S$ is called its power set, and is denoted by $\mathcal{P}(S).$
The inclusion relation $\subseteq$ is a partial order on the set $\mathcal{P}(S)$ defined by $A\leq B\iff A\subseteq B$. We may also partially order $\mathcal{P}(S)$ by reverse set inclusion by defining $A\leq B$ if and only if $B\subseteq A.$
For the power set $\mathcal{P}(S)$ of a set S, the inclusion partial order is, up to an order isomorphism, the Cartesian product of $k=|S|$ (the cardinality of S) copies of the partial order on $\{0,1\}$ for which $0<1.$ This can be illustrated by enumerating $S=\{s_{1},s_{2},\ldots ,s_{k}\}$ and associating with each subset $T\subseteq S$ (i.e., each element of $2^{S}$) the k-tuple from $\{0,1\}^{k},$ of which the ith coordinate is 1 if and only if $s_{i}$ is a member of T.
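As an illustration of this order isomorphism, the following sketch (Python; the enumeration and names are ours) generates the power set from the k-tuples in $\{0,1\}^{k}$ and confirms that inclusion agrees with the coordinatewise order:

```python
from itertools import product

S = ["a", "b", "c"]                      # an enumeration s_1, ..., s_k
k = len(S)

def tuple_of(T):
    """k-tuple whose i-th coordinate is 1 iff s_i is a member of T."""
    return tuple(1 if s in T else 0 for s in S)

subsets = [frozenset(s for s, bit in zip(S, bits) if bit)
           for bits in product([0, 1], repeat=k)]
assert len(subsets) == 2 ** k            # |P(S)| = 2^|S|

# Inclusion of subsets agrees with the coordinatewise order on the tuples.
for T in subsets:
    for U in subsets:
        coordwise = all(t <= u for t, u in zip(tuple_of(T), tuple_of(U)))
        assert (T <= U) == coordwise     # frozenset's <= is inclusion
```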
The set of all $k$-subsets of $A$ is denoted by $\tbinom{A}{k}$, in analogy with the notation for binomial coefficients, which count the number of $k$-subsets of an $n$-element set. In set theory, the notation $[A]^{k}$ is also common, especially when $k$ is a transfinite cardinal number.
== Other properties of inclusion ==
A set A is a subset of B if and only if their intersection is equal to A. Formally: $A\subseteq B$ if and only if $A\cap B=A.$
A set A is a subset of B if and only if their union is equal to B. Formally: $A\subseteq B$ if and only if $A\cup B=B.$
A finite set A is a subset of B if and only if the cardinality of their intersection is equal to the cardinality of A. Formally: $A\subseteq B$ if and only if $|A\cap B|=|A|.$
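These three characterizations are easy to spot-check on random finite sets; the following sketch (Python, with illustrative test data of ours) does so:

```python
import random

# Spot-check the intersection, union, and cardinality criteria for A ⊆ B.
for _ in range(1000):
    A = set(random.sample(range(10), random.randint(0, 6)))
    B = set(random.sample(range(10), random.randint(0, 6)))
    subset = A <= B
    assert subset == (A & B == A)            # intersection criterion
    assert subset == (A | B == B)            # union criterion
    assert subset == (len(A & B) == len(A))  # cardinality criterion (finite)
```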
The subset relation defines a partial order on sets. In fact, the subsets of a given set form a Boolean algebra under the subset relation, in which the join and meet are given by union and intersection, and the subset relation itself is the Boolean inclusion relation.
Inclusion is the canonical partial order, in the sense that every partially ordered set $(X,\preceq)$ is isomorphic to some collection of sets ordered by inclusion. The ordinal numbers are a simple example: if each ordinal n is identified with the set $[n]$ of all ordinals less than or equal to n, then $a\leq b$ if and only if $[a]\subseteq [b].$
== See also ==
Convex subset – In geometry, set whose intersection with every line is a single line segment
Inclusion order – Partial order that arises as the subset-inclusion relation on some collection of objects
Mereology – Study of parts and the wholes they form
Region – Connected open subset of a topological space
Subset sum problem – Decision problem in computer science
Subsumptive containment – System of elements that are subordinated to each other
Subspace – Mathematical set with some added structure
Total subset – Subset T of a topological vector space X where the linear span of T is a dense subset of X
== References ==
== Bibliography ==
Jech, Thomas (2002). Set Theory. Springer-Verlag. ISBN 3-540-44085-2.
== External links ==
Media related to Subsets at Wikimedia Commons
Weisstein, Eric W. "Subset". MathWorld.
In mathematics, Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a certain field of sets. The theorem is fundamental to the deeper understanding of Boolean algebra that emerged in the first half of the 20th century. The theorem was first proved by Marshall H. Stone. Stone was led to it by his study of the spectral theory of operators on a Hilbert space.
== Stone spaces ==
Each Boolean algebra B has an associated topological space, denoted here S(B), called its Stone space. The points in S(B) are the ultrafilters on B, or equivalently the homomorphisms from B to the two-element Boolean algebra. The topology on S(B) is generated by a basis consisting of all sets of the form
$\{x\in S(B)\mid b\in x\},$
where b is an element of B. These sets are also closed and so are clopen (both closed and open). This is the topology of pointwise convergence of nets of homomorphisms into the two-element Boolean algebra.
For every Boolean algebra B, S(B) is a compact totally disconnected Hausdorff space; such spaces are called Stone spaces (also profinite spaces). Conversely, given any Stone space X, the collection of subsets of X that are clopen is a Boolean algebra.
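For a finite Boolean algebra the Stone space can be computed outright. The following sketch (Python; a brute-force enumeration of ours, not an algorithm from the literature) lists the ultrafilters of the power-set algebra of a three-element set and confirms that there is one point of the Stone space per atom:

```python
from itertools import chain, combinations

# The Boolean algebra B of all subsets of a finite set X.
X = {0, 1, 2}
B = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(X), r) for r in range(len(X) + 1))]

def is_ultrafilter(U):
    F = set(U)
    if not F or frozenset() in F:               # proper and non-empty
        return False
    if any(a <= b and b not in F for a in F for b in B):
        return False                            # upward closed
    if any(a & b not in F for a in F for b in F):
        return False                            # closed under meets
    # exactly one of b, complement(b) belongs to an ultrafilter
    return all((b in F) != (frozenset(X - b) in F) for b in B)

ultrafilters = [U for r in range(len(B) + 1)
                for U in combinations(B, r) if is_ultrafilter(U)]
assert len(ultrafilters) == len(X)   # each ultrafilter is principal at an atom
```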
== Representation theorem ==
A simple version of Stone's representation theorem states that every Boolean algebra B is isomorphic to the algebra of clopen subsets of its Stone space S(B). The isomorphism sends an element $b\in B$ to the set of all ultrafilters that contain b. This is a clopen set because of the choice of topology on S(B) and because B is a Boolean algebra.
Restating the theorem using the language of category theory, there is a duality between the category of Boolean algebras and the category of Stone spaces. This duality means that in addition to the correspondence between Boolean algebras and their Stone spaces, each homomorphism from a Boolean algebra A to a Boolean algebra B corresponds in a natural way to a continuous function from S(B) to S(A). In other words, there is a contravariant functor that gives an equivalence between the categories. This was an early example of a nontrivial duality of categories.
The theorem is a special case of Stone duality, a more general framework for dualities between topological spaces and partially ordered sets.
The proof requires either the axiom of choice or a weakened form of it. Specifically, the theorem is equivalent to the Boolean prime ideal theorem, a weakened choice principle that states that every Boolean algebra has a prime ideal.
An extension of the classical Stone duality to the category of Boolean spaces (that is, zero-dimensional locally compact Hausdorff spaces) and continuous maps (respectively, perfect maps) was obtained by G. D. Dimov (respectively, by H. P. Doctor).
== See also ==
Stone's representation theorem for distributive lattices
Representation theorem – Proof that every structure with certain properties is isomorphic to another structure
Field of sets – Algebraic concept in measure theory, also referred to as an algebra of sets
List of Boolean algebra topics
Stonean space – Topological space in which the closure of every open set is open
Stone functor – Functor in category theory
Profinite group – Topological group that is in a certain sense assembled from a system of finite groups
Ultrafilter lemma – Maximal proper filter
== Citations ==
== References ==
Halmos, Paul; Givant, Steven (1998). Logic as Algebra. Dolciani Mathematical Expositions. Vol. 21. The Mathematical Association of America. ISBN 0-88385-327-2.
Johnstone, Peter T. (1982). Stone Spaces. Cambridge University Press. ISBN 0-521-23893-5.
Burris, Stanley N.; Sankappanavar, H.P. (1981). A Course in Universal Algebra. Springer. ISBN 3-540-90578-2.
In abstract algebra, a branch of pure mathematics, an MV-algebra is an algebraic structure with a binary operation $\oplus$, a unary operation $\lnot$, and the constant $0$, satisfying certain axioms. MV-algebras are the algebraic semantics of Łukasiewicz logic; the letters MV refer to the many-valued logic of Łukasiewicz. MV-algebras coincide with the class of bounded commutative BCK algebras.
== Definitions ==
An MV-algebra is an algebraic structure $\langle A,\oplus ,\lnot ,0\rangle,$ consisting of
a non-empty set $A,$
a binary operation $\oplus$ on $A,$
a unary operation $\lnot$ on $A,$ and
a constant $0$ denoting a fixed element of $A,$
which satisfies the following identities:
$(x\oplus y)\oplus z=x\oplus (y\oplus z),$
$x\oplus 0=x,$
$x\oplus y=y\oplus x,$
$\lnot \lnot x=x,$
$x\oplus \lnot 0=\lnot 0,$ and
$\lnot (\lnot x\oplus y)\oplus y=\lnot (\lnot y\oplus x)\oplus x.$
By virtue of the first three axioms, $\langle A,\oplus ,0\rangle$ is a commutative monoid. Being defined by identities, MV-algebras form a variety of algebras. The variety of MV-algebras is a subvariety of the variety of BL-algebras and contains all Boolean algebras.
An MV-algebra can equivalently be defined (Hájek 1998) as a prelinear commutative bounded integral residuated lattice $\langle L,\wedge ,\vee ,\otimes ,\rightarrow ,0,1\rangle$ satisfying the additional identity $x\vee y=(x\rightarrow y)\rightarrow y.$
== Examples of MV-algebras ==
A simple numerical example is $A=[0,1],$ with operations $x\oplus y=\min(x+y,1)$ and $\lnot x=1-x.$ In mathematical fuzzy logic, this MV-algebra is called the standard MV-algebra, as it forms the standard real-valued semantics of Łukasiewicz logic.
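The standard MV-algebra is straightforward to experiment with numerically. The following sketch (Python; function names are ours) implements the two operations and spot-checks the six defining identities on random points of [0,1], up to floating-point tolerance:

```python
import random

# The standard MV-algebra on [0, 1].
def oplus(x, y): return min(x + y, 1.0)   # x (+) y = min(x + y, 1)
def neg(x):      return 1.0 - x           # not x  = 1 - x

TOL = 1e-9  # floating-point addition is only approximately associative
for _ in range(1000):
    x, y, z = (random.random() for _ in range(3))
    assert abs(oplus(oplus(x, y), z) - oplus(x, oplus(y, z))) < TOL
    assert oplus(x, 0.0) == x                     # 0 is the unit
    assert oplus(x, y) == oplus(y, x)             # commutativity
    assert abs(neg(neg(x)) - x) < TOL             # involution
    assert oplus(x, neg(0.0)) == neg(0.0)         # 1 = not 0 is absorbing
    assert abs(oplus(neg(oplus(neg(x), y)), y)    # last axiom: both sides
               - oplus(neg(oplus(neg(y), x)), x)) < TOL  # equal max(x, y)
```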
The trivial MV-algebra has the only element 0 and the operations defined in the only possible way, $0\oplus 0=0$ and $\lnot 0=0.$
The two-element MV-algebra is actually the two-element Boolean algebra $\{0,1\},$ with $\oplus$ coinciding with Boolean disjunction and $\lnot$ with Boolean negation. In fact adding the axiom $x\oplus x=x$ to the axioms defining an MV-algebra results in an axiomatization of Boolean algebras.
If instead the axiom added is $x\oplus x\oplus x=x\oplus x$, then the axioms define the MV3 algebra corresponding to the three-valued Łukasiewicz logic Ł3. Other finite linearly ordered MV-algebras are obtained by restricting the universe and operations of the standard MV-algebra to the set of $n$ equidistant real numbers between 0 and 1 (both included), that is, the set $\{0,1/(n-1),2/(n-1),\dots ,1\},$ which is closed under the operations $\oplus$ and $\lnot$ of the standard MV-algebra; these algebras are usually denoted MVn.
Another important example is Chang's MV-algebra, consisting just of infinitesimals (with the order type ω) and their co-infinitesimals.
Chang also constructed an MV-algebra from an arbitrary totally ordered abelian group G by fixing a positive element u and defining the segment [0, u] as { x ∈ G | 0 ≤ x ≤ u }, which becomes an MV-algebra with x ⊕ y = min(u, x + y) and ¬x = u − x. Furthermore, Chang showed that every linearly ordered MV-algebra is isomorphic to an MV-algebra constructed from a group in this way.
Daniele Mundici extended the above construction to abelian lattice-ordered groups. If G is such a group with strong (order) unit u, then the "unit interval" { x ∈ G | 0 ≤ x ≤ u } can be equipped with ¬x = u − x, x ⊕ y = u ∧G (x + y), and x ⊗ y = 0 ∨G (x + y − u). This construction establishes a categorical equivalence between lattice-ordered abelian groups with strong unit and MV-algebras.
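The following sketch (Python; a toy instance of ours) carries out Chang's construction for the totally ordered abelian group of integers with unit u = 3, and checks that rescaling by 1/u lands in the equidistant subalgebra MV4 of the standard MV-algebra:

```python
# Chang's construction on G = (Z, +, <=) with positive unit u = 3.
u = 3
segment = range(u + 1)                      # [0, u] = {0, 1, 2, 3}

def oplus(x, y): return min(u, x + y)       # x (+) y = min(u, x + y)
def neg(x):      return u - x               # not x  = u - x

# Rescaling by 1/u embeds the result in the standard MV-algebra on [0, 1],
# where it matches the equidistant subalgebra {0, 1/3, 2/3, 1} = MV_4.
std_oplus = lambda x, y: min(x + y, 1.0)
for x in segment:
    for y in segment:
        assert abs(oplus(x, y) / u - std_oplus(x / u, y / u)) < 1e-12
        assert abs(neg(x) / u - (1 - x / u)) < 1e-12
```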
An effect algebra that is lattice-ordered and has the Riesz decomposition property is an MV-algebra. Conversely, any MV-algebra is a lattice-ordered effect algebra with the Riesz decomposition property.
== Relation to Łukasiewicz logic ==
C. C. Chang devised MV-algebras to study many-valued logics, introduced by Jan Łukasiewicz in 1920. In particular, MV-algebras form the algebraic semantics of Łukasiewicz logic, as described below.
Given an MV-algebra A, an A-valuation is a homomorphism from the algebra of propositional formulas (in the language consisting of $\oplus ,\lnot ,$ and 0) into A. Formulas mapped to 1 (that is, to $\lnot 0$) for all A-valuations are called A-tautologies. If the standard MV-algebra over [0,1] is employed, the set of all [0,1]-tautologies determines so-called infinite-valued Łukasiewicz logic.
Chang's (1958, 1959) completeness theorem states that any MV-algebra equation holding in the standard MV-algebra over the interval [0,1] will hold in every MV-algebra. Algebraically, this means that the standard MV-algebra generates the variety of all MV-algebras. Equivalently, Chang's completeness theorem says that MV-algebras characterize infinite-valued Łukasiewicz logic, defined as the set of [0,1]-tautologies.
The way the [0,1] MV-algebra characterizes all possible MV-algebras parallels the well-known fact that identities holding in the two-element Boolean algebra hold in all possible Boolean algebras. Moreover, MV-algebras characterize infinite-valued Łukasiewicz logic in a manner analogous to the way that Boolean algebras characterize classical bivalent logic (see Lindenbaum–Tarski algebra).
In 1984, Font, Rodriguez and Torrens introduced the Wajsberg algebra as an alternative model for the infinite-valued Łukasiewicz logic. Wajsberg algebras and MV-algebras are term-equivalent.
=== MVn-algebras ===
In the 1940s, Grigore Moisil introduced his Łukasiewicz–Moisil algebras (LMn-algebras) in the hope of giving algebraic semantics for the (finitely) n-valued Łukasiewicz logic. However, in 1956, Alan Rose discovered that for n ≥ 5, the Łukasiewicz–Moisil algebra does not model the Łukasiewicz n-valued logic. Although C. C. Chang published his MV-algebra in 1958, it is a faithful model only for the ℵ0-valued (infinitely-many-valued) Łukasiewicz–Tarski logic. For the axiomatically more complicated (finitely) n-valued Łukasiewicz logics, suitable algebras were published in 1977 by Revaz Grigolia and called MVn-algebras. MVn-algebras are a subclass of LMn-algebras; the inclusion is strict for n ≥ 5.
The MVn-algebras are MV-algebras that satisfy some additional axioms, just like the n-valued Łukasiewicz logics have additional axioms added to the ℵ0-valued logic.
In 1982, Roberto Cignoli published some additional constraints that added to LMn-algebras yield proper models for n-valued Łukasiewicz logic; Cignoli called his discovery proper n-valued Łukasiewicz algebras. The LMn-algebras that are also MVn-algebras are precisely Cignoli's proper n-valued Łukasiewicz algebras.
== Relation to functional analysis ==
MV-algebras were related by Daniele Mundici to approximately finite-dimensional C*-algebras by establishing a bijective correspondence between all isomorphism classes of approximately finite-dimensional C*-algebras with lattice-ordered dimension group and all isomorphism classes of countable MV algebras.
== In software ==
There are multiple frameworks implementing fuzzy logic (type II), and most of them implement what has been called a multi-adjoint logic, which is no more than an implementation of an MV-algebra.
== References ==
Chang, C. C. (1958) "Algebraic analysis of many-valued logics," Transactions of the American Mathematical Society 88: 476–490.
Chang, C. C. (1959) "A new proof of the completeness of the Lukasiewicz axioms," Transactions of the American Mathematical Society 88: 74–80.
Cignoli, R. L. O., D'Ottaviano, I. M. L., Mundici, D. (2000) Algebraic Foundations of Many-valued Reasoning. Kluwer.
Di Nola A., Lettieri A. (1993) "Equational characterization of all varieties of MV-algebras," Journal of Algebra 221: 463–474 doi:10.1006/jabr.1999.7900.
Hájek, Petr (1998) Metamathematics of Fuzzy Logic. Kluwer.
Mundici, D.: Interpretation of AF C*-algebras in Łukasiewicz sentential calculus. J. Funct. Anal. 65, 15–63 (1986) doi:10.1016/0022-1236(86)90015-7
== Further reading ==
Daniele Mundici, MV-ALGEBRAS. A short tutorial
D. Mundici (2011). Advanced Łukasiewicz calculus and MV-algebras. Springer. ISBN 978-94-007-0839-6.
Mundici, D. The C*-Algebras of Three-Valued Logic. Logic Colloquium '88, Proceedings of the Colloquium held in Padova 61–77 (1989). doi:10.1016/s0049-237x(08)70262-3
Cabrer, L. M. & Mundici, D. A Stone-Weierstrass theorem for MV-algebras and unital ℓ-groups. Journal of Logic and Computation (2014). doi:10.1093/logcom/exu023
Olivia Caramello, Anna Carla Russo (2014) The Morita-equivalence between MV-algebras and abelian ℓ-groups with strong unit
== External links ==
Stanford Encyclopedia of Philosophy: "Many-valued logic", by Siegfried Gottwald.
In mathematics, a binary relation associates some elements of one set called the domain with some elements of another set called the codomain. Precisely, a binary relation over sets $X$ and $Y$ is a set of ordered pairs $(x,y)$, where $x$ is an element of $X$ and $y$ is an element of $Y$. It encodes the common concept of relation: an element $x$ is related to an element $y$, if and only if the pair $(x,y)$ belongs to the set of ordered pairs that defines the binary relation.
An example of a binary relation is the "divides" relation over the set of prime numbers $\mathbb{P}$ and the set of integers $\mathbb{Z}$, in which each prime $p$ is related to each integer $z$ that is a multiple of $p$, but not to an integer that is not a multiple of $p$. In this relation, for instance, the prime number $2$ is related to numbers such as $-4$, $0$, $6$, $10$, but not to $1$ or $9$, just as the prime number $3$ is related to $0$, $6$, and $9$, but not to $4$ or $13$.
Binary relations, and especially homogeneous relations, are used in many branches of mathematics to model a wide variety of concepts. These include, among others:
the "is greater than", "is equal to", and "divides" relations in arithmetic;
the "is congruent to" relation in geometry;
the "is adjacent to" relation in graph theory;
the "is orthogonal to" relation in linear algebra.
A function may be defined as a binary relation that meets additional constraints. Binary relations are also heavily used in computer science.
A binary relation over sets $X$ and $Y$ is an element of the power set of $X\times Y.$ Since the latter set is ordered by inclusion ($\subseteq$), each relation has a place in the lattice of subsets of $X\times Y.$
A binary relation is called a homogeneous relation when $X=Y$. A binary relation is also called a heterogeneous relation when it is not necessary that $X=Y$.
Since relations are sets, they can be manipulated using set operations, including union, intersection, and complementation, and satisfying the laws of an algebra of sets. Beyond that, operations like the converse of a relation and the composition of relations are available, satisfying the laws of a calculus of relations, for which there are textbooks by Ernst Schröder, Clarence Lewis, and Gunther Schmidt. A deeper analysis of relations involves decomposing them into subsets called concepts, and placing them in a complete lattice.
In some systems of axiomatic set theory, relations are extended to classes, which are generalizations of sets. This extension is needed for, among other things, modeling the concepts of "is an element of" or "is a subset of" in set theory, without running into logical inconsistencies such as Russell's paradox.
A binary relation is the most studied special case $n=2$ of an $n$-ary relation over sets $X_{1},\dots ,X_{n}$, which is a subset of the Cartesian product $X_{1}\times \cdots \times X_{n}.$
== Definition ==
Given sets $X$ and $Y$, the Cartesian product $X\times Y$ is defined as $\{(x,y)\mid x\in X\text{ and }y\in Y\},$ and its elements are called ordered pairs.
A binary relation $R$ over sets $X$ and $Y$ is a subset of $X\times Y.$
The set $X$ is called the domain or set of departure of $R$, and the set $Y$ the codomain or set of destination of $R$. In order to specify the choices of the sets $X$ and $Y$, some authors define a binary relation or correspondence as an ordered triple $(X,Y,G)$, where $G$ is a subset of $X\times Y$ called the graph of the binary relation. The statement $(x,y)\in R$ reads "$x$ is $R$-related to $y$" and is denoted by $xRy$. The domain of definition or active domain of $R$ is the set of all $x$ such that $xRy$ for at least one $y$. The codomain of definition, active codomain, image or range of $R$ is the set of all $y$ such that $xRy$ for at least one $x$. The field of $R$ is the union of its domain of definition and its codomain of definition.
When $X=Y,$ a binary relation is called a homogeneous relation (or endorelation). To emphasize the fact that $X$ and $Y$ are allowed to be different, a binary relation is also called a heterogeneous relation. The prefix hetero is from the Greek ἕτερος (heteros, "other, another, different").
A heterogeneous relation has been called a rectangular relation, suggesting that it does not have the square-like symmetry of a homogeneous relation on a set where $A=B.$
Commenting on the development of binary relations beyond homogeneous relations, researchers wrote, "... a variant of the theory has evolved that treats relations from the very beginning as heterogeneous or rectangular, i.e. as relations where the normal case is that they are relations between different sets."
The terms correspondence, dyadic relation and two-place relation are synonyms for binary relation, though some authors use the term "binary relation" for any subset of a Cartesian product $X\times Y$ without reference to $X$ and $Y$, and reserve the term "correspondence" for a binary relation with reference to $X$ and $Y$.
In a binary relation, the order of the elements is important; if $x\neq y$ then $yRx$ can be true or false independently of $xRy$. For example, $3$ divides $9$, but $9$ does not divide $3$.
== Operations ==
=== Union ===
If $R$ and $S$ are binary relations over sets $X$ and $Y$ then $R\cup S=\{(x,y)\mid xRy\text{ or }xSy\}$ is the union relation of $R$ and $S$ over $X$ and $Y$.
The identity element is the empty relation. For example, $\leq$ is the union of $<$ and $=$, and $\geq$ is the union of $>$ and $=$.
=== Intersection ===
If $R$ and $S$ are binary relations over sets $X$ and $Y$ then $R\cap S=\{(x,y)\mid xRy\text{ and }xSy\}$ is the intersection relation of $R$ and $S$ over $X$ and $Y$.
The identity element is the universal relation. For example, the relation "is divisible by 6" is the intersection of the relations "is divisible by 3" and "is divisible by 2".
=== Composition ===
If $R$ is a binary relation over sets $X$ and $Y$, and $S$ is a binary relation over sets $Y$ and $Z$, then $S\circ R=\{(x,z)\mid \text{there exists }y\in Y\text{ such that }xRy\text{ and }ySz\}$ (also denoted by $R;S$) is the composition relation of $R$ and $S$ over $X$ and $Z$.
The identity element is the identity relation. The order of $R$ and $S$ in the notation $S\circ R$ used here agrees with the standard notational order for composition of functions. For example, the composition (is parent of) $\circ$ (is mother of) yields (is maternal grandparent of), while the composition (is mother of) $\circ$ (is parent of) yields (is grandmother of). For the former case, if $x$ is the parent of $y$ and $y$ is the mother of $z$, then $x$ is the maternal grandparent of $z$.
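On finite sets these operations are directly computable when a relation is stored as a set of ordered pairs. A minimal sketch (Python; the example pairs are ours):

```python
# Binary relations as Python sets of ordered pairs.
R = {("alice", "bob"), ("alice", "carol")}   # "is parent of"
S = {("bob", "dave")}                        # another "is parent of" fragment

union        = R | S
intersection = R & S
converse     = {(y, x) for (x, y) in R}

def compose(S, R):
    """S o R = {(x, z) : there exists y with x R y and y S z}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

# (is parent of) composed with itself yields "is grandparent of".
assert compose(S, R) == {("alice", "dave")}
```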
=== Converse ===
If $R$ is a binary relation over sets $X$ and $Y$ then $R^{\textsf{T}}=\{(y,x)\mid xRy\}$ is the converse relation, also called inverse relation, of $R$ over $Y$ and $X$.
For example, $=$ is the converse of itself, as is $\neq$, and $<$ and $>$ are each other's converse, as are $\leq$ and $\geq .$
A binary relation is equal to its converse if and only if it is symmetric.
=== Complement ===
If $R$ is a binary relation over sets $X$ and $Y$ then $\bar{R}=\{(x,y)\mid \neg xRy\}$ (also denoted by $\neg R$) is the complementary relation of $R$ over $X$ and $Y$.
For example, $=$ and $\neq$ are each other's complement, as are $\subseteq$ and $\not\subseteq$, $\supseteq$ and $\not\supseteq$, $\in$ and $\not\in$, and, for total orders, also $<$ and $\geq$, and $>$ and $\leq .$
The complement of the converse relation $R^{\textsf{T}}$ is the converse of the complement: $\overline{R^{\mathsf{T}}}={\bar{R}}^{\mathsf{T}}.$
If $X=Y,$ the complement has the following properties:
If a relation is symmetric, then so is the complement.
The complement of a reflexive relation is irreflexive—and vice versa.
The complement of a strict weak order is a total preorder—and vice versa.
=== Restriction ===
If $R$ is a binary homogeneous relation over a set $X$ and $S$ is a subset of $X$ then $R_{\vert S}=\{(x,y)\mid xRy\text{ and }x\in S\text{ and }y\in S\}$ is the restriction relation of $R$ to $S$ over $X$.
If $R$ is a binary relation over sets $X$ and $Y$ and if $S$ is a subset of $X$ then $R_{\vert S}=\{(x,y)\mid xRy\text{ and }x\in S\}$ is the left-restriction relation of $R$ to $S$ over $X$ and $Y$.
If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), or an equivalence relation, then so too are its restrictions.
However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general not equal. For example, restricting the relation "$x$ is parent of $y$" to females yields the relation "$x$ is mother of the woman $y$"; its transitive closure does not relate a woman with her paternal grandmother. On the other hand, the transitive closure of "is parent of" is "is ancestor of"; its restriction to females does relate a woman with her paternal grandmother.
Also, the various concepts of completeness (not to be confused with being "total") do not carry over to restrictions. For example, over the real numbers a property of the relation $\leq$ is that every non-empty subset $S\subseteq \mathbb{R}$ with an upper bound in $\mathbb{R}$ has a least upper bound (also called supremum) in $\mathbb{R}.$ However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation $\leq$ to the rational numbers.
A binary relation $R$ over sets $X$ and $Y$ is said to be contained in a relation $S$ over $X$ and $Y$, written $R\subseteq S,$ if $R$ is a subset of $S$, that is, for all $x\in X$ and $y\in Y,$ if $xRy$, then $xSy$. If $R$ is contained in $S$ and $S$ is contained in $R$, then $R$ and $S$ are called equal, written $R=S$. If $R$ is contained in $S$ but $S$ is not contained in $R$, then $R$ is said to be smaller than $S$, written $R\subsetneq S.$ For example, on the rational numbers, the relation $>$ is smaller than $\geq$, and equal to the composition $>\circ >$.
=== Matrix representation ===
Binary relations over sets $X$ and $Y$ can be represented algebraically by logical matrices indexed by $X$ and $Y$ with entries in the Boolean semiring (addition corresponds to OR and multiplication to AND), where matrix addition corresponds to union of relations, matrix multiplication corresponds to composition of relations (of a relation over $X$ and $Y$ and a relation over $Y$ and $Z$), the Hadamard product corresponds to intersection of relations, the zero matrix corresponds to the empty relation, and the matrix of ones corresponds to the universal relation. Homogeneous relations (when $X=Y$) form a matrix semiring (indeed, a matrix semialgebra over the Boolean semiring), where the identity matrix corresponds to the identity relation.
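A minimal sketch of this correspondence (Python, plain nested lists; the example matrices are ours), showing composition as matrix multiplication over the Boolean semiring:

```python
# Relations as 0/1 logical matrices.
R = [[1, 0, 1],
     [0, 1, 0]]        # R over X x Y, with |X| = 2, |Y| = 3
S = [[1, 0],
     [0, 0],
     [0, 1]]           # S over Y x Z, with |Z| = 2

def bool_mult(R, S):
    """Matrix product over the Boolean semiring = composition of relations."""
    return [[int(any(R[i][k] and S[k][j] for k in range(len(S))))
             for j in range(len(S[0]))]
            for i in range(len(R))]

# Entrywise OR is union; entrywise AND (the Hadamard product) is intersection.
assert bool_mult(R, S) == [[1, 1],
                           [0, 0]]  # x0 reaches z0 via y0 and z1 via y2
```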
== Examples ==
== Types of binary relations ==
Some important types of binary relations $R$ over sets $X$ and $Y$ are listed below.
Uniqueness properties:
Injective (also called left-unique): for all $x,y\in X$ and all $z\in Y,$ if $xRz$ and $yRz$ then $x=y$. In other words, every element of the codomain has at most one preimage element. For such a relation, $Y$ is called a primary key of $R$. For example, the green and blue binary relations in the diagram are injective, but the red one is not (as it relates both $-1$ and $1$ to $1$), nor the black one (as it relates both $-1$ and $1$ to $0$).
Functional (also called right-unique or univalent): for all $x\in X$ and all $y,z\in Y,$ if $xRy$ and $xRz$ then $y=z$. In other words, every element of the domain has at most one image element. Such a binary relation is called a partial function or partial mapping. For such a relation, $X$ is called a primary key of $R$. For example, the red and green binary relations in the diagram are functional, but the blue one is not (as it relates $1$ to both $1$ and $-1$), nor the black one (as it relates $0$ to both $-1$ and $1$).
One-to-one: injective and functional. For example, the green binary relation in the diagram is one-to-one, but the red, blue and black ones are not.
One-to-many: injective and not functional. For example, the blue binary relation in the diagram is one-to-many, but the red, green and black ones are not.
Many-to-one: functional and not injective. For example, the red binary relation in the diagram is many-to-one, but the green, blue and black ones are not.
Many-to-many: not injective nor functional. For example, the black binary relation in the diagram is many-to-many, but the red, green and blue ones are not.
Totality properties (only definable if the domain $X$ and codomain $Y$ are specified):
Total (also called left-total): for all $x\in X$ there exists a $y\in Y$ such that $xRy$. In other words, every element of the domain has at least one image element; that is, the domain of definition of $R$ is equal to $X$. This property is different from the definition of connected (also called total by some authors) in Properties. Such a binary relation is called a multivalued function. For example, the red and green binary relations in the diagram are total, but the blue one is not (as it does not relate $-1$ to any real number), nor the black one (as it does not relate $2$ to any real number). As another example, $>$ is a total relation over the integers. But it is not a total relation over the positive integers, because there is no $y$ in the positive integers such that $1>y$. However, $<$ is a total relation over the positive integers, the rational numbers and the real numbers. Every reflexive relation is total: for a given $x$, choose $y=x$.
Surjective (also called right-total): for all $y\in Y$, there exists an $x\in X$ such that $xRy$. In other words, every element of the codomain has at least one preimage element; that is, the codomain of definition of $R$ is equal to $Y$. For example, the green and blue binary relations in the diagram are surjective, but the red one is not (as it does not relate any real number to $-1$), nor the black one (as it does not relate any real number to $2$).
Uniqueness and totality properties (only definable if the domain $X$ and codomain $Y$ are specified):
A function (also called mapping): a binary relation that is functional and total. In other words, every element of the domain has exactly one image element. For example, the red and green binary relations in the diagram are functions, but the blue and black ones are not.
An injection: a function that is injective. For example, the green relation in the diagram is an injection, but the red one is not; the black and the blue relations are not even functions.
A surjection: a function that is surjective. For example, the green relation in the diagram is a surjection, but the red one is not.
A bijection: a function that is injective and surjective. In other words, every element of the domain has exactly one image element and every element of the codomain has exactly one preimage element. For example, the green binary relation in the diagram is a bijection, but the red one is not.
If relations over proper classes are allowed:
Set-like (also called local): for all $x\in X$, the class of all $y\in Y$ such that $yRx$, i.e. $\{y\in Y\mid yRx\}$, is a set. For example, the relation $\in$ is set-like, and every relation on two sets is set-like. The usual ordering $<$ over the class of ordinal numbers is a set-like relation, while its inverse $>$ is not.
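For finite relations the uniqueness and totality properties above can be tested mechanically. A sketch (Python; the helper names and example data are ours):

```python
# Testing uniqueness and totality properties of a finite relation R over X x Y.
X, Y = {0, 1, 2}, {"a", "b"}
R = {(0, "a"), (1, "a"), (2, "b")}

def functional(R):  # right-unique
    return all(not (x1 == x2 and y1 != y2)
               for (x1, y1) in R for (x2, y2) in R)

def injective(R):   # left-unique
    return all(not (y1 == y2 and x1 != x2)
               for (x1, y1) in R for (x2, y2) in R)

def total(R, X):      return {x for (x, _) in R} == set(X)
def surjective(R, Y): return {y for (_, y) in R} == set(Y)

assert functional(R) and total(R, X)          # R is a function X -> Y
assert surjective(R, Y) and not injective(R)  # onto, but 0 and 1 share "a"
```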
== Sets versus classes ==
Certain mathematical "relations", such as "equal to", "subset of", and "member of", cannot be understood to be binary relations as defined above, because their domains and codomains cannot be taken to be sets in the usual systems of axiomatic set theory. For example, to model the general concept of "equality" as a binary relation $=$, take the domain and codomain to be the "class of all sets", which is not a set in the usual set theory.

In most mathematical contexts, references to the relations of equality, membership and subset are harmless because they can be understood implicitly to be restricted to some set in the context. The usual work-around to this problem is to select a "large enough" set $A$ that contains all the objects of interest, and work with the restriction $=_{A}$ instead of $=$. Similarly, the "subset of" relation $\subseteq$ needs to be restricted to have domain and codomain $P(A)$ (the power set of a specific set $A$): the resulting set relation can be denoted by $\subseteq_{A}.$ Also, the "member of" relation needs to be restricted to have domain $A$ and codomain $P(A)$ to obtain a binary relation $\in_{A}$ that is a set. Bertrand Russell has shown that assuming $\in$ to be defined over all sets leads to a contradiction in naive set theory, see Russell's paradox.
Another solution to this problem is to use a set theory with proper classes, such as NBG or Morse–Kelley set theory, and allow the domain and codomain (and so the graph) to be proper classes: in such a theory, equality, membership, and subset are binary relations without special comment. (A minor modification needs to be made to the concept of the ordered triple $(X,Y,G)$, as normally a proper class cannot be a member of an ordered tuple; or of course one can identify the binary relation with its graph in this context.) With this definition one can for instance define a binary relation over every set and its power set.
== Homogeneous relation ==
A homogeneous relation over a set $X$ is a binary relation over $X$ and itself, i.e. it is a subset of the Cartesian product $X\times X.$ It is also simply called a (binary) relation over $X$.
A homogeneous relation $R$ over a set $X$ may be identified with a directed simple graph permitting loops, where $X$ is the vertex set and $R$ is the edge set (there is an edge from a vertex $x$ to a vertex $y$ if and only if $xRy$).
The set of all homogeneous relations $\mathcal{B}(X)$ over a set $X$ is the power set $2^{X\times X}$, which is a Boolean algebra augmented with the involution of mapping of a relation to its converse relation. Considering composition of relations as a binary operation on $\mathcal{B}(X)$, it forms a semigroup with involution.
Some important properties that a homogeneous relation $R$ over a set $X$ may have are:
Reflexive: for all $x\in X,$ $xRx$. For example, $\geq$ is a reflexive relation but $>$ is not.
Irreflexive: for all $x\in X,$ not $xRx$. For example, $>$ is an irreflexive relation, but $\geq$ is not.
Symmetric: for all $x,y\in X,$ if $xRy$ then $yRx$. For example, "is a blood relative of" is a symmetric relation.
Antisymmetric: for all $x,y\in X,$ if $xRy$ and $yRx$ then $x=y.$ For example, $\geq$ is an antisymmetric relation.
Asymmetric: for all $x,y\in X,$ if $xRy$ then not $yRx$. A relation is asymmetric if and only if it is both antisymmetric and irreflexive. For example, $>$ is an asymmetric relation, but $\geq$ is not.
Transitive: for all $x,y,z\in X,$ if $xRy$ and $yRz$ then $xRz$. A transitive relation is irreflexive if and only if it is asymmetric. For example, "is ancestor of" is a transitive relation, while "is parent of" is not.
Connected: for all $x,y\in X,$ if $x\neq y$ then $xRy$ or $yRx$.
Strongly connected: for all $x,y\in X,$ $xRy$ or $yRx$.
Dense: for all $x,y\in X,$ if $xRy,$ then some $z\in X$ exists such that $xRz$ and $zRy$.
A partial order is a relation that is reflexive, antisymmetric, and transitive. A strict partial order is a relation that is irreflexive, asymmetric, and transitive. A total order is a relation that is reflexive, antisymmetric, transitive and connected. A strict total order is a relation that is irreflexive, asymmetric, transitive and connected.
An equivalence relation is a relation that is reflexive, symmetric, and transitive.
For example, "
x
{\displaystyle x}
divides
y
{\displaystyle y}
" is a partial, but not a total order on natural numbers
N
,
{\displaystyle \mathbb {N} ,}
"
x
<
y
{\displaystyle x<y}
" is a strict total order on
N
,
{\displaystyle \mathbb {N} ,}
and "
x
{\displaystyle x}
is parallel to
y
{\displaystyle y}
" is an equivalence relation on the set of all lines in the Euclidean plane.
All operations defined in section § Operations also apply to homogeneous relations.
Beyond that, a homogeneous relation over a set $X$ may be subjected to closure operations like:
Reflexive closure: the smallest reflexive relation over $X$ containing $R$,
Transitive closure: the smallest transitive relation over $X$ containing $R$,
Equivalence closure: the smallest equivalence relation over $X$ containing $R$.
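Both closures are computable for finite relations; the transitive closure is classically obtained with Warshall's algorithm. A sketch (Python; the example is ours):

```python
def reflexive_closure(R, X):
    """Smallest reflexive relation over X containing R."""
    return set(R) | {(x, x) for x in X}

def transitive_closure(R, X):
    """Warshall's algorithm: successively allow each k as an intermediate."""
    T = set(R)
    for k in X:
        T |= {(i, j) for i in X for j in X if (i, k) in T and (k, j) in T}
    return T

X = {1, 2, 3}
R = {(1, 2), (2, 3)}                  # an "is parent of" style chain
assert transitive_closure(R, X) == {(1, 2), (2, 3), (1, 3)}
assert reflexive_closure(R, X) == R | {(1, 1), (2, 2), (3, 3)}
```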
== Calculus of relations ==
Developments in algebraic logic have facilitated usage of binary relations. The calculus of relations includes the algebra of sets, extended by composition of relations and the use of converse relations. The inclusion $R\subseteq S,$ meaning that $aRb$ implies $aSb$, sets the scene in a lattice of relations. But since $P\subseteq Q\equiv (P\cap {\bar{Q}}=\varnothing )\equiv (P\cap Q=P),$ the inclusion symbol is superfluous. Nevertheless, composition of relations and manipulation of the operators according to Schröder rules provides a calculus to work in the power set of $A\times B.$
In contrast to homogeneous relations, the composition of relations operation is only a partial function. The necessity of matching target to source of composed relations has led to the suggestion that the study of heterogeneous relations is a chapter of category theory as in the category of sets, except that the morphisms of this category are relations. The objects of the category Rel are sets, and the relation-morphisms compose as required in a category.
== Induced concept lattice ==
Binary relations have been described through their induced concept lattices: A concept $C\subset R$ satisfies two properties:
The logical matrix of $C$ is the outer product of logical vectors: $C_{ij}=u_{i}v_{j}$, where $u$ and $v$ are logical vectors.
$C$ is maximal, not contained in any other outer product. Thus $C$ is described as a non-enlargeable rectangle.
For a given relation $R\subseteq X\times Y,$ the set of concepts, enlarged by their joins and meets, forms an "induced lattice of concepts", with inclusion $\sqsubseteq$ forming a preorder.
The MacNeille completion theorem (1937) (that any partial order may be embedded in a complete lattice) is cited in a 2013 survey article "Decomposition of relations on concept lattices". The decomposition is $R=fEg^{\textsf{T}}$, where $f$ and $g$ are functions, called mappings or left-total, functional relations in this context. The "induced concept lattice is isomorphic to the cut completion of the partial order $E$ that belongs to the minimal decomposition $(f,g,E)$ of the relation $R$."
Particular cases are considered below: $E$ total order corresponds to Ferrers type, and $E$ identity corresponds to difunctional, a generalization of equivalence relation on a set.
Relations may be ranked by the Schein rank which counts the number of concepts necessary to cover a relation. Structural analysis of relations with concepts provides an approach for data mining.
== Particular relations ==
Proposition: If $R$ is a surjective relation and $R^{\mathsf{T}}$ is its transpose, then $I\subseteq R^{\textsf{T}}R$ where $I$ is the $m\times m$ identity relation.
Proposition: If $R$ is a serial relation, then $I\subseteq RR^{\textsf{T}}$ where $I$ is the $n\times n$ identity relation.
=== Difunctional ===
The idea of a difunctional relation is to partition objects by distinguishing attributes, as a generalization of the concept of an equivalence relation. One way this can be done is with an intervening set $Z=\{x,y,z,\ldots \}$ of indicators. The partitioning relation $R=FG^{\textsf{T}}$ is a composition of relations using functional relations $F\subseteq A\times Z$ and $G\subseteq B\times Z.$
Jacques Riguet named these relations difunctional since the composition $FG^{\mathsf{T}}$ involves functional relations, commonly called partial functions. In 1950 Riguet showed that such relations satisfy the inclusion $RR^{\textsf{T}}R\subseteq R.$
In automata theory, the term rectangular relation has also been used to denote a difunctional relation. This terminology recalls the fact that, when represented as a logical matrix, the columns and rows of a difunctional relation can be arranged as a block matrix with rectangular blocks of ones on the (asymmetric) main diagonal. More formally, a relation $R$ on $X\times Y$ is difunctional if and only if it can be written as the union of Cartesian products $A_{i}\times B_{i}$, where the $A_{i}$ are a partition of a subset of $X$ and the $B_{i}$ likewise a partition of a subset of $Y$.
Using the notation $\{y\mid xRy\}=xR$, a difunctional relation can also be characterized as a relation $R$ such that wherever $x_{1}R$ and $x_{2}R$ have a non-empty intersection, these two sets coincide; formally, $x_{1}R\cap x_{2}R\neq \varnothing$ implies $x_{1}R=x_{2}R.$
In 1997 researchers found "utility of binary decomposition based on difunctional dependencies in database management." Furthermore, difunctional relations are fundamental in the study of bisimulations.
In the context of homogeneous relations, a partial equivalence relation is difunctional.
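Riguet's inclusion gives a direct test for difunctionality of a finite relation. A sketch (Python; the example relations are ours):

```python
# Testing R Rᵀ R ⊆ R on relations stored as sets of ordered pairs.
def compose(S, R):
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):
    return {(y, x) for (x, y) in R}

def is_difunctional(R):
    # R;Rᵀ;R as a left-to-right relational product
    return compose(R, compose(converse(R), R)) <= R

# Union of disjoint rectangles {a,b}x{1} and {c}x{2,3}: difunctional.
assert is_difunctional({("a", 1), ("b", 1), ("c", 2), ("c", 3)})
# Overlapping images ({a}x{1,2}, {b}x{2,3}) break the property.
assert not is_difunctional({("a", 1), ("a", 2), ("b", 2), ("b", 3)})
```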
=== Ferrers type ===
A strict order on a set is a homogeneous relation arising in order theory.
In 1951 Jacques Riguet adopted the ordering of an integer partition, called a Ferrers diagram, to extend ordering to binary relations in general.
The corresponding logical matrix of such a relation has rows which finish with a sequence of ones. Thus the dots of a Ferrers diagram are changed to ones and aligned on the right in the matrix.
An algebraic statement required for a Ferrers type relation R is $R{\bar{R}}^{\textsf{T}}R\subseteq R.$
If any one of the relations $R,\ {\bar{R}},\ R^{\textsf{T}}$ is of Ferrers type, then all of them are.
=== Contact ===
Suppose $B$ is the power set of $A$, the set of all subsets of $A$. Then a relation $g$ is a contact relation if it satisfies three properties:
for all $x\in A,$ $Y=\{x\}$ implies $xgY.$
$Y\subseteq Z$ and $xgY$ implies $xgZ.$
for all $y\in Y,$ $ygZ$ and $xgY$ implies $xgZ.$
The set membership relation, $\epsilon =$ "is an element of", satisfies these properties, so $\epsilon$ is a contact relation. The notion of a general contact relation was introduced by Georg Aumann in 1970. In terms of the calculus of relations, sufficient conditions for a contact relation include $C^{\textsf{T}}{\bar{C}}\subseteq \ \ni {\bar{C}}\ \equiv \ C{\overline{\ni {\bar{C}}}}\subseteq C,$ where $\ni$ is the converse of set membership ($\in$).
== Preorder R\R ==
Every relation $R$ generates a preorder $R\backslash R$ which is the left residual. In terms of converse and complements, $R\backslash R\equiv {\overline{R^{\textsf{T}}{\bar{R}}}}.$
Forming the diagonal of $R^{\textsf{T}}{\bar{R}}$, the corresponding row of $R^{\textsf{T}}$ and column of ${\bar{R}}$ will be of opposite logical values, so the diagonal is all zeros. Then $R^{\textsf{T}}{\bar{R}}\subseteq {\bar{I}}\implies I\subseteq {\overline{R^{\textsf{T}}{\bar{R}}}}=R\backslash R,$ so that $R\backslash R$ is a reflexive relation.
To show transitivity, one requires that $(R\backslash R)(R\backslash R)\subseteq R\backslash R.$ Recall that $X=R\backslash R$ is the largest relation such that $RX\subseteq R.$
Then
R
(
R
∖
R
)
⊆
R
{\displaystyle R(R\backslash R)\subseteq R}
R
(
R
∖
R
)
(
R
∖
R
)
⊆
R
{\displaystyle R(R\backslash R)(R\backslash R)\subseteq R}
(repeat)
≡
R
T
R
¯
⊆
(
R
∖
R
)
(
R
∖
R
)
¯
{\displaystyle \equiv R^{\textsf {T}}{\bar {R}}\subseteq {\overline {(R\backslash R)(R\backslash R)}}}
(Schröder's rule)
≡
(
R
∖
R
)
(
R
∖
R
)
⊆
R
T
R
¯
¯
{\displaystyle \equiv (R\backslash R)(R\backslash R)\subseteq {\overline {R^{\textsf {T}}{\bar {R}}}}}
(complementation)
≡
(
R
∖
R
)
(
R
∖
R
)
⊆
R
∖
R
.
{\displaystyle \equiv (R\backslash R)(R\backslash R)\subseteq R\backslash R.}
(definition)
The inclusion relation Ω on the power set of
U
{\displaystyle U}
can be obtained in this way from the membership relation
∈
{\displaystyle \in }
on subsets of
U
{\displaystyle U}
:
Ω
=
∋
∈
¯
¯
=∈
∖
∈
.
{\displaystyle \Omega ={\overline {\ni {\bar {\in }}}}=\in \backslash \in .}
: 283
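The closed form for the left residual makes these properties checkable numerically on Boolean matrices; a sketch, assuming numpy, that tests reflexivity, transitivity, and the defining inclusion R(R∖R) ⊆ R on an arbitrary relation:

```python
import numpy as np

def left_residual(R):
    """R\\R = complement(R^T ; complement(R)) for a Boolean matrix R."""
    return ~((R.T.astype(int) @ (~R).astype(int)) > 0)

compose = lambda P, Q: (P.astype(int) @ Q.astype(int)) > 0

rng = np.random.default_rng(0)
R = rng.random((5, 5)) < 0.4        # an arbitrary relation
X = left_residual(R)

print(bool(np.all(np.diag(X))))             # reflexive: True
print(bool(np.all(compose(X, X) <= X)))     # transitive: True
print(bool(np.all(compose(R, X) <= R)))     # R(R\R) <= R: True
```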
== Fringe of a relation ==
Given a relation R, its fringe is the sub-relation defined as
{\displaystyle \operatorname {fringe} (R)=R\cap {\overline {R{\bar {R}}^{\textsf {T}}R}}.}
When R is a partial identity relation, difunctional, or a block diagonal relation, then fringe(R) = R. Otherwise the fringe operator selects a boundary sub-relation described in terms of its logical matrix:
fringe(R) is the side diagonal if R is an upper right triangular linear order or strict order.
fringe(R) is the block fringe if R is irreflexive (R ⊆ Ī) or upper right block triangular.
fringe(R) is a sequence of boundary rectangles when R is of Ferrers type.
On the other hand, fringe(R) = ∅ when R is a dense, linear, strict order.
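A sketch of the fringe operator on Boolean matrices, illustrating the side-diagonal case for a strict upper triangular order (numpy assumed):

```python
import numpy as np

def fringe(R):
    """fringe(R) = R intersect complement(R ; complement(R)^T ; R)."""
    m = lambda P, Q: (P.astype(int) @ Q.astype(int)) > 0
    return R & ~m(m(R, (~R).T), R)

# strict (upper right triangular) order on 4 points:
R = np.triu(np.ones((4, 4), dtype=bool), k=1)
print(fringe(R).astype(int))   # side diagonal: ones exactly at (i, i+1)
```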
== Mathematical heaps ==
Given two sets A and B, the set of binary relations between them, {\displaystyle {\mathcal {B}}(A,B)}, can be equipped with a ternary operation {\displaystyle [a,b,c]=ab^{\textsf {T}}c} where bᵀ denotes the converse relation of b. In 1953 Viktor Wagner used properties of this ternary operation to define semiheaps, heaps, and generalized heaps. The contrast of heterogeneous and homogeneous relations is highlighted by these definitions:
There is a pleasant symmetry in Wagner's work between heaps, semiheaps, and generalised heaps on the one hand, and groups, semigroups, and generalised groups on the other. Essentially, the various types of semiheaps appear whenever we consider binary relations (and partial one-one mappings) between different sets A and B, while the various types of semigroups appear in the case where A = B.
== See also ==
== Notes ==
== References ==
== Bibliography ==
Schmidt, Gunther (2010). Relational Mathematics. Berlin: Cambridge University Press. ISBN 9780511778810.
Schmidt, Gunther; Ströhlein, Thomas (2012). "Chapter 3: Heterogeneous relations". Relations and Graphs: Discrete Mathematics for Computer Scientists. Springer Science & Business Media. ISBN 978-3-642-77968-8.
Ernst Schröder (1895) Algebra der Logik, Band III, via Internet Archive
Codd, Edgar Frank (1990). The Relational Model for Database Management: Version 2 (PDF). Boston: Addison-Wesley. ISBN 978-0201141924. Archived (PDF) from the original on 2022-10-09.
Enderton, Herbert (1977). Elements of Set Theory. Boston: Academic Press. ISBN 978-0-12-238440-0.
Kilp, Mati; Knauer, Ulrich; Mikhalev, Alexander (2000). Monoids, Acts and Categories: with Applications to Wreath Products and Graphs. Berlin: De Gruyter. ISBN 978-3-11-015248-7.
Van Gasteren, Antonetta (1990). On the Shape of Mathematical Arguments. Berlin: Springer. ISBN 9783540528494.
Peirce, Charles Sanders (1873). "Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic". Memoirs of the American Academy of Arts and Sciences. 9 (2): 317–378. Bibcode:1873MAAAS...9..317P. doi:10.2307/25058006. hdl:2027/hvd.32044019561034. JSTOR 25058006. Retrieved 2020-05-05.
Schmidt, Gunther (2010). Relational Mathematics. Cambridge: Cambridge University Press. ISBN 978-0-521-76268-7.
== External links ==
"Binary relation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Difunctional |
In mathematical logic, algebraic semantics is a formal semantics based on algebras studied as part of algebraic logic. For example, the modal logic S4 is characterized by the class of topological boolean algebras—that is, boolean algebras with an interior operator. Other modal logics are characterized by various other algebras with operators. The class of boolean algebras characterizes classical propositional logic, and the class of Heyting algebras propositional intuitionistic logic. MV-algebras are the algebraic semantics of Łukasiewicz logic.
== See also ==
Algebraic semantics (computer science)
Lindenbaum–Tarski algebra
== Further reading ==
Josep Maria Font; Ramón Jansana (1996). A general algebraic semantics for sentential logics. Springer-Verlag. ISBN 9783540616993. (2nd edition published by ASL in 2009) open access at Project Euclid
W.J. Blok; Don Pigozzi (1989). Algebraizable logics. American Mathematical Society. ISBN 0821824597.
Janusz Czelakowski (2001). Protoalgebraic logics. Springer. ISBN 9780792369400.
J. Michael Dunn; Gary M. Hardegree (2001). Algebraic methods in philosophical logic. Oxford University Press. ISBN 9780198531920. Good introduction for readers with prior exposure to non-classical logics but without much background in order theory and/or universal algebra; the book covers these prerequisites at length. The book, however, has been criticized for poor and sometimes incorrect presentation of abstract algebraic logic results. [1] | Wikipedia/Algebraic_semantics_(mathematical_logic) |
In mathematics, the notion of cylindric algebra, developed by Alfred Tarski, arises naturally in the algebraization of first-order logic with equality. This is comparable to the role Boolean algebras play for propositional logic. Cylindric algebras are Boolean algebras equipped with additional cylindrification operations that model quantification and equality. They differ from polyadic algebras in that the latter do not model equality.
The cylindric algebra should not be confused with the measure theoretic concept cylindrical algebra that arises in the study of cylinder set measures and the cylindrical σ-algebra.
== Definition of a cylindric algebra ==
A cylindric algebra of dimension α (where α is any ordinal number) is an algebraic structure {\displaystyle (A,+,\cdot ,-,0,1,c_{\kappa },d_{\kappa \lambda })_{\kappa ,\lambda <\alpha }} such that {\displaystyle (A,+,\cdot ,-,0,1)} is a Boolean algebra, c_κ a unary operator on A for every κ (called a cylindrification), and d_κλ a distinguished element of A for every κ and λ (called a diagonal), such that the following hold:
(C1) {\displaystyle c_{\kappa }0=0}
(C2) {\displaystyle x\leq c_{\kappa }x}
(C3) {\displaystyle c_{\kappa }(x\cdot c_{\kappa }y)=c_{\kappa }x\cdot c_{\kappa }y}
(C4) {\displaystyle c_{\kappa }c_{\lambda }x=c_{\lambda }c_{\kappa }x}
(C5) {\displaystyle d_{\kappa \kappa }=1}
(C6) If {\displaystyle \kappa \notin \{\lambda ,\mu \}}, then {\displaystyle d_{\lambda \mu }=c_{\kappa }(d_{\lambda \kappa }\cdot d_{\kappa \mu })}
(C7) If {\displaystyle \kappa \neq \lambda }, then {\displaystyle c_{\kappa }(d_{\kappa \lambda }\cdot x)\cdot c_{\kappa }(d_{\kappa \lambda }\cdot -x)=0}
Assuming a presentation of first-order logic without function symbols, the operator c_κ x models existential quantification over variable κ in formula x, while the operator d_κλ models the equality of variables κ and λ. Hence, reformulated using standard logical notations, the axioms read as
(C1) {\displaystyle \exists \kappa .{\mathit {false}}\iff {\mathit {false}}}
(C2) {\displaystyle x\implies \exists \kappa .x}
(C3) {\displaystyle \exists \kappa .(x\wedge \exists \kappa .y)\iff (\exists \kappa .x)\wedge (\exists \kappa .y)}
(C4) {\displaystyle \exists \kappa \exists \lambda .x\iff \exists \lambda \exists \kappa .x}
(C5) {\displaystyle \kappa =\kappa \iff {\mathit {true}}}
(C6) If κ is a variable different from both λ and μ, then {\displaystyle \lambda =\mu \iff \exists \kappa .(\lambda =\kappa \wedge \kappa =\mu )}
(C7) If κ and λ are different variables, then {\displaystyle \exists \kappa .(\kappa =\lambda \wedge x)\wedge \exists \kappa .(\kappa =\lambda \wedge \neg x)\iff {\mathit {false}}}
== Cylindric set algebras ==
A cylindric set algebra of dimension α is an algebraic structure {\displaystyle (A,\cup ,\cap ,-,\emptyset ,X^{\alpha },c_{\kappa },d_{\kappa \lambda })_{\kappa ,\lambda <\alpha }} such that {\displaystyle \langle X^{\alpha },A\rangle } is a field of sets, c_κ S is given by {\displaystyle \{y\in X^{\alpha }\mid \exists x\in S\ \forall \beta \neq \kappa \ y(\beta )=x(\beta )\}}, and d_κλ is given by {\displaystyle \{x\in X^{\alpha }\mid x(\kappa )=x(\lambda )\}}. It necessarily validates the axioms C1–C7 of a cylindric algebra, with ∪ instead of +, ∩ instead of ⋅, set complement for complement, empty set as 0, X^α as the unit, and ⊆ instead of ≤. The set X is called the base.
A representation of a cylindric algebra is an isomorphism from that algebra to a cylindric set algebra. Not every cylindric algebra has a representation as a cylindric set algebra. It is easier to connect the semantics of first-order predicate logic with cylindric set algebra. (For more details, see § Further reading.)
== Generalizations ==
Cylindric algebras have been generalized to the case of many-sorted logic (Caleiro and Gonçalves 2006), which allows for a better modeling of the duality between first-order formulas and terms.
== Relation to monadic Boolean algebra ==
When α = 1 and κ, λ are restricted to being only 0, then c_κ becomes ∃, the diagonals can be dropped out, and the following theorem of cylindric algebra (Pinter 1973):
{\displaystyle c_{\kappa }(x+y)=c_{\kappa }x+c_{\kappa }y}
turns into the axiom
{\displaystyle \exists (x+y)=\exists x+\exists y}
of monadic Boolean algebra. The axiom (C4) drops out (becomes a tautology). Thus monadic Boolean algebra can be seen as a restriction of cylindric algebra to the one-variable case.
== See also ==
Abstract algebraic logic
Lambda calculus and Combinatory logic—other approaches to modelling quantification and eliminating variables
Hyperdoctrines are a categorical formulation of cylindric algebras
Relation algebras (RA)
Polyadic algebra
Cylindrical algebraic decomposition
== Notes ==
== References ==
Charles Pinter (1973). "A Simple Algebra of First Order Logic". Notre Dame Journal of Formal Logic. XIV: 361–366.
Leon Henkin, J. Donald Monk, and Alfred Tarski (1971) Cylindric Algebras, Part I. North-Holland. ISBN 978-0-7204-2043-2.
Leon Henkin, J. Donald Monk, and Alfred Tarski (1985) Cylindric Algebras, Part II. North-Holland.
Robin Hirsch and Ian Hodkinson (2002) Relation algebras by games Studies in logic and the foundations of mathematics, North-Holland
Carlos Caleiro, Ricardo Gonçalves (2006). "On the algebraization of many-sorted logics" (PDF). In J. Fiadeiro and P.-Y. Schobbens (ed.). Proc. 18th int. conf. on Recent trends in algebraic development techniques (WADT). LNCS. Vol. 4409. Springer. pp. 21–36. ISBN 978-3-540-71997-7.
== Further reading ==
Imieliński, T.; Lipski, W. (1984). "The relational model of data and cylindric algebras". Journal of Computer and System Sciences. 28: 80–102. doi:10.1016/0022-0000(84)90077-1.
== External links ==
example of cylindrical algebra by CWoo on planetmath.org | Wikipedia/Cylindric_algebra |
In algebra and logic, a modal algebra is a structure {\displaystyle \langle A,\land ,\lor ,-,0,1,\Box \rangle } such that {\displaystyle \langle A,\land ,\lor ,-,0,1\rangle } is a Boolean algebra, □ is a unary operation on A satisfying {\displaystyle \Box 1=1} and {\displaystyle \Box (x\land y)=\Box x\land \Box y} for all x, y in A.
Modal algebras provide models of propositional modal logics in the same way as Boolean algebras are models of classical logic. In particular, the variety of all modal algebras is the equivalent algebraic semantics of the modal logic K in the sense of abstract algebraic logic, and the lattice of its subvarieties is dually isomorphic to the lattice of normal modal logics.
Stone's representation theorem can be generalized to the Jónsson–Tarski duality, which ensures that each modal algebra can be represented as the algebra of admissible sets in a modal general frame.
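The algebra-from-frame direction is easy to illustrate concretely: any relation R on a set W of worlds induces a □ on the power set of W, with □S the set of worlds all of whose R-successors lie in S, and the two modal-algebra equations can then be checked exhaustively. A sketch:

```python
from itertools import chain, combinations

W = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}                    # an accessibility relation on W

def box(S):
    """Necessity: worlds all of whose R-successors lie in S."""
    return {w for w in W if all(v in S for (u, v) in R if u == w)}

subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(W), k) for k in range(len(W) + 1))]

print(box(W) == W)                              # Box 1 = 1: True
print(all(box(x & y) == box(x) & box(y)         # Box(x AND y) = Box x AND Box y
          for x in subsets for y in subsets))   # True
```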
A Magari algebra (or diagonalizable algebra) is a modal algebra satisfying {\displaystyle \Box (-\Box x\lor x)=\Box x}. Magari algebras correspond to provability logic.
== See also ==
Interior algebra
Heyting algebra
== References ==
A. Chagrov and M. Zakharyaschev, Modal Logic, Oxford Logic Guides vol. 35, Oxford University Press, 1997. ISBN 0-19-853779-4 | Wikipedia/Modal_algebra |
In abstract algebra, the field of fractions of an integral domain is the smallest field in which it can be embedded. The construction of the field of fractions is modeled on the relationship between the integral domain of integers and the field of rational numbers. Intuitively, it consists of ratios between integral domain elements.
The field of fractions of an integral domain R is sometimes denoted by Frac(R) or Quot(R), and the construction is sometimes also called the fraction field, field of quotients, or quotient field of R. All four are in common usage, but are not to be confused with the quotient of a ring by an ideal, which is a quite different concept. For a commutative ring that is not an integral domain, the analogous construction is called the localization or ring of quotients.
== Definition ==
Given an integral domain R and letting {\displaystyle R^{*}=R\setminus \{0\}}, we define an equivalence relation on {\displaystyle R\times R^{*}} by letting (n, d) ∼ (m, b) whenever nb = md. We denote the equivalence class of (n, d) by {\displaystyle {\frac {n}{d}}}. This notion of equivalence is motivated by the rational numbers ℚ, which have the same property with respect to the underlying ring ℤ of integers.
Then the field of fractions is the set {\displaystyle {\text{Frac}}(R)=(R\times R^{*})/\sim } with addition given by
{\displaystyle {\frac {n}{d}}+{\frac {m}{b}}={\frac {nb+md}{db}}}
and multiplication given by
{\displaystyle {\frac {n}{d}}\cdot {\frac {m}{b}}={\frac {nm}{db}}.}
One may check that these operations are well-defined and that, for any integral domain R, Frac(R) is indeed a field. In particular, for n, d ≠ 0, the multiplicative inverse of n/d is as expected: {\displaystyle {\frac {d}{n}}\cdot {\frac {n}{d}}=1}.
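The definition translates almost verbatim into code. A minimal sketch over R = ℤ (so Frac(R) is ℚ), with equality tested through the defining relation nb = md rather than by reducing to lowest terms:

```python
class Frac:
    """An element n/d of Frac(R) for R = Z; the denominator d must be nonzero."""
    def __init__(self, n, d):
        assert d != 0
        self.n, self.d = n, d
    def __eq__(self, other):                  # (n,d) ~ (m,b)  iff  nb = md
        return self.n * other.d == other.n * self.d
    def __add__(self, other):                 # n/d + m/b = (nb + md)/(db)
        return Frac(self.n * other.d + other.n * self.d, self.d * other.d)
    def __mul__(self, other):                 # (n/d)(m/b) = nm/(db)
        return Frac(self.n * other.n, self.d * other.d)

print(Frac(1, 2) + Frac(1, 3) == Frac(5, 6))   # True
print(Frac(2, 3) * Frac(3, 2) == Frac(1, 1))   # d/n * n/d = 1: True
```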
The embedding of R in Frac(R) maps each n in R to the fraction {\displaystyle {\frac {en}{e}}} for any nonzero e ∈ R (the equivalence class is independent of the choice of e). This is modeled on the identity n/1 = n.
The field of fractions of R is characterized by the following universal property:
if h : R → F is an injective ring homomorphism from R into a field F, then there exists a unique ring homomorphism g : Frac(R) → F that extends h.
There is a categorical interpretation of this construction. Let C be the category of integral domains and injective ring maps. The functor from C to the category of fields that takes every integral domain to its fraction field and every homomorphism to the induced map on fields (which exists by the universal property) is the left adjoint of the inclusion functor from the category of fields to C. Thus the category of fields (which is a full subcategory) is a reflective subcategory of C.
A multiplicative identity is not required for the role of the integral domain; this construction can be applied to any nonzero commutative rng R with no nonzero zero divisors. The embedding is given by {\displaystyle r\mapsto {\frac {rs}{s}}} for any nonzero s ∈ R.
== Examples ==
The field of fractions of the ring of integers is the field of rationals: ℚ = Frac(ℤ).
Let {\displaystyle R:=\{a+b\mathrm {i} \mid a,b\in \mathbb {Z} \}} be the ring of Gaussian integers. Then {\displaystyle \operatorname {Frac} (R)=\{c+d\mathrm {i} \mid c,d\in \mathbb {Q} \}}, the field of Gaussian rationals.
The field of fractions of a field is canonically isomorphic to the field itself.
Given a field K, the field of fractions of the polynomial ring in one indeterminate K[X] (which is an integral domain) is called the field of rational functions, field of rational fractions, or field of rational expressions and is denoted K(X).
The field of fractions of the convolution ring of half-line functions yields a space of operators, including the Dirac delta function, differential operator, and integral operator. This construction gives an alternate representation of the Laplace transform that does not depend explicitly on an integral transform.
== Generalizations ==
=== Localization ===
For any commutative ring R and any multiplicative set S in R, the localization {\displaystyle S^{-1}R} is the commutative ring consisting of fractions {\displaystyle {\frac {r}{s}}} with r ∈ R and s ∈ S, where now (r, s) is equivalent to (r′, s′) if and only if there exists t ∈ S such that {\displaystyle t(rs'-r's)=0}.
Two special cases of this are notable:
If S is the complement of a prime ideal P, then S⁻¹R is also denoted R_P. When R is an integral domain and P is the zero ideal, R_P is the field of fractions of R.
If S is the set of non-zero-divisors in R, then S⁻¹R is called the total quotient ring. The total quotient ring of an integral domain is its field of fractions, but the total quotient ring is defined for any commutative ring.
Note that it is permitted for S to contain 0, but in that case S⁻¹R will be the trivial ring.
=== Semifield of fractions ===
The semifield of fractions of a commutative semiring in which every nonzero element is (multiplicatively) cancellative is the smallest semifield in which it can be embedded. (Note that, unlike the case of rings, a semiring with no zero divisors can still have nonzero elements that are not cancellative. For example, let 𝕋 denote the tropical semiring and let R = 𝕋[X] be the polynomial semiring over 𝕋. Then R has no zero divisors, but the element 1 + X is not cancellative because {\displaystyle (1+X)(1+X+X^{2})=1+X+X^{2}+X^{3}=(1+X)(1+X^{2})}.)
The elements of the semifield of fractions of the commutative semiring R are equivalence classes written as a/b with a and b in R and b ≠ 0.
== See also ==
Ore condition; condition related to constructing fractions in the noncommutative case.
Total ring of fractions
== References == | Wikipedia/Field_of_rational_functions |
In mathematics, the modular lambda function λ(τ) is a highly symmetric holomorphic function on the complex upper half-plane. It is invariant under the fractional linear action of the congruence group Γ(2), and generates the function field of the corresponding quotient, i.e., it is a Hauptmodul for the modular curve X(2). Over any point τ, its value can be described as a cross ratio of the branch points of a ramified double cover of the projective line by the elliptic curve
{\displaystyle \mathbb {C} /\langle 1,\tau \rangle }, where the map is defined as the quotient by the [−1] involution.
The q-expansion, where {\displaystyle q=e^{\pi i\tau }} is the nome, is given by:
{\displaystyle \lambda (\tau )=16q-128q^{2}+704q^{3}-3072q^{4}+11488q^{5}-38400q^{6}+\dots } OEIS: A115977
By symmetrizing the lambda function under the canonical action of the symmetric group S3 on X(2), and then normalizing suitably, one obtains a function on the upper half-plane that is invariant under the full modular group {\displaystyle \operatorname {SL} _{2}(\mathbb {Z} )}, and it is in fact Klein's modular j-invariant.
== Modular properties ==
The function λ(τ) is invariant under the group generated by
{\displaystyle \tau \mapsto \tau +2\ ;\ \tau \mapsto {\frac {\tau }{1-2\tau }}\ .}
The generators of the modular group act by
{\displaystyle \tau \mapsto \tau +1\ :\ \lambda \mapsto {\frac {\lambda }{\lambda -1}}\,;}
{\displaystyle \tau \mapsto -{\frac {1}{\tau }}\ :\ \lambda \mapsto 1-\lambda \ .}
Consequently, the action of the modular group on λ(τ) is that of the anharmonic group, giving the six values of the cross-ratio:
{\displaystyle \left\lbrace {\lambda ,{\frac {1}{1-\lambda }},{\frac {\lambda -1}{\lambda }},{\frac {1}{\lambda }},{\frac {\lambda }{\lambda -1}},1-\lambda }\right\rbrace \ .}
== Relations to other functions ==
It is the square of the elliptic modulus, that is, {\displaystyle \lambda (\tau )=k^{2}(\tau )}. In terms of the Dedekind eta function η(τ) and theta functions,
{\displaystyle \lambda (\tau )={\Bigg (}{\frac {{\sqrt {2}}\,\eta ({\tfrac {\tau }{2}})\eta ^{2}(2\tau )}{\eta ^{3}(\tau )}}{\Bigg )}^{8}={\frac {16}{\left({\frac {\eta (\tau /2)}{\eta (2\tau )}}\right)^{8}+16}}={\frac {\theta _{2}^{4}(\tau )}{\theta _{3}^{4}(\tau )}}}
and,
{\displaystyle {\frac {1}{{\big (}\lambda (\tau ){\big )}^{1/4}}}-{\big (}\lambda (\tau ){\big )}^{1/4}={\frac {1}{2}}\left({\frac {\eta ({\tfrac {\tau }{4}})}{\eta (\tau )}}\right)^{4}=2\,{\frac {\theta _{4}^{2}({\tfrac {\tau }{2}})}{\theta _{2}^{2}({\tfrac {\tau }{2}})}}}
where
{\displaystyle \theta _{2}(\tau )=\sum _{n=-\infty }^{\infty }e^{\pi i\tau (n+1/2)^{2}}}
{\displaystyle \theta _{3}(\tau )=\sum _{n=-\infty }^{\infty }e^{\pi i\tau n^{2}}}
{\displaystyle \theta _{4}(\tau )=\sum _{n=-\infty }^{\infty }(-1)^{n}e^{\pi i\tau n^{2}}}
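The theta quotient gives a practical way to evaluate λ numerically; a sketch using mpmath (whose jtheta takes the nome q = e^{πiτ}), checked against the known special values λ(i) = 1/2 and λ(2i) = (√2 − 1)⁴:

```python
from mpmath import mp, mpc, jtheta, exp, pi

mp.dps = 30

def lam(tau):
    """lambda(tau) = theta2(tau)^4 / theta3(tau)^4, nome q = e^(pi i tau)."""
    q = exp(pi * mpc(0, 1) * tau)
    return (jtheta(2, 0, q) / jtheta(3, 0, q)) ** 4

print(lam(mpc(0, 1)))    # 0.5, since lambda(i) = 1/2
print(lam(mpc(0, 2)))    # 0.02943725... = (sqrt(2) - 1)^4
```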
In terms of the half-periods of Weierstrass's elliptic functions, let [ω₁, ω₂] be a fundamental pair of periods with τ = ω₂/ω₁. With
{\displaystyle e_{1}=\wp \left({\frac {\omega _{1}}{2}}\right),\quad e_{2}=\wp \left({\frac {\omega _{2}}{2}}\right),\quad e_{3}=\wp \left({\frac {\omega _{1}+\omega _{2}}{2}}\right)}
we have
{\displaystyle \lambda ={\frac {e_{3}-e_{2}}{e_{1}-e_{2}}}\,.}
Since the three half-period values are distinct, this shows that λ does not take the value 0 or 1.
The relation to the j-invariant is
{\displaystyle j(\tau )={\frac {256(1-\lambda (1-\lambda ))^{3}}{(\lambda (1-\lambda ))^{2}}}={\frac {256(1-\lambda +\lambda ^{2})^{3}}{\lambda ^{2}(1-\lambda )^{2}}}\ .}
which is the j-invariant of the elliptic curve of Legendre form
{\displaystyle y^{2}=x(x-1)(x-\lambda )}
Given m ∈ ℂ∖{0, 1}, let
{\displaystyle \tau =i{\frac {K\{1-m\}}{K\{m\}}}}
where K is the complete elliptic integral of the first kind with parameter m = k². Then λ(τ) = m.
== Modular equations ==
The modular equation of degree p (where p is a prime number) is an algebraic equation in λ(pτ) and λ(τ). If λ(pτ) = u⁸ and λ(τ) = v⁸, the modular equations of degrees p = 2, 3, 5, 7 are, respectively,
{\displaystyle (1+u^{4})^{2}v^{8}-4u^{4}=0,}
{\displaystyle u^{4}-v^{4}+2uv(1-u^{2}v^{2})=0,}
{\displaystyle u^{6}-v^{6}+5u^{2}v^{2}(u^{2}-v^{2})+4uv(1-u^{4}v^{4})=0,}
{\displaystyle (1-u^{8})(1-v^{8})-(1-uv)^{8}=0.}
The quantity v (and hence u) can be thought of as a holomorphic function on the upper half-plane Im τ > 0:
{\displaystyle {\begin{aligned}v&=\prod _{k=1}^{\infty }\tanh {\frac {(k-1/2)\pi i}{\tau }}={\sqrt {2}}e^{\pi i\tau /8}{\frac {\sum _{k\in \mathbb {Z} }e^{(2k^{2}+k)\pi i\tau }}{\sum _{k\in \mathbb {Z} }e^{k^{2}\pi i\tau }}}\\&={\cfrac {{\sqrt {2}}e^{\pi i\tau /8}}{1+{\cfrac {e^{\pi i\tau }}{1+e^{\pi i\tau }+{\cfrac {e^{2\pi i\tau }}{1+e^{2\pi i\tau }+{\cfrac {e^{3\pi i\tau }}{1+e^{3\pi i\tau }+\ddots }}}}}}}}\end{aligned}}}
Since λ(i) = 1/2, the modular equations can be used to give algebraic values of λ(pi) for any prime p. The algebraic values of λ(ni) are also given by
{\displaystyle \lambda (ni)=\prod _{k=1}^{n/2}\operatorname {sl} ^{8}{\frac {(2k-1)\varpi }{2n}}\quad (n\,{\text{even}})}
{\displaystyle \lambda (ni)={\frac {1}{2^{n}}}\prod _{k=1}^{n-1}\left(1-\operatorname {sl} ^{2}{\frac {k\varpi }{n}}\right)^{2}\quad (n\,{\text{odd}})}
where sl is the lemniscate sine and ϖ is the lemniscate constant.
== Lambda-star ==
=== Definition and computation of lambda-star ===
The function λ*(x) (where x ∈ ℝ⁺) gives the value of the elliptic modulus k for which the complete elliptic integral of the first kind K(k) and its complementary counterpart {\displaystyle K({\sqrt {1-k^{2}}})} are related by the following expression:
{\displaystyle {\frac {K\left[{\sqrt {1-\lambda ^{*}(x)^{2}}}\right]}{K[\lambda ^{*}(x)]}}={\sqrt {x}}}
The values of λ*(x) can be computed as follows:
{\displaystyle \lambda ^{*}(x)={\frac {\theta _{2}^{2}(i{\sqrt {x}})}{\theta _{3}^{2}(i{\sqrt {x}})}}}
{\displaystyle \lambda ^{*}(x)=\left[\sum _{a=-\infty }^{\infty }\exp[-(a+1/2)^{2}\pi {\sqrt {x}}]\right]^{2}\left[\sum _{a=-\infty }^{\infty }\exp(-a^{2}\pi {\sqrt {x}})\right]^{-2}}
{\displaystyle \lambda ^{*}(x)=\left[\sum _{a=-\infty }^{\infty }\operatorname {sech} [(a+1/2)\pi {\sqrt {x}}]\right]\left[\sum _{a=-\infty }^{\infty }\operatorname {sech} (a\pi {\sqrt {x}})\right]^{-1}}
The functions λ* and λ are related to each other in this way:
{\displaystyle \lambda ^{*}(x)={\sqrt {\lambda (i{\sqrt {x}})}}}
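A direct numerical check of the series formulas, using the first singular value λ*(1) = 1/√2 (forced by K(k′)/K(k) = 1, hence k = k′) and λ*(4) = 3 − 2√2; a sketch assuming mpmath:

```python
from mpmath import mp, mpf, exp, pi, sqrt, nsum, inf

mp.dps = 25

def lambda_star(x):
    """Theta-series formula: lambda*(x) = theta2^2 / theta3^2 at i*sqrt(x)."""
    t = pi * sqrt(mpf(x))
    th2 = nsum(lambda a: exp(-(a + mpf(1) / 2) ** 2 * t), [-inf, inf])
    th3 = nsum(lambda a: exp(-a ** 2 * t), [-inf, inf])
    return (th2 / th3) ** 2

print(lambda_star(1))    # 0.70710678... = 1/sqrt(2)
print(lambda_star(4))    # 0.17157287... = 3 - 2*sqrt(2)
```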
=== Properties of lambda-star ===
Every λ* value of a positive rational number is a positive algebraic number:
{\displaystyle \lambda ^{*}(x\in \mathbb {Q} ^{+})\in \mathbb {A} ^{+}.}
K(λ*(x)) and E(λ*(x)) (the complete elliptic integral of the second kind) can be expressed in closed form in terms of the gamma function for any x ∈ ℚ⁺, as Selberg and Chowla proved in 1949.
The following expression is valid for all n ∈ ℕ:
{\displaystyle {\sqrt {n}}=\sum _{a=1}^{n}\operatorname {dn} \left[{\frac {2a}{n}}K\left[\lambda ^{*}\left({\frac {1}{n}}\right)\right];\lambda ^{*}\left({\frac {1}{n}}\right)\right]}
where dn is the Jacobi elliptic function delta amplitudinis with modulus k.
By knowing one λ* value, this formula can be used to compute related λ* values:
{\displaystyle \lambda ^{*}(n^{2}x)=\lambda ^{*}(x)^{n}\prod _{a=1}^{n}\operatorname {sn} \left\{{\frac {2a-1}{n}}K[\lambda ^{*}(x)];\lambda ^{*}(x)\right\}^{2}}
where n ∈ ℕ and sn is the Jacobi elliptic function sinus amplitudinis with modulus k.
Further relations:
{\displaystyle \lambda ^{*}(x)^{2}+\lambda ^{*}(1/x)^{2}=1}
{\displaystyle [\lambda ^{*}(x)+1][\lambda ^{*}(4/x)+1]=2}
{\displaystyle \lambda ^{*}(4x)={\frac {1-{\sqrt {1-\lambda ^{*}(x)^{2}}}}{1+{\sqrt {1-\lambda ^{*}(x)^{2}}}}}=\tan \left\{{\frac {1}{2}}\arcsin[\lambda ^{*}(x)]\right\}^{2}}
{\displaystyle \lambda ^{*}(x)-\lambda ^{*}(9x)=2[\lambda ^{*}(x)\lambda ^{*}(9x)]^{1/4}-2[\lambda ^{*}(x)\lambda ^{*}(9x)]^{3/4}}
{\displaystyle {\begin{aligned}&a^{6}-f^{6}=2af+2a^{5}f^{5}\,&\left(a=\left[{\frac {2\lambda ^{*}(x)}{1-\lambda ^{*}(x)^{2}}}\right]^{1/12}\right)&\left(f=\left[{\frac {2\lambda ^{*}(25x)}{1-\lambda ^{*}(25x)^{2}}}\right]^{1/12}\right)\\&a^{8}+b^{8}-7a^{4}b^{4}=2{\sqrt {2}}ab+2{\sqrt {2}}a^{7}b^{7}\,&\left(a=\left[{\frac {2\lambda ^{*}(x)}{1-\lambda ^{*}(x)^{2}}}\right]^{1/12}\right)&\left(b=\left[{\frac {2\lambda ^{*}(49x)}{1-\lambda ^{*}(49x)^{2}}}\right]^{1/12}\right)\\&a^{12}-c^{12}=2{\sqrt {2}}(ac+a^{3}c^{3})(1+3a^{2}c^{2}+a^{4}c^{4})(2+3a^{2}c^{2}+2a^{4}c^{4})\,&\left(a=\left[{\frac {2\lambda ^{*}(x)}{1-\lambda ^{*}(x)^{2}}}\right]^{1/12}\right)&\left(c=\left[{\frac {2\lambda ^{*}(121x)}{1-\lambda ^{*}(121x)^{2}}}\right]^{1/12}\right)\\&(a^{2}-d^{2})(a^{4}+d^{4}-7a^{2}d^{2})[(a^{2}-d^{2})^{4}-a^{2}d^{2}(a^{2}+d^{2})^{2}]=8ad+8a^{13}d^{13}\,&\left(a=\left[{\frac {2\lambda ^{*}(x)}{1-\lambda ^{*}(x)^{2}}}\right]^{1/12}\right)&\left(d=\left[{\frac {2\lambda ^{*}(169x)}{1-\lambda ^{*}(169x)^{2}}}\right]^{1/12}\right)\end{aligned}}}
=== Ramanujan's class invariants ===
Ramanujan's class invariants G_n and g_n are defined as
{\displaystyle G_{n}=2^{-1/4}e^{\pi {\sqrt {n}}/24}\prod _{k=0}^{\infty }\left(1+e^{-(2k+1)\pi {\sqrt {n}}}\right),}
{\displaystyle g_{n}=2^{-1/4}e^{\pi {\sqrt {n}}/24}\prod _{k=0}^{\infty }\left(1-e^{-(2k+1)\pi {\sqrt {n}}}\right),}
where n ∈ ℚ⁺. For such n, the class invariants are algebraic numbers. For example
{\displaystyle g_{58}={\sqrt {\frac {5+{\sqrt {29}}}{2}}},\quad g_{190}={\sqrt {({\sqrt {5}}+2)({\sqrt {10}}+3)}}.}
Identities with the class invariants include
{\displaystyle G_{n}=G_{1/n},\quad g_{n}={\frac {1}{g_{4/n}}},\quad g_{4n}=2^{1/4}g_{n}G_{n}.}
The class invariants are very closely related to the Weber modular functions {\displaystyle {\mathfrak {f}}} and {\displaystyle {\mathfrak {f}}_{1}}. These are the relations between lambda-star and the class invariants:
{\displaystyle G_{n}=\sin\{2\arcsin[\lambda ^{*}(n)]\}^{-1/12}=1{\Big /}\left[{\sqrt[{12}]{2\lambda ^{*}(n)}}{\sqrt[{24}]{1-\lambda ^{*}(n)^{2}}}\right]}
{\displaystyle g_{n}=\tan\{2\arctan[\lambda ^{*}(n)]\}^{-1/12}={\sqrt[{12}]{[1-\lambda ^{*}(n)^{2}]/[2\lambda ^{*}(n)]}}}
{\displaystyle \lambda ^{*}(n)=\tan \left\{{\frac {1}{2}}\arctan[g_{n}^{-12}]\right\}={\sqrt {g_{n}^{24}+1}}-g_{n}^{12}}
== Other appearances ==
=== Little Picard theorem ===
The lambda function is used in the original proof of the Little Picard theorem, that an entire non-constant function on the complex plane cannot omit more than one value. This theorem was proved by Picard in 1879. Suppose if possible that f is entire and does not take the values 0 and 1. Since λ is holomorphic, it has a local holomorphic inverse ω defined away from 0,1,∞. Consider the function z → ω(f(z)). By the Monodromy theorem this is holomorphic and maps the complex plane C to the upper half plane. From this it is easy to construct a holomorphic function from C to the unit disc, which by Liouville's theorem must be constant.
=== Moonshine ===
The function τ ↦ 16/λ(2τ) − 8 is the normalized Hauptmodul for the group {\displaystyle \Gamma _{0}(4)}, and its q-expansion {\displaystyle q^{-1}+20q-62q^{3}+\dots }, OEIS: A007248, where {\displaystyle q=e^{2\pi i\tau }}, is the graded character of any element in conjugacy class 4C of the monster group acting on the monster vertex algebra.
== Footnotes ==
== References ==
=== Notes ===
=== Other ===
Abramowitz, Milton; Stegun, Irene A., eds. (1972), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover Publications, ISBN 978-0-486-61272-0, Zbl 0543.33001
Chandrasekharan, K. (1985), Elliptic Functions, Grundlehren der mathematischen Wissenschaften, vol. 281, Springer-Verlag, pp. 108–121, ISBN 3-540-15295-4, Zbl 0575.33001
Conway, John Horton; Norton, Simon (1979), "Monstrous moonshine", Bulletin of the London Mathematical Society, 11 (3): 308–339, doi:10.1112/blms/11.3.308, MR 0554399, Zbl 0424.20010
Rankin, Robert A. (1977), Modular Forms and Functions, Cambridge University Press, ISBN 0-521-21212-X, Zbl 0376.10020
Reinhardt, W. P.; Walker, P. L. (2010), "Elliptic Modular Function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
Borwein, J. M. and Borwein, P. B. Pi & the AGM: A Study in Analytic Number Theory and Computational Complexity. New York: Wiley, pp. 139 and 298, 1987.
Selberg, A. and Chowla, S. "On Epstein's Zeta-Function." J. reine angew. Math. 227, 86-110, 1967.
== External links ==
Modular lambda function at Fungrim | Wikipedia/Modular_lambda_function |
In mathematics, a quintic function is a function of the form
{\displaystyle g(x)=ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f,\,}
where a, b, c, d, e and f are members of a field, typically the rational numbers, the real numbers or the complex numbers, and a is nonzero. In other words, a quintic function is defined by a polynomial of degree five.
Because they have an odd degree, normal quintic functions appear similar to normal cubic functions when graphed, except they may possess one additional local maximum and one additional local minimum. The derivative of a quintic function is a quartic function.
Setting g(x) = 0 and assuming a ≠ 0 produces a quintic equation of the form:
{\displaystyle ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f=0.\,}
Solving quintic equations in terms of radicals (nth roots) was a major problem in algebra from the 16th century, when cubic and quartic equations were solved, until the first half of the 19th century, when the impossibility of such a general solution was proved with the Abel–Ruffini theorem.
== Finding roots of a quintic equation ==
Finding the roots (zeros) of a given polynomial has been a prominent mathematical problem.
Solving linear, quadratic, cubic and quartic equations in terms of radicals and elementary arithmetic operations on the coefficients can always be done, no matter whether the roots are rational or irrational, real or complex; there are formulas that yield the required solutions. However, there is no algebraic expression (that is, in terms of radicals) for the solutions of general quintic equations over the rationals; this statement is known as the Abel–Ruffini theorem, first asserted in 1799 and completely proven in 1824. This result also holds for equations of higher degree. An example of a quintic whose roots cannot be expressed in terms of radicals is x5 − x + 1 = 0.
Numerical approximations of quintic roots can be computed with root-finding algorithms for polynomials. Although some quintics may be solved in terms of radicals, the solution is generally too complicated to be used in practice.
== Solvable quintics ==
Some quintic equations can be solved in terms of radicals. These include the quintic equations defined by a polynomial that is reducible, such as x5 − x4 − x + 1 = (x2 + 1)(x + 1)(x − 1)2. For example, it has been shown that
{\displaystyle x^{5}-x-r=0}
has solutions in radicals if and only if it has an integer solution or r is one of ±15, ±22440, or ±2759640, in which cases the polynomial is reducible.
As solving reducible quintic equations reduces immediately to solving polynomials of lower degree, only irreducible quintic equations are considered in the remainder of this section, and the term "quintic" will refer only to irreducible quintics. A solvable quintic is thus an irreducible quintic polynomial whose roots may be expressed in terms of radicals.
To characterize solvable quintics, and more generally solvable polynomials of higher degree, Évariste Galois developed techniques which gave rise to group theory and Galois theory. Applying these techniques, Arthur Cayley found a general criterion for determining whether any given quintic is solvable. This criterion is the following.
Given the equation
{\displaystyle ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f=0,}
the Tschirnhaus transformation x = y − b/5a, which depresses the quintic (that is, removes the term of degree four), gives the equation
{\displaystyle y^{5}+py^{3}+qy^{2}+ry+s=0,}
where
{\displaystyle {\begin{aligned}p&={\frac {5ac-2b^{2}}{5a^{2}}}\\[4pt]q&={\frac {25a^{2}d-15abc+4b^{3}}{25a^{3}}}\\[4pt]r&={\frac {125a^{3}e-50a^{2}bd+15ab^{2}c-3b^{4}}{125a^{4}}}\\[4pt]s&={\frac {3125a^{4}f-625a^{3}be+125a^{2}b^{2}d-25ab^{3}c+4b^{5}}{3125a^{5}}}\end{aligned}}}
Both quintics are solvable by radicals if and only if either they are factorisable in equations of lower degrees with rational coefficients or the polynomial P2 − 1024 z Δ, named Cayley's resolvent, has a rational root in z, where
{\displaystyle {\begin{aligned}P={}&z^{3}-z^{2}(20r+3p^{2})-z(8p^{2}r-16pq^{2}-240r^{2}+400sq-3p^{4})\\[4pt]&-p^{6}+28p^{4}r-16p^{3}q^{2}-176p^{2}r^{2}-80p^{2}sq+224prq^{2}-64q^{4}\\[4pt]&+4000ps^{2}+320r^{3}-1600rsq\end{aligned}}}
and
{\displaystyle {\begin{aligned}\Delta ={}&-128p^{2}r^{4}+3125s^{4}-72p^{4}qrs+560p^{2}qr^{2}s+16p^{4}r^{3}+256r^{5}+108p^{5}s^{2}\\[4pt]&-1600qr^{3}s+144pq^{2}r^{3}-900p^{3}rs^{2}+2000pr^{2}s^{2}-3750pqs^{3}+825p^{2}q^{2}s^{2}\\[4pt]&+2250q^{2}rs^{2}+108q^{5}s-27q^{4}r^{2}-630pq^{3}rs+16p^{3}q^{3}s-4p^{3}q^{2}r^{2}.\end{aligned}}}
Cayley's result allows us to test if a quintic is solvable. If it is the case, finding its roots is a more difficult problem, which consists of expressing the roots in terms of radicals involving the coefficients of the quintic and the rational root of Cayley's resolvent.
In 1888, George Paxton Young described how to solve a solvable quintic equation, without providing an explicit formula; in 2004, Daniel Lazard wrote out a three-page formula.
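Cayley's criterion is straightforward to mechanize. The sketch below (sympy assumed; helper name illustrative) builds P² − 1024zΔ for a depressed quintic and looks for a rational root by factoring over the rationals; for the solvable x⁵ − 5x + 12 treated below, the resolvent turns out to have the rational root z = 100, while for the unsolvable x⁵ − x + 1 no rational root appears:

```python
from sympy import symbols, Poly, factor_list

z = symbols('z')

def rational_resolvent_roots(p, q, r, s):
    """Linear factors over Q of Cayley's resolvent P^2 - 1024 z Delta
    for the depressed quintic y^5 + p*y^3 + q*y^2 + r*y + s."""
    P = (z**3 - z**2*(20*r + 3*p**2)
         - z*(8*p**2*r - 16*p*q**2 - 240*r**2 + 400*s*q - 3*p**4)
         - p**6 + 28*p**4*r - 16*p**3*q**2 - 176*p**2*r**2 - 80*p**2*s*q
         + 224*p*r*q**2 - 64*q**4 + 4000*p*s**2 + 320*r**3 - 1600*r*s*q)
    D = (-128*p**2*r**4 + 3125*s**4 - 72*p**4*q*r*s + 560*p**2*q*r**2*s
         + 16*p**4*r**3 + 256*r**5 + 108*p**5*s**2 - 1600*q*r**3*s
         + 144*p*q**2*r**3 - 900*p**3*r*s**2 + 2000*p*r**2*s**2
         - 3750*p*q*s**3 + 825*p**2*q**2*s**2 + 2250*q**2*r*s**2
         + 108*q**5*s - 27*q**4*r**2 - 630*p*q**3*r*s + 16*p**3*q**3*s
         - 4*p**3*q**2*r**2)
    resolvent = Poly(P**2 - 1024*z*D, z)
    # a linear factor over Q certifies a rational root
    return [f for f, _ in factor_list(resolvent)[1] if f.degree() == 1]

print(rational_resolvent_roots(0, 0, -5, 12))   # x^5 - 5x + 12: factor z - 100
print(rational_resolvent_roots(0, 0, -1, 1))    # x^5 - x + 1:   [] (unsolvable)
```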
=== Quintics in Bring–Jerrard form ===
There are several parametric representations of solvable quintics of the form x5 + ax + b = 0, called the Bring–Jerrard form.
During the second half of the 19th century, John Stuart Glashan, George Paxton Young, and Carl Runge gave such a parameterization: an irreducible quintic with rational coefficients in Bring–Jerrard form
is solvable if and only if either a = 0 or it may be written
{\displaystyle x^{5}+{\frac {5\mu ^{4}(4\nu +3)}{\nu ^{2}+1}}x+{\frac {4\mu ^{5}(2\nu +1)(4\nu +3)}{\nu ^{2}+1}}=0}
where μ and ν are rational.
In 1994, Blair Spearman and Kenneth S. Williams gave an alternative,
{\displaystyle x^{5}+{\frac {5e^{4}(4c+3)}{c^{2}+1}}x+{\frac {-4e^{5}(2c-11)}{c^{2}+1}}=0.}
The relationship between the 1885 and 1994 parameterizations can be seen by defining the expression
{\displaystyle b={\frac {4}{5}}\left(a+20\pm 2{\sqrt {(20-a)(5+a)}}\right)}
where
{\displaystyle a=5{\tfrac {4\nu +3}{\nu ^{2}+1}}}. Using the negative case of the square root yields, after scaling variables, the first parametrization, while the positive case gives the second.
The substitution
{\displaystyle c=-{\tfrac {m}{\ell ^{5}}},} {\displaystyle e={\tfrac {1}{\ell }}} in the Spearman–Williams parameterization allows one to not exclude the special case a = 0, giving the following result:
If a and b are rational numbers, the equation x5 + ax + b = 0 is solvable by radicals if either its left-hand side is a product of polynomials of degree less than 5 with rational coefficients or there exist two rational numbers ℓ and m such that
{\displaystyle a={\frac {5\ell (3\ell ^{5}-4m)}{m^{2}+\ell ^{10}}}\qquad b={\frac {4(11\ell ^{5}+2m)}{m^{2}+\ell ^{10}}}.}
=== Roots of a solvable quintic ===
A polynomial equation is solvable by radicals if its Galois group is a solvable group. In the case of irreducible quintics, the Galois group is a subgroup of the symmetric group S5 of all permutations of a five element set, which is solvable if and only if it is a subgroup of the group F5, of order 20, generated by the cyclic permutations (1 2 3 4 5) and (1 2 4 3).
If the quintic is solvable, one of the solutions may be represented by an algebraic expression involving a fifth root and at most two square roots, generally nested. The other solutions may then be obtained either by changing the fifth root or by multiplying all the occurrences of the fifth root by the same power of a primitive 5th root of unity, such as
{\displaystyle {\frac {{\sqrt {-10-2{\sqrt {5}}}}+{\sqrt {5}}-1}{4}}.}
In fact, all four primitive fifth roots of unity may be obtained by changing the signs of the square roots appropriately; namely, the expression
{\displaystyle {\frac {\alpha {\sqrt {-10-2\beta {\sqrt {5}}}}+\beta {\sqrt {5}}-1}{4}},}
where α, β ∈ {−1, 1}, yields the four distinct primitive fifth roots of unity.
It follows that one may need four different square roots for writing all the roots of a solvable quintic. Even for the first root that involves at most two square roots, the expression of the solutions in terms of radicals is usually highly complicated. However, when no square root is needed, the form of the first solution may be rather simple, as for the equation x5 − 5x4 + 30x3 − 50x2 + 55x − 21 = 0, for which the only real solution is
{\displaystyle x=1+{\sqrt[{5}]{2}}-\left({\sqrt[{5}]{2}}\right)^{2}+\left({\sqrt[{5}]{2}}\right)^{3}-\left({\sqrt[{5}]{2}}\right)^{4}.}
An example of a more complicated (although small enough to be written here) solution is the unique real root of x5 − 5x + 12 = 0. Let a = √(2φ⁻¹), b = √(2φ), and c = ⁴√5, where φ = (1+√5)/2 is the golden ratio. Then the only real solution x = −1.84208... is given by
{\displaystyle -cx={\sqrt[{5}]{(a+c)^{2}(b-c)}}+{\sqrt[{5}]{(-a+c)(b-c)^{2}}}+{\sqrt[{5}]{(a+c)(b+c)^{2}}}-{\sqrt[{5}]{(-a+c)^{2}(b+c)}}\,,}
or, equivalently, by
{\displaystyle x={\sqrt[{5}]{y_{1}}}+{\sqrt[{5}]{y_{2}}}+{\sqrt[{5}]{y_{3}}}+{\sqrt[{5}]{y_{4}}}\,,}
where the yi are the four roots of the quartic equation
{\displaystyle y^{4}+4y^{3}+{\frac {4}{5}}y^{2}-{\frac {8}{5^{3}}}y-{\frac {1}{5^{5}}}=0\,.}
More generally, if an equation P(x) = 0 of prime degree p with rational coefficients is solvable in radicals, then one can define an auxiliary equation Q(y) = 0 of degree p − 1, also with rational coefficients, such that each root of P is the sum of p-th roots of the roots of Q. These p-th roots were introduced by Joseph-Louis Lagrange, and their products by p are commonly called Lagrange resolvents. The computation of Q and its roots can be used to solve P(x) = 0. However these p-th roots may not be computed independently (this would provide p^(p−1) roots instead of p). Thus a correct solution needs to express all these p-th roots in terms of one of them. Galois theory shows that this is always theoretically possible, even if the resulting formula may be too large to be of any use.
It is possible that some of the roots of Q are rational (as in the first example of this section) or some are zero. In these cases, the formula for the roots is much simpler, as for the solvable de Moivre quintic
{\displaystyle x^{5}+5ax^{3}+5a^{2}x+b=0\,,}
where the auxiliary equation has two zero roots and reduces, by factoring them out, to the quadratic equation
{\displaystyle y^{2}+by-a^{5}=0\,,}
such that the five roots of the de Moivre quintic are given by
{\displaystyle x_{k}=\omega ^{k}{\sqrt[{5}]{y_{i}}}-{\frac {a}{\omega ^{k}{\sqrt[{5}]{y_{i}}}}},}
where yi is any root of the auxiliary quadratic equation and ω is any of the four primitive 5th roots of unity. This can be easily generalized to construct a solvable septic and other odd degrees, not necessarily prime.
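A quick numerical sanity check of this formula, taking a = 1 and b = 2 so that the auxiliary quadratic is y² + 2y − 1 = 0 (plain Python; the values are illustrative and any root y works):

```python
import cmath

a, b = 1.0, 2.0
y = (-b + cmath.sqrt(b * b + 4 * a**5)) / 2      # a root of y^2 + by - a^5 = 0
omega = cmath.exp(2j * cmath.pi / 5)             # a primitive 5th root of unity

for k in range(5):
    u = omega**k * y ** (1 / 5)                  # one choice of 5th root of y
    x = u - a / u
    print(abs(x**5 + 5 * a * x**3 + 5 * a**2 * x + b))   # ~ 0 for every k
```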
=== Other solvable quintics ===
There are infinitely many solvable quintics in Bring–Jerrard form which have been parameterized in a preceding section.
Up to the scaling of the variable, there are exactly five solvable quintics of the shape
{\displaystyle x^{5}+ax^{2}+b}, which are (where s is a scaling factor):
{\displaystyle x^{5}-2s^{3}x^{2}-{\frac {s^{5}}{5}}}
{\displaystyle x^{5}-100s^{3}x^{2}-1000s^{5}}
{\displaystyle x^{5}-5s^{3}x^{2}-3s^{5}}
{\displaystyle x^{5}-5s^{3}x^{2}+15s^{5}}
{\displaystyle x^{5}-25s^{3}x^{2}-300s^{5}}
Paxton Young (1888) gave a number of examples of solvable quintics.
An infinite sequence of solvable quintics may be constructed, whose roots are sums of nth roots of unity, with n = 10k + 1 being a prime number.
There are also two parameterized families of solvable quintics:
The Kondo–Brumer quintic,
{\displaystyle x^{5}+(a-3)\,x^{4}+(-a+b+3)\,x^{3}+(a^{2}-a-1-2b)\,x^{2}+b\,x+a=0}
and the family depending on the parameters
{\displaystyle a,\ell ,m}
{\displaystyle x^{5}-5\,p\left(2\,x^{3}+a\,x^{2}+b\,x\right)-p\,c=0}
where
{\displaystyle p={\tfrac {1}{4}}\left[\,\ell ^{2}(4m^{2}+a^{2})-m^{2}\,\right]\;,}
{\displaystyle b=\ell \,(4m^{2}+a^{2})-5p-2m^{2}\;,}
{\displaystyle c={\tfrac {1}{2}}\left[\,b(a+4m)-p(a-4m)-a^{2}m\,\right]\;.}
=== Casus irreducibilis ===
Analogously to cubic equations, there are solvable quintics which have five real roots all of whose solutions in radicals involve roots of complex numbers. This is casus irreducibilis for the quintic, which is discussed in Dummit.: p.17 Indeed, if an irreducible quintic has all roots real, no root can be expressed purely in terms of real radicals (as is true for all polynomial degrees that are not powers of 2).
== Beyond radicals ==
About 1835, Jerrard demonstrated that quintics can be solved by using ultraradicals (also known as Bring radicals), the unique real root of t5 + t − a = 0 for real numbers a. In 1858, Charles Hermite showed that the Bring radical could be characterized in terms of the Jacobi theta functions and their associated elliptic modular functions, using an approach similar to the more familiar approach of solving cubic equations by means of trigonometric functions. At around the same time, Leopold Kronecker, using group theory, developed a simpler way of deriving Hermite's result, as had Francesco Brioschi. Later, Felix Klein came up with a method that relates the symmetries of the icosahedron, Galois theory, and the elliptic modular functions that are featured in Hermite's solution, giving an explanation for why they should appear at all, and developed his own solution in terms of generalized hypergeometric functions. Similar phenomena occur in degree 7 (septic equations) and 11, as studied by Klein and discussed in Icosahedral symmetry § Related geometries.
=== Solving with Bring radicals ===
A Tschirnhaus transformation, which may be computed by solving a quartic equation, reduces the general quintic equation of the form
{\displaystyle x^{5}+a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}=0\,}
to the Bring–Jerrard normal form x5 − x + t = 0.
The roots of this equation cannot be expressed by radicals. However, in 1858, Charles Hermite published the first known solution of this equation in terms of elliptic functions. At around the same time Francesco Brioschi and Leopold Kronecker came upon equivalent solutions. See Bring radical for details on these solutions and some related ones.
== Application to celestial mechanics ==
Solving for the locations of the Lagrangian points of an astronomical orbit in which the masses of both objects are non-negligible involves solving a quintic.
More precisely, the locations of L2 and L1 are the solutions to the following equations, where the gravitational forces of two masses on a third (for example, Sun and Earth on satellites such as Gaia and the James Webb Space Telescope at L2 and SOHO at L1) provide the satellite's centripetal force necessary to be in a synchronous orbit with Earth around the Sun:
{\displaystyle {\frac {GmM_{S}}{(R\pm r)^{2}}}\pm {\frac {GmM_{E}}{r^{2}}}=m\omega ^{2}(R\pm r)}
The ± sign corresponds to L2 and L1, respectively; G is the gravitational constant, ω the angular velocity, r the distance of the satellite to Earth, R the distance Sun to Earth (that is, the semi-major axis of Earth's orbit), and m, ME, and MS are the respective masses of satellite, Earth, and Sun.
Using Kepler's Third Law
{\displaystyle \omega ^{2}={\frac {4\pi ^{2}}{P^{2}}}={\frac {G(M_{S}+M_{E})}{R^{3}}}}
and rearranging all terms yields the quintic
{\displaystyle ar^{5}+br^{4}+cr^{3}+dr^{2}+er+f=0}
with:
{\displaystyle {\begin{aligned}&a=\pm (M_{S}+M_{E}),\\&b=+(M_{S}+M_{E})3R,\\&c=\pm (M_{S}+M_{E})3R^{2},\\&d=+(M_{E}\mp M_{E})R^{3}\ ({\text{thus }}d=0{\text{ for }}L_{2}),\\&e=\pm M_{E}2R^{4},\\&f=\mp M_{E}R^{5}.\end{aligned}}}
Solving these two quintics yields r = 1.501 × 10^9 m for L2 and r = 1.491 × 10^9 m for L1. The Sun–Earth Lagrangian points L2 and L1 are usually given as 1.5 million km from Earth.
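The same distances can also be checked numerically without forming the quintic explicitly. The following Python sketch (an illustration with approximate Sun–Earth values, not part of the original derivation; all variable names are ad hoc choices) solves the force-balance equation above by bisection:

```python
# Illustrative only: solve the L1/L2 force balance by bisection,
# using rough Sun-Earth values (assumed, not from the text).
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
MS = 1.989e30              # mass of the Sun, kg
ME = 5.972e24              # mass of the Earth, kg
R = 1.496e11               # Sun-Earth distance, m
w2 = G * (MS + ME) / R**3  # omega^2 from Kepler's third law

def balance(r, s):
    """Force balance from the text; s = +1 gives L2, s = -1 gives L1."""
    return G * MS / (R + s * r)**2 + s * G * ME / r**2 - w2 * (R + s * r)

for s, name in ((+1, "L2"), (-1, "L1")):
    lo, hi = 1e8, 1e10                 # bracket known to contain the root
    for _ in range(100):               # plain bisection
        mid = 0.5 * (lo + hi)
        if balance(lo, s) * balance(mid, s) <= 0:
            hi = mid
        else:
            lo = mid
    print(name, lo)                    # ~1.50e9 m (L2), ~1.49e9 m (L1)
```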
If the mass of the smaller object (ME) is much smaller than the mass of the larger object (MS), then the quintic equation can be greatly reduced and L1 and L2 are at approximately the radius of the Hill sphere, given by:
{\displaystyle r\approx R{\sqrt[{3}]{\frac {M_{E}}{3M_{S}}}}}
That also yields r = 1.5 × 10^9 m for satellites at L1 and L2 in the Sun–Earth system.
== See also ==
Sextic equation
Septic function
Theory of equations
Principal equation form
== Notes ==
== References ==
Charles Hermite, "Sur la résolution de l'équation du cinquième degré", Œuvres de Charles Hermite, 2:5–21, Gauthier-Villars, 1908.
Klein, Felix (1888). Lectures on the Icosahedron and the Solution of Equations of the Fifth Degree. Translated by Morrice, George Gavin. Trübner & Co. ISBN 0-486-49528-0.
Leopold Kronecker, "Sur la résolution de l'equation du cinquième degré, extrait d'une lettre adressée à M. Hermite", Comptes Rendus de l'Académie des Sciences, 46:1:1150–1152, 1858.
Blair Spearman and Kenneth S. Williams, "Characterization of solvable quintics x^5 + ax + b", American Mathematical Monthly, 101:986–992 (1994).
Ian Stewart, Galois Theory 2nd Edition, Chapman and Hall, 1989. ISBN 0-412-34550-1. Discusses Galois Theory in general including a proof of insolvability of the general quintic.
Jörg Bewersdorff, Galois theory for beginners: A historical perspective, American Mathematical Society, 2006. ISBN 0-8218-3817-2. Chapter 8 (The solution of equations of the fifth degree at the Wayback Machine (archived 31 March 2010)) gives a description of the solution of solvable quintics x^5 + cx + d.
Victor S. Adamchik and David J. Jeffrey, "Polynomial transformations of Tschirnhaus, Bring and Jerrard," ACM SIGSAM Bulletin, Vol. 37, No. 3, September 2003, pp. 90–94.
Ehrenfried Walter von Tschirnhaus, "A method for removing all intermediate terms from a given equation," ACM SIGSAM Bulletin, Vol. 37, No. 1, March 2003, pp. 1–3.
Lazard, Daniel (2004). "Solving quintics in radicals". In Olav Arnfinn Laudal; Ragni Piene (eds.). The Legacy of Niels Henrik Abel. Berlin. pp. 207–225. ISBN 3-540-43826-2. Archived from the original on January 6, 2005.
Tóth, Gábor (2002), Finite Möbius groups, minimal immersions of spheres, and moduli
== External links ==
Mathworld - Quintic Equation – more details on methods for solving Quintics.
Solving Solvable Quintics – a method for solving solvable quintics due to David S. Dummit.
A method for removing all intermediate terms from a given equation - a recent English translation of Tschirnhaus' 1683 paper.
Bruce Bartlett: The Quintic, the Icosahedron, and Elliptic Curves, AMS Notices (April 2024)
Ergodic theory is a branch of mathematics that studies statistical properties of deterministic dynamical systems; it is the study of ergodicity. In this context, "statistical properties" refers to properties which are expressed through the behavior of time averages of various functions along trajectories of dynamical systems. The notion of deterministic dynamical systems assumes that the equations determining the dynamics do not contain any random perturbations, noise, etc. Thus, the statistics with which we are concerned are properties of the dynamics.
Ergodic theory, like probability theory, is based on general notions of measure theory. Its initial development was motivated by problems of statistical physics.
A central concern of ergodic theory is the behavior of a dynamical system when it is allowed to run for a long time. The first result in this direction is the Poincaré recurrence theorem, which claims that almost all points in any subset of the phase space eventually revisit the set. Systems for which the Poincaré recurrence theorem holds are conservative systems; thus all ergodic systems are conservative.
More precise information is provided by various ergodic theorems which assert that, under certain conditions, the time average of a function along the trajectories exists almost everywhere and is related to the space average. Two of the most important theorems are those of Birkhoff (1931) and von Neumann which assert the existence of a time average along each trajectory. For the special class of ergodic systems, this time average is the same for almost all initial points: statistically speaking, the system that evolves for a long time "forgets" its initial state. Stronger properties, such as mixing and equidistribution, have also been extensively studied.
The problem of metric classification of systems is another important part of the abstract ergodic theory. An outstanding role in ergodic theory and its applications to stochastic processes is played by the various notions of entropy for dynamical systems.
The concepts of ergodicity and the ergodic hypothesis are central to applications of ergodic theory. The underlying idea is that for certain systems the time average of their properties is equal to the average over the entire space. Applications of ergodic theory to other parts of mathematics usually involve establishing ergodicity properties for systems of special kind. In geometry, methods of ergodic theory have been used to study the geodesic flow on Riemannian manifolds, starting with the results of Eberhard Hopf for Riemann surfaces of negative curvature. Markov chains form a common context for applications in probability theory. Ergodic theory has fruitful connections with harmonic analysis, Lie theory (representation theory, lattices in algebraic groups), and number theory (the theory of diophantine approximations, L-functions).
== Ergodic transformations ==
Ergodic theory is often concerned with ergodic transformations. The intuition behind such transformations, which act on a given set, is that they do a thorough job "stirring" the elements of that set. E.g. if the set is a quantity of hot oatmeal in a bowl, and if a spoonful of syrup is dropped into the bowl, then iterations of the inverse of an ergodic transformation of the oatmeal will not allow the syrup to remain in a local subregion of the oatmeal, but will distribute the syrup evenly throughout. At the same time, these iterations will not compress or dilate any portion of the oatmeal: they preserve the measure, which here is the density.
The formal definition is as follows:
Let T : X → X be a measure-preserving transformation on a measure space (X, Σ, μ), with μ(X) = 1. Then T is ergodic if for every E in Σ with μ(T−1(E) Δ E) = 0 (that is, E is invariant), either μ(E) = 0 or μ(E) = 1.
The operator Δ here is the symmetric difference of sets, equivalent to the exclusive-or operation with respect to set membership. The condition that the symmetric difference be measure zero is called being essentially invariant.
== Examples ==
An irrational rotation of the circle R/Z, T: x → x + θ, where θ is irrational, is ergodic. This transformation has even stronger properties of unique ergodicity, minimality, and equidistribution. By contrast, if θ = p/q is rational (in lowest terms) then T is periodic, with period q, and thus cannot be ergodic: for any interval I of length a, 0 < a < 1/q, its orbit under T (that is, the union of I, T(I), ..., T^{q−1}(I), which contains the image of I under any number of applications of T) is a T-invariant mod 0 set that is a union of q intervals of length a, hence it has measure qa strictly between 0 and 1.
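As a numerical illustration of unique ergodicity (the rotation number, interval, and sample size below are arbitrary choices, not from any source), iterating an irrational rotation in Python shows the fraction of time the orbit spends in an interval approaching the interval's length:

```python
# Time averages under the irrational rotation T(x) = x + theta mod 1
# converge to the space average; here the observable is an indicator.
import math

theta = math.sqrt(2) - 1      # an irrational rotation number
a, b = 0.2, 0.5               # target interval, measure b - a = 0.3
x, hits, n = 0.0, 0, 100_000
for _ in range(n):
    hits += a <= x < b        # 1 if the orbit point lies in [a, b)
    x = (x + theta) % 1.0
print(hits / n)               # close to 0.3
```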
Let G be a compact abelian group, μ the normalized Haar measure, and T a group automorphism of G. Let G* be the Pontryagin dual group, consisting of the continuous characters of G, and T* be the corresponding adjoint automorphism of G*. The automorphism T is ergodic if and only if the equality (T*)^n(χ) = χ is possible only when n = 0 or χ is the trivial character of G. In particular, if G is the n-dimensional torus and the automorphism T is represented by a unimodular matrix A then T is ergodic if and only if no eigenvalue of A is a root of unity.
A Bernoulli shift is ergodic. More generally, ergodicity of the shift transformation associated with a sequence of i.i.d. random variables and some more general stationary processes follows from Kolmogorov's zero–one law.
Ergodicity of a continuous dynamical system means that its trajectories "spread around" the phase space. A system with a compact phase space which has a non-constant first integral cannot be ergodic. This applies, in particular, to Hamiltonian systems with a first integral I functionally independent from the Hamilton function H and a compact level set X = {(p,q): H(p,q) = E} of constant energy. Liouville's theorem implies the existence of a finite invariant measure on X, but the dynamics of the system is constrained to the level sets of I on X, hence the system possesses invariant sets of positive but less than full measure. A property of continuous dynamical systems that is the opposite of ergodicity is complete integrability.
== Ergodic theorems ==
Let T: X → X be a measure-preserving transformation on a measure space (X, Σ, μ) and suppose ƒ is a μ-integrable function, i.e. ƒ ∈ L1(μ). Then we define the following averages:
Time average: This is defined as the average (if it exists) over iterations of T starting from some initial point x:
{\displaystyle {\hat {f}}(x)=\lim _{n\rightarrow \infty }\;{\frac {1}{n}}\sum _{k=0}^{n-1}f(T^{k}x).}
Space average: If μ(X) is finite and nonzero, we can consider the space or phase average of ƒ:
{\displaystyle {\bar {f}}={\frac {1}{\mu (X)}}\int f\,d\mu .\quad {\text{ (For a probability space, }}\mu (X)=1.)}
In general the time average and space average may be different. But if the transformation is ergodic, and the measure is invariant, then the time average is equal to the space average almost everywhere. This is the celebrated ergodic theorem, in an abstract form due to George David Birkhoff. (Actually, Birkhoff's paper considers not the abstract general case but only the case of dynamical systems arising from differential equations on a smooth manifold.) The equidistribution theorem is a special case of the ergodic theorem, dealing specifically with the distribution of probabilities on the unit interval.
More precisely, the pointwise or strong ergodic theorem states that the limit in the definition of the time average of ƒ exists for almost every x and that the (almost everywhere defined) limit function
{\displaystyle {\hat {f}}}
is integrable:
{\displaystyle {\hat {f}}\in L^{1}(\mu ).\,}
Furthermore,
{\displaystyle {\hat {f}}}
is T-invariant, that is to say
{\displaystyle {\hat {f}}\circ T={\hat {f}}\,}
holds almost everywhere, and if μ(X) is finite, then the normalization is the same:
{\displaystyle \int {\hat {f}}\,d\mu =\int f\,d\mu .}
In particular, if T is ergodic, then
{\displaystyle {\hat {f}}}
must be a constant (almost everywhere), and so one has that
{\displaystyle {\bar {f}}={\hat {f}}\,}
almost everywhere. Joining the first to the last claim and assuming that μ(X) is finite and nonzero, one has that
{\displaystyle \lim _{n\rightarrow \infty }\;{\frac {1}{n}}\sum _{k=0}^{n-1}f(T^{k}x)={\frac {1}{\mu (X)}}\int f\,d\mu }
for almost all x, i.e., for all x except for a set of measure zero.
For an ergodic transformation, the time average equals the space average almost surely.
As an example, assume that the measure space (X, Σ, μ) models the particles of a gas as above, and let ƒ(x) denote the velocity of the particle at position x. Then the pointwise ergodic theorem says that the average velocity of all particles at some given time is equal to the average velocity of one particle over time.
A generalization of Birkhoff's theorem is Kingman's subadditive ergodic theorem.
== Probabilistic formulation: Birkhoff–Khinchin theorem ==
Birkhoff–Khinchin theorem. Let ƒ be measurable, E(|ƒ|) < ∞, and T be a measure-preserving map. Then with probability 1:
{\displaystyle \lim _{n\rightarrow \infty }\;{\frac {1}{n}}\sum _{k=0}^{n-1}f(T^{k}x)=E(f\mid {\mathcal {C}})(x),}
where
{\displaystyle E(f|{\mathcal {C}})}
is the conditional expectation given the σ-algebra
{\displaystyle {\mathcal {C}}}
of invariant sets of T.
Corollary (Pointwise Ergodic Theorem): In particular, if T is also ergodic, then
{\displaystyle {\mathcal {C}}}
is the trivial σ-algebra, and thus with probability 1:
{\displaystyle \lim _{n\rightarrow \infty }\;{\frac {1}{n}}\sum _{k=0}^{n-1}f(T^{k}x)=E(f).}
== Mean ergodic theorem ==
Von Neumann's mean ergodic theorem holds in Hilbert spaces.
Let U be a unitary operator on a Hilbert space H; more generally, an isometric linear operator (that is, a not necessarily surjective linear operator satisfying ‖Ux‖ = ‖x‖ for all x in H, or equivalently, satisfying U*U = I, but not necessarily UU* = I). Let P be the orthogonal projection onto {ψ ∈ H | Uψ = ψ} = ker(I − U).
Then, for any x in H, we have:
{\displaystyle \lim _{N\to \infty }{1 \over N}\sum _{n=0}^{N-1}U^{n}x=Px,}
where the limit is with respect to the norm on H. In other words, the sequence of averages
{\displaystyle {\frac {1}{N}}\sum _{n=0}^{N-1}U^{n}}
converges to P in the strong operator topology.
Indeed, it is not difficult to see that in this case any
{\displaystyle x\in H}
admits an orthogonal decomposition into parts from
{\displaystyle \ker(I-U)}
and
{\displaystyle {\overline {\operatorname {ran} (I-U)}}}
respectively. The former part is invariant in all the partial sums as
{\displaystyle N}
grows, while for the latter part, from the telescoping series one would have:
{\displaystyle \lim _{N\to \infty }{1 \over N}\sum _{n=0}^{N-1}U^{n}(I-U)=\lim _{N\to \infty }{1 \over N}(I-U^{N})=0}
This theorem specializes to the case in which the Hilbert space H consists of L2 functions on a measure space and U is an operator of the form
{\displaystyle Uf(x)=f(Tx)\,}
where T is a measure-preserving endomorphism of X, thought of in applications as representing a time-step of a discrete dynamical system. The ergodic theorem then asserts that the average behavior of a function ƒ over sufficiently large time-scales is approximated by the orthogonal component of ƒ which is time-invariant.
In another form of the mean ergodic theorem, let Ut be a strongly continuous one-parameter group of unitary operators on H. Then the operator
{\displaystyle {\frac {1}{T}}\int _{0}^{T}U_{t}\,dt}
converges in the strong operator topology as T → ∞. In fact, this result also extends to the case of strongly continuous one-parameter semigroup of contractive operators on a reflexive space.
Remark: Some intuition for the mean ergodic theorem can be developed by considering the case where complex numbers of unit length are regarded as unitary transformations on the complex plane (by left multiplication). If we pick a single complex number of unit length (which we think of as U), it is intuitive that its powers will fill up the circle. Since the circle is symmetric around 0, it makes sense that the averages of the powers of U will converge to 0. Also, 0 is the only fixed point of U, and so the projection onto the space of fixed points must be the zero operator (which agrees with the limit just described).
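This remark can be verified directly with a toy Python computation (under the stated identification of unit complex numbers with unitary operators on the plane; the particular number and sample size are arbitrary choices):

```python
# Averages of the powers of a unit complex number u != 1 tend to 0,
# the projection onto the fixed-point space of multiplication by u.
import cmath

u = cmath.exp(1j * 0.7)   # an arbitrary unit complex number, u != 1
N = 10_000
avg = sum(u**n for n in range(N)) / N
print(abs(avg))           # small; the exact limit as N -> infinity is 0
```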
== Convergence of the ergodic means in the Lp norms ==
Let (X, Σ, μ) be as above a probability space with a measure preserving transformation T, and let 1 ≤ p ≤ ∞. The conditional expectation with respect to the sub-σ-algebra ΣT of the T-invariant sets is a linear projector ET of norm 1 of the Banach space Lp(X, Σ, μ) onto its closed subspace Lp(X, ΣT, μ). The latter may also be characterized as the space of all T-invariant Lp-functions on X. The ergodic means, as linear operators on Lp(X, Σ, μ), also have unit operator norm; and, as a simple consequence of the Birkhoff–Khinchin theorem, converge to the projector ET in the strong operator topology of Lp if 1 ≤ p < ∞, and in the weak operator topology if p = ∞. More is true: if 1 < p ≤ ∞, then the Wiener–Yoshida–Kakutani ergodic dominated convergence theorem states that the ergodic means of ƒ ∈ Lp are dominated in Lp; however, if ƒ ∈ L1, the ergodic means may fail to be equidominated in Lp. Finally, if ƒ is assumed to be in the Zygmund class, that is |ƒ| log+(|ƒ|) is integrable, then the ergodic means are even dominated in L1.
== Sojourn time ==
Let (X, Σ, μ) be a measure space such that μ(X) is finite and nonzero. The time spent in a measurable set A is called the sojourn time. An immediate consequence of the ergodic theorem is that, in an ergodic system, the relative measure of A is equal to the mean sojourn time:
{\displaystyle {\frac {\mu (A)}{\mu (X)}}={\frac {1}{\mu (X)}}\int \chi _{A}\,d\mu =\lim _{n\rightarrow \infty }\;{\frac {1}{n}}\sum _{k=0}^{n-1}\chi _{A}(T^{k}x)}
for all x except for a set of measure zero, where χA is the indicator function of A.
The occurrence times of a measurable set A are defined as the set k1, k2, k3, ..., of times k such that T^k(x) is in A, sorted in increasing order. The differences between consecutive occurrence times, Ri = ki − ki−1, are called the recurrence times of A. Another consequence of the ergodic theorem is that the average recurrence time of A is inversely proportional to the measure of A, assuming that the initial point x is in A, so that k0 = 0.
{\displaystyle {\frac {R_{1}+\cdots +R_{n}}{n}}\rightarrow {\frac {\mu (X)}{\mu (A)}}\quad {\text{(almost surely)}}}
(See almost surely.) That is, the smaller A is, the longer it takes to return to it.
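A short simulation makes the inverse proportionality concrete (a sketch reusing the irrational rotation from the examples above; the interval and sample size are arbitrary choices):

```python
# Mean recurrence time to A under an ergodic rotation is about 1/mu(A).
import math

theta = math.sqrt(2) - 1
a, b = 0.0, 0.1                  # A = [0, 0.1), so mu(A) = 0.1
x, last, times = 0.05, 0, []     # start inside A, so k0 = 0
for k in range(1, 200_000):
    x = (x + theta) % 1.0
    if a <= x < b:               # a return to A at time k
        times.append(k - last)
        last = k
print(sum(times) / len(times))   # close to 1/mu(A) = 10
```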
== Ergodic flows on manifolds ==
The ergodicity of the geodesic flow on compact Riemann surfaces of variable negative curvature and on compact manifolds of constant negative curvature of any dimension was proved by Eberhard Hopf in 1939, although special cases had been studied earlier: see for example, Hadamard's billiards (1898) and Artin billiard (1924). The relation between geodesic flows on Riemann surfaces and one-parameter subgroups on SL(2, R) was described in 1952 by S. V. Fomin and I. M. Gelfand. The article on Anosov flows provides an example of ergodic flows on SL(2, R) and on Riemann surfaces of negative curvature. Much of the development described there generalizes to hyperbolic manifolds, since they can be viewed as quotients of the hyperbolic space by the action of a lattice in the semisimple Lie group SO(n,1). Ergodicity of the geodesic flow on Riemannian symmetric spaces was demonstrated by F. I. Mautner in 1957. In 1967 D. V. Anosov and Ya. G. Sinai proved ergodicity of the geodesic flow on compact manifolds of variable negative sectional curvature. A simple criterion for the ergodicity of a homogeneous flow on a homogeneous space of a semisimple Lie group was given by Calvin C. Moore in 1966. Many of the theorems and results from this area of study are typical of rigidity theory.
In the 1930s G. A. Hedlund proved that the horocycle flow on a compact hyperbolic surface is minimal and ergodic. Unique ergodicity of the flow was established by Hillel Furstenberg in 1972. Ratner's theorems provide a major generalization of ergodicity for unipotent flows on the homogeneous spaces of the form Γ \ G, where G is a Lie group and Γ is a lattice in G.
In the last 20 years, there have been many works trying to find a measure-classification theorem similar to Ratner's theorems but for diagonalizable actions, motivated by conjectures of Furstenberg and Margulis. An important partial result (solving those conjectures with an extra assumption of positive entropy) was proved by Elon Lindenstrauss, and he was awarded the Fields medal in 2010 for this result.
== See also ==
Chaos theory
Ergodic hypothesis
Ergodic process
Kruskal principle
Lindy effect
Lyapunov time – the time limit to the predictability of the system
Maximal ergodic theorem
Ornstein isomorphism theorem
Statistical mechanics
Symbolic dynamics
== References ==
== Historical references ==
Birkhoff, George David (1931), "Proof of the ergodic theorem", Proc. Natl. Acad. Sci. USA, vol. 17, no. 12, pp. 656–660, Bibcode:1931PNAS...17..656B, doi:10.1073/pnas.17.12.656, PMC 1076138, PMID 16577406.
Birkhoff, George David (1942), "What is the ergodic theorem?", Amer. Math. Monthly, vol. 49, no. 4, pp. 222–226, doi:10.2307/2303229, JSTOR 2303229.
von Neumann, John (1932), "Proof of the Quasi-ergodic Hypothesis", Proc. Natl. Acad. Sci. USA, vol. 18, no. 1, pp. 70–82, Bibcode:1932PNAS...18...70N, doi:10.1073/pnas.18.1.70, PMC 1076162, PMID 16577432.
von Neumann, John (1932), "Physical Applications of the Ergodic Hypothesis", Proc. Natl. Acad. Sci. USA, vol. 18, no. 3, pp. 263–266, Bibcode:1932PNAS...18..263N, doi:10.1073/pnas.18.3.263, JSTOR 86260, PMC 1076204, PMID 16587674.
Hopf, Eberhard (1939), "Statistik der geodätischen Linien in Mannigfaltigkeiten negativer Krümmung", Leipzig Ber. Verhandl. Sächs. Akad. Wiss., vol. 91, pp. 261–304.
Fomin, Sergei V.; Gelfand, I. M. (1952), "Geodesic flows on manifolds of constant negative curvature", Uspekhi Mat. Nauk, vol. 7, no. 1, pp. 118–137.
Mautner, F. I. (1957), "Geodesic flows on symmetric Riemann spaces", Ann. Math., vol. 65, no. 3, pp. 416–431, doi:10.2307/1970054, JSTOR 1970054.
Moore, C. C. (1966), "Ergodicity of flows on homogeneous spaces", Amer. J. Math., vol. 88, no. 1, pp. 154–178, doi:10.2307/2373052, JSTOR 2373052.
== Modern references ==
D.V. Anosov (2001) [1994], "Ergodic theory", Encyclopedia of Mathematics, EMS Press
This article incorporates material from ergodic theorem on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Vladimir Igorevich Arnol'd and André Avez, Ergodic Problems of Classical Mechanics. New York: W.A. Benjamin. 1968.
Leo Breiman, Probability. Original edition published by Addison–Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-296-3. (See Chapter 6.)
Walters, Peter (1982), An introduction to ergodic theory, Graduate Texts in Mathematics, vol. 79, Springer-Verlag, ISBN 0-387-95152-0, Zbl 0475.28009
Bedford, Tim; Keane, Michael; Series, Caroline, eds. (1991), Ergodic theory, symbolic dynamics and hyperbolic spaces, Oxford University Press, ISBN 0-19-853390-X (A survey of topics in ergodic theory; with exercises.)
Karl Petersen. Ergodic Theory (Cambridge Studies in Advanced Mathematics). Cambridge: Cambridge University Press. 1990.
Françoise Pène, Stochastic properties of dynamical systems, Cours spécialisés de la SMF, Volume 30, 2022
Joseph M. Rosenblatt and Máté Weirdl, Pointwise ergodic theorems via harmonic analysis, (1993) appearing in Ergodic Theory and its Connections with Harmonic Analysis, Proceedings of the 1993 Alexandria Conference, (1995) Karl E. Petersen and Ibrahim A. Salama, eds., Cambridge University Press, Cambridge, ISBN 0-521-45999-0. (An extensive survey of the ergodic properties of generalizations of the equidistribution theorem of shift maps on the unit interval. Focuses on methods developed by Bourgain.)
A. N. Shiryaev, Probability, 2nd ed., Springer 1996, Sec. V.3. ISBN 0-387-94549-0.
Zund, Joseph D. (2002), "George David Birkhoff and John von Neumann: A Question of Priority and the Ergodic Theorems, 1931–1932", Historia Mathematica, 29 (2): 138–156, doi:10.1006/hmat.2001.2338 (A detailed discussion about the priority of the discovery and publication of the ergodic theorems by Birkhoff and von Neumann, based on a letter of the latter to his friend Howard Percy Robertson.)
Andrzej Lasota, Michael C. Mackey, Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics. Second Edition, Springer, 1994.
Manfred Einsiedler and Thomas Ward, Ergodic Theory with a view towards Number Theory. Springer, 2011.
Jane Hawkins, Ergodic Dynamics: From Basic Theory to Applications, Springer, 2021. ISBN 978-3-030-59242-4
== External links ==
Ergodic Theory (16 June 2015) Notes by Cosma Rohilla Shalizi
Ergodic theorem passes the test, from Physics World
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.
Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid-20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.
The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289), gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square.
Numerical analysis continues this long tradition: rather than giving exact symbolic answers, which must be translated into digits before they apply to real-world measurements, it produces approximate solutions within specified error bounds.
== Applications ==
The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically:
Advanced numerical methods are essential in making numerical weather prediction feasible.
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.
In the financial field, hedge funds (private investment funds) and other financial institutions use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.
Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research.
Insurance companies use numerical programs for actuarial analysis.
== History ==
The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The origins of modern numerical analysis are often linked to a 1947 paper by John von Neumann and Herman Goldstine, but others consider modern numerical analysis to go back to work by E. T. Whittaker in 1912.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
The Leslie Fox Prize for Numerical Analysis was initiated in 1985 by the Institute of Mathematics and its Applications.
== Key concepts ==
=== Direct and iterative methods ===
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
As an example, consider the problem of solving
3x^3 + 4 = 28
for the unknown quantity x.
For the iterative method, apply the bisection method to f(x) = 3x^3 − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57.
The first few steps of the bisection iteration are:
a       b       mid     f(mid)
0       3       1.5     −13.875
1.5     3       2.25    10.17...
1.5     2.25    1.875   −4.22...
1.875   2.25    2.0625  2.32...
From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
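A minimal Python sketch of this iteration (the function and variable names are illustrative) reproduces the bracket above:

```python
# Bisection for f(x) = 3x^3 - 24 on [0, 3]; the root is x = 2.
def bisect(f, a, b, steps):
    for _ in range(steps):
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:   # root lies in [a, mid]
            b = mid
        else:                    # root lies in [mid, b]
            a = mid
    return a, b

f = lambda x: 3 * x**3 - 24
print(bisect(f, 0.0, 3.0, 4))    # (1.875, 2.0625)
```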
=== Conditioning ===
Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
Well-conditioned problem: By contrast, evaluating the same function f(x) = 1/(x − 1) near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x).
=== Discretization ===
Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum.
== Generation and propagation of errors ==
The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem.
=== Round-off ===
Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).
=== Truncation and discretization error ===
Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above to compute the solution of {\displaystyle 3x^{3}+4=28}, after ten iterations, the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01.
Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type {\displaystyle a+b+c+d+e} is even more inexact.
A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence the approximation of the exact solution. Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen.
=== Numerical stability and well-posed problems ===
An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error.
Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem.
== Areas of study ==
The field of numerical analysis includes many sub-disciplines. Some of the major ones are:
=== Computing values of functions ===
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging in the number in the formula is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic.
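For illustration, a small Python sketch of the Horner scheme (the polynomial is an arbitrary example):

```python
# Horner's scheme evaluates a degree-n polynomial with n multiplications.
def horner(coeffs, x):
    """coeffs are ordered from the highest power down to the constant."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3: 54 - 54 + 6 - 1 = 5
print(horner([2, -6, 2, -1], 3.0))   # 5.0
```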
=== Interpolation, extrapolation, and regression ===
Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.
Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be found. The least squares-method is one way to achieve this.
=== Solving equations and systems of equations ===
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation {\displaystyle 2x+5=3} is linear while {\displaystyle 2x^{2}+5=3} is not.
Much effort has been put in the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or hermitian) and positive-definite matrix, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
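As a brief sketch (the equation x^2 − 2 = 0 is an arbitrary example, chosen because its root is known), Newton's method in Python:

```python
# Newton iteration x <- x - f(x)/f'(x), using the known derivative.
def newton(f, df, x, steps=6):
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

print(newton(lambda x: x*x - 2, lambda x: 2*x, 1.0))  # ~1.414213562, sqrt(2)
```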
=== Solving eigenvalue or singular value problems ===
Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
=== Optimization ===
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.
The field of optimization is further split in several subfields, depending on the form of the objective function and the constraint. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.
The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
=== Evaluating integrals ===
Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids.
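A sketch of one composite Newton–Cotes rule, Simpson's rule, illustrating the divide-and-conquer strategy (the integrand is an arbitrary choice):

```python
import math

def simpson(f, a, b, n=100):
    """Composite Simpson's rule on n subintervals; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

print(simpson(math.sin, 0.0, math.pi))   # the exact integral is 2
```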
=== Differential equations ===
Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
== Software ==
Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library.
Over the years the Royal Statistical Society published numerous algorithms in its Applied Statistics (code for these "AS" functions is here);
ACM similarly, in its Transactions on Mathematical Software ("TOMS" code is here).
The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines (code here).
There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to Matlab), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.
Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic which can provide more accurate results.
Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis.
Excel, for example, has hundreds of available functions, including for matrices, which may be used in conjunction with its built-in "solver".
== See also ==
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
=== Journals ===
Numerische Mathematik, volumes 1–..., Springer, 1959–
volumes 1–66, 1959–1994 (searchable; pages are images). (in English and German)
SIAM Journal on Numerical Analysis (SINUM), volumes 1–..., SIAM, 1964–
=== Online texts ===
"Numerical analysis", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Numerical Recipes, William H. Press (free, downloadable previous editions)
First Steps in Numerical Analysis (archived), R.J.Hosking, S.Joe, D.C.Joyce, and J.C.Turner
CSEP (Computational Science Education Project), U.S. Department of Energy (archived 2017-08-01)
Numerical Methods, ch 3. in the Digital Library of Mathematical Functions
Numerical Interpolation, Differentiation and Integration, ch 25. in the Handbook of Mathematical Functions (Abramowitz and Stegun)
Tobin A. Driscoll and Richard J. Braun: Fundamentals of Numerical Computation (free online version)
=== Online course material ===
Numerical Methods (Archived 28 July 2009 at the Wayback Machine), Stuart Dalziel University of Cambridge
Lectures on Numerical Analysis, Dennis Deturck and Herbert S. Wilf University of Pennsylvania
Numerical methods, John D. Fenton University of Karlsruhe
Numerical Methods for Physicists, Anthony O’Hare Oxford University
Lectures in Numerical Analysis (archived), R. Radok Mahidol University
Introduction to Numerical Analysis for Engineering, Henrik Schmidt Massachusetts Institute of Technology
Numerical Analysis for Engineering, D. W. Harder University of Waterloo
Introduction to Numerical Analysis, Doron Levy University of Maryland
Numerical Analysis - Numerical Methods (archived), John H. Mathews California State University Fullerton
In mathematics, the power series method is used to seek a power series solution to certain differential equations. In general, such a solution assumes a power series with unknown coefficients, then substitutes that solution into the differential equation to find a recurrence relation for the coefficients.
== Method ==
Consider the second-order linear differential equation
{\displaystyle a_{2}(z)f''(z)+a_{1}(z)f'(z)+a_{0}(z)f(z)=0.}
Suppose a2 is nonzero for all z. Then we can divide throughout to obtain
{\displaystyle f''+{a_{1}(z) \over a_{2}(z)}f'+{a_{0}(z) \over a_{2}(z)}f=0.}
Suppose further that a1/a2 and a0/a2 are analytic functions.
The power series method calls for the construction of a power series solution
{\displaystyle f=\sum _{k=0}^{\infty }A_{k}z^{k}.}
If a2 is zero for some z, then the Frobenius method, a variation on this method, is suited to deal with so called "singular points". The method works analogously for higher order equations as well as for systems.
== Example usage ==
Let us look at the Hermite differential equation,
{\displaystyle f''-2zf'+\lambda f=0;\;\lambda =1}
We can try to construct a series solution
{\displaystyle {\begin{aligned}f&=\sum _{k=0}^{\infty }A_{k}z^{k}\\f'&=\sum _{k=1}^{\infty }kA_{k}z^{k-1}\\f''&=\sum _{k=2}^{\infty }k(k-1)A_{k}z^{k-2}\end{aligned}}}
Substituting these in the differential equation
{\displaystyle {\begin{aligned}&\sum _{k=2}^{\infty }k(k-1)A_{k}z^{k-2}-2z\sum _{k=1}^{\infty }kA_{k}z^{k-1}+\sum _{k=0}^{\infty }A_{k}z^{k}=0\\=&\sum _{k=2}^{\infty }k(k-1)A_{k}z^{k-2}-\sum _{k=1}^{\infty }2kA_{k}z^{k}+\sum _{k=0}^{\infty }A_{k}z^{k}\end{aligned}}}
Making a shift on the first sum
{\displaystyle {\begin{aligned}&=\sum _{k=0}^{\infty }(k+2)(k+1)A_{k+2}z^{k}-\sum _{k=1}^{\infty }2kA_{k}z^{k}+\sum _{k=0}^{\infty }A_{k}z^{k}\\&=2A_{2}+\sum _{k=1}^{\infty }(k+2)(k+1)A_{k+2}z^{k}-\sum _{k=1}^{\infty }2kA_{k}z^{k}+A_{0}+\sum _{k=1}^{\infty }A_{k}z^{k}\\&=2A_{2}+A_{0}+\sum _{k=1}^{\infty }\left((k+2)(k+1)A_{k+2}+(-2k+1)A_{k}\right)z^{k}\end{aligned}}}
If this series is a solution, then all these coefficients must be zero, so for both k=0 and k>0:
{\displaystyle (k+2)(k+1)A_{k+2}+(-2k+1)A_{k}=0}
We can rearrange this to get a recurrence relation for Ak+2.
{\displaystyle (k+2)(k+1)A_{k+2}=-(-2k+1)A_{k}}
{\displaystyle A_{k+2}={(2k-1) \over (k+2)(k+1)}A_{k}}
Now, we have
{\displaystyle A_{2}={-1 \over (2)(1)}A_{0}={-1 \over 2}A_{0},\,A_{3}={1 \over (3)(2)}A_{1}={1 \over 6}A_{1}}
We can determine A0 and A1 if there are initial conditions, i.e. if we have an initial value problem.
So we have
{\displaystyle {\begin{aligned}A_{4}&={1 \over 4}A_{2}=\left({1 \over 4}\right)\left({-1 \over 2}\right)A_{0}={-1 \over 8}A_{0}\\[8pt]A_{5}&={1 \over 4}A_{3}=\left({1 \over 4}\right)\left({1 \over 6}\right)A_{1}={1 \over 24}A_{1}\\[8pt]A_{6}&={7 \over 30}A_{4}=\left({7 \over 30}\right)\left({-1 \over 8}\right)A_{0}={-7 \over 240}A_{0}\\[8pt]A_{7}&={3 \over 14}A_{5}=\left({3 \over 14}\right)\left({1 \over 24}\right)A_{1}={1 \over 112}A_{1}\end{aligned}}}
and the series solution is
{\displaystyle {\begin{aligned}f&=A_{0}z^{0}+A_{1}z^{1}+A_{2}z^{2}+A_{3}z^{3}+A_{4}z^{4}+A_{5}z^{5}+A_{6}z^{6}+A_{7}z^{7}+\cdots \\[8pt]&=A_{0}z^{0}+A_{1}z^{1}+{-1 \over 2}A_{0}z^{2}+{1 \over 6}A_{1}z^{3}+{-1 \over 8}A_{0}z^{4}+{1 \over 24}A_{1}z^{5}+{-7 \over 240}A_{0}z^{6}+{1 \over 112}A_{1}z^{7}+\cdots \\[8pt]&=A_{0}z^{0}+{-1 \over 2}A_{0}z^{2}+{-1 \over 8}A_{0}z^{4}+{-7 \over 240}A_{0}z^{6}+A_{1}z+{1 \over 6}A_{1}z^{3}+{1 \over 24}A_{1}z^{5}+{1 \over 112}A_{1}z^{7}+\cdots \end{aligned}}}
which we can break up into the sum of two linearly independent series solutions:
{\displaystyle f=A_{0}\left(1+{-1 \over 2}z^{2}+{-1 \over 8}z^{4}+{-7 \over 240}z^{6}+\cdots \right)+A_{1}\left(z+{1 \over 6}z^{3}+{1 \over 24}z^{5}+{1 \over 112}z^{7}+\cdots \right)}
which can be further simplified by the use of hypergeometric series.
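The recurrence derived above can also be evaluated mechanically. The following Python sketch (the helper name is an ad hoc choice) reproduces the coefficients of both independent solutions with exact rational arithmetic:

```python
# Build A_k from A_{k+2} = (2k - 1) A_k / ((k + 2)(k + 1)).
from fractions import Fraction

def hermite_coeffs(A0, A1, nmax):
    A = [Fraction(A0), Fraction(A1)]
    for k in range(nmax - 1):
        A.append(Fraction(2*k - 1, (k + 2) * (k + 1)) * A[k])
    return A

even = hermite_coeffs(1, 0, 8)    # A0 = 1, A1 = 0 isolates the even series
odd = hermite_coeffs(0, 1, 8)     # A0 = 0, A1 = 1 isolates the odd series
print(even[2], even[4], even[6])  # -1/2, -1/8, -7/240, as above
print(odd[3], odd[5], odd[7])     # 1/6, 1/24, 1/112, as above
```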
== A simpler way using Taylor series ==
A much simpler way of solving this equation (and of finding power series solutions in general) is to use the Taylor series form of the expansion.
Here we assume the answer is of the form
{\displaystyle f=\sum _{k=0}^{\infty }{A_{k}z^{k} \over {k!}}}
If we do this, the general rule for obtaining the recurrence relationship for the coefficients is
{\displaystyle y^{[n]}\to A_{k+n}}
and
{\displaystyle x^{m}y^{[n]}\to (k)(k-1)\cdots (k-m+1)A_{k+n-m}}
In this case we can solve the Hermite equation in fewer steps:
{\displaystyle f''-2zf'+\lambda f=0;\;\lambda =1}
becomes
{\displaystyle A_{k+2}-2kA_{k}+\lambda A_{k}=0}
or
{\displaystyle A_{k+2}=(2k-\lambda )A_{k}}
in the series
{\displaystyle f=\sum _{k=0}^{\infty }{A_{k}z^{k} \over {k!}}}
== Nonlinear equations ==
The power series method can be applied to certain nonlinear differential equations, though with less flexibility. A very large class of nonlinear equations can be solved analytically by using the Parker–Sochacki method. Since the Parker–Sochacki method involves an expansion of the original system of ordinary differential equations through auxiliary equations, it is not simply referred to as the power series method. The Parker–Sochacki method is done before the power series method to make the power series method possible on many nonlinear problems. An ODE problem can be expanded with the auxiliary variables which make the power series method trivial for an equivalent, larger system. Expanding the ODE problem with auxiliary variables produces the same coefficients (since the power series for a function is unique) at the cost of also calculating the coefficients of auxiliary equations. Many times, without using auxiliary variables, there is no known way to get the power series for the solution to a system, hence the power series method alone is difficult to apply to most nonlinear equations.
The power series method will give solutions only to initial value problems (as opposed to boundary value problems); this is not an issue when dealing with linear equations, since the solution may turn up multiple linearly independent solutions which may be combined (by superposition) to solve boundary value problems as well. A further restriction is that the series coefficients will be specified by a nonlinear recurrence (the nonlinearities are inherited from the differential equation).
In order for the solution method to work, as in linear equations, it is necessary to express every term in the nonlinear equation as a power series so that all of the terms may be combined into one power series.
As an example, consider the initial value problem
{\displaystyle FF''+2F'^{2}+\eta F'=0\quad ;\quad F(1)=0\ ,\ F'(1)=-{\frac {1}{2}}}
which describes a solution to capillary-driven flow in a groove. There are two nonlinearities: the first and second terms involve products. The initial values are given at {\displaystyle \eta =1}, which hints that the power series must be set up as:
{\displaystyle F(\eta )=\sum _{i=0}^{\infty }c_{i}(\eta -1)^{i}}
since in this way
{\displaystyle \left.{\frac {d^{n}F}{d\eta ^{n}}}\right|_{\eta =1}=n!\ c_{n}}
which makes the initial values very easy to evaluate. It is necessary to rewrite the equation slightly in light of the definition of the power series,
{\displaystyle FF''+2F'^{2}+(\eta -1)F'+F'=0\quad ;\quad F(1)=0\ ,\ F'(1)=-{\frac {1}{2}}}
so that the third term contains the same form {\displaystyle \eta -1} that appears in the power series.
The last consideration is what to do with the products; substituting the power series in would result in products of power series when it is necessary that each term be its own power series. This is where the Cauchy product
{\displaystyle \left(\sum _{i=0}^{\infty }a_{i}x^{i}\right)\left(\sum _{i=0}^{\infty }b_{i}x^{i}\right)=\sum _{i=0}^{\infty }x^{i}\sum _{j=0}^{i}a_{i-j}b_{j}}
is useful; substituting the power series into the differential equation and applying this identity leads to an equation where every term is a power series. After much rearrangement, the recurrence
{\displaystyle \sum _{j=0}^{i}\left((j+1)(j+2)c_{i-j}c_{j+2}+2(i-j+1)(j+1)c_{i-j+1}c_{j+1}\right)+ic_{i}+(i+1)c_{i+1}=0}
is obtained, specifying exact values of the series coefficients. From the initial values, {\displaystyle c_{0}=0} and {\displaystyle c_{1}=-1/2}; thereafter the above recurrence is used. For example, the next few coefficients:
{\displaystyle c_{2}=-{\frac {1}{6}}\quad ;\quad c_{3}=-{\frac {1}{108}}\quad ;\quad c_{4}={\frac {7}{3240}}\quad ;\quad c_{5}=-{\frac {19}{48600}}\ \dots }
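Because c0 = 0, every quadratic term containing the highest unknown coefficient vanishes, so each step of the recurrence is affine in that unknown and can be solved mechanically. A Python sketch (an illustration; the names are ad hoc choices) reproduces the coefficients above in exact rational arithmetic:

```python
# Evaluate the nonlinear recurrence; equation index i determines c_{i+1}
# (the c_{i+2} terms are multiplied by c_0 = 0 and drop out).
from fractions import Fraction

def residual(i, c, unknown):
    cc = c + [unknown, Fraction(0)]      # cc[i+1] = unknown, cc[i+2] = 0
    s = Fraction(0)
    for j in range(i + 1):
        s += (j + 1) * (j + 2) * cc[i - j] * cc[j + 2]
        s += 2 * (i - j + 1) * (j + 1) * cc[i - j + 1] * cc[j + 1]
    return s + i * cc[i] + (i + 1) * cc[i + 1]

c = [Fraction(0), Fraction(-1, 2)]       # c0 = 0, c1 = -1/2
for i in range(1, 5):                    # solve the affine equation for c_{i+1}
    A = residual(i, c, Fraction(0))
    B = residual(i, c, Fraction(1)) - A
    c.append(-A / B)
print(c[2:])   # [-1/6, -1/108, 7/3240, -19/48600]
```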
A limitation of the power series solution shows itself in this example. A numeric solution of the problem shows that the function is smooth and always decreasing to the left of {\displaystyle \eta =1}, and zero to the right. At {\displaystyle \eta =1}, a slope discontinuity exists, a feature which the power series is incapable of rendering; for this reason, the series solution continues decreasing to the right of {\displaystyle \eta =1} instead of suddenly becoming zero.
== External links ==
Weisstein, Eric W. "Frobenius Method". MathWorld.
== References ==
Coddington, Earl A.; Levinson, Norman (1955). Theory of Ordinary Differential Equations. New York: McGraw–Hill.
Hille, Einar (1976). Ordinary Differential Equations in the Complex Domain. Mineola: Dover Publications.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
Lozi, R.; Pogonin, V.A.; Pchelintsev, A.N. (2016). "A new accurate numerical method of approximation of chaotic solutions of dynamical model equations with quadratic nonlinearities" (PDF). Chaos, Solitons & Fractals. 91: 108–114. Bibcode:2016CSF....91..108L. doi:10.1016/j.chaos.2016.05.010.
In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with any other DE, its unknown(s) consists of one (or more) function(s) and involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations (PDEs) which may be with respect to more than one independent variable, and, less commonly, in contrast with stochastic differential equations (SDEs) where the progression is random.
== Differential equations ==
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form
{\displaystyle a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}+b(x)=0,}
where {\displaystyle a_{0}(x),\ldots ,a_{n}(x)} and {\displaystyle b(x)} are arbitrary differentiable functions that do not need to be linear, and {\displaystyle y',\ldots ,y^{(n)}} are the successive derivatives of the unknown function {\displaystyle y} of the variable {\displaystyle x}.
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations for an easier solution. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example Riccati equation).
Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
== Background ==
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.
Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates), biology (infectious diseases, genetic variation), ecology and population modeling (population competition), economics (stock trends, interest rates and the market equilibrium price changes).
Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.
A simple example is Newton's second law of motion: the relationship between the displacement {\displaystyle x} and the time {\displaystyle t} of an object under the force {\displaystyle F} is given by the differential equation
{\displaystyle m{\frac {\mathrm {d} ^{2}x(t)}{\mathrm {d} t^{2}}}=F(x(t))\,}
which constrains the motion of a particle of constant mass {\displaystyle m}. In general, {\displaystyle F} is a function of the position {\displaystyle x(t)} of the particle at time {\displaystyle t}. The unknown function {\displaystyle x(t)} appears on both sides of the differential equation, as indicated in the notation {\displaystyle F(x(t))}.
== Definitions ==
In what follows, {\displaystyle y} is a dependent variable representing an unknown function {\displaystyle y=f(x)} of the independent variable {\displaystyle x}. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, Leibniz's notation {\displaystyle {\frac {dy}{dx}},{\frac {d^{2}y}{dx^{2}}},\ldots ,{\frac {d^{n}y}{dx^{n}}}} is more useful for differentiation and integration, whereas Lagrange's notation {\displaystyle y',y'',\ldots ,y^{(n)}} is more useful for representing higher-order derivatives compactly, and Newton's notation {\displaystyle ({\dot {y}},{\ddot {y}},{\overset {...}{y}})} is often used in physics for representing derivatives of low order with respect to time.
=== General definition ===
Given {\displaystyle F}, a function of {\displaystyle x}, {\displaystyle y}, and derivatives of {\displaystyle y}, an equation of the form
{\displaystyle F\left(x,y,y',\ldots ,y^{(n-1)}\right)=y^{(n)}}
is called an explicit ordinary differential equation of order {\displaystyle n}.
More generally, an implicit ordinary differential equation of order {\displaystyle n} takes the form:
{\displaystyle F\left(x,y,y',y'',\ \ldots ,\ y^{(n)}\right)=0}
There are further classifications:
Autonomous: A differential equation is autonomous if it does not depend on the variable x.
Linear: A differential equation is linear if {\displaystyle F} can be written as a linear combination of the derivatives of {\displaystyle y}; that is, it can be rewritten as
{\displaystyle y^{(n)}=\sum _{i=0}^{n-1}a_{i}(x)y^{(i)}+r(x)}
where {\displaystyle a_{i}(x)} and {\displaystyle r(x)} are continuous functions of {\displaystyle x}. The function {\displaystyle r(x)} is called the source term, leading to further classification.
Homogeneous: A linear differential equation is homogeneous if {\displaystyle r(x)=0}. In this case, there is always the "trivial solution" {\displaystyle y=0}.
Nonhomogeneous (or inhomogeneous): A linear differential equation is nonhomogeneous if {\displaystyle r(x)\neq 0}.
Non-linear: A differential equation that is not linear.
=== System of ODEs ===
A number of coupled differential equations form a system of equations. If {\displaystyle \mathbf {y} } is a vector whose elements are functions, {\displaystyle \mathbf {y} (x)=[y_{1}(x),y_{2}(x),\ldots ,y_{m}(x)]}, and {\displaystyle \mathbf {F} } is a vector-valued function of {\displaystyle \mathbf {y} } and its derivatives, then
{\displaystyle \mathbf {y} ^{(n)}=\mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)}
is an explicit system of ordinary differential equations of order {\displaystyle n} and dimension {\displaystyle m}. In column vector form:
{\displaystyle {\begin{pmatrix}y_{1}^{(n)}\\y_{2}^{(n)}\\\vdots \\y_{m}^{(n)}\end{pmatrix}}={\begin{pmatrix}f_{1}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\\f_{2}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\\\vdots \\f_{m}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\end{pmatrix}}}
These are not necessarily linear. The implicit analogue is:
{\displaystyle \mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)}\right)={\boldsymbol {0}}}
where {\displaystyle {\boldsymbol {0}}=(0,0,\ldots ,0)} is the zero vector. In matrix form:
{\displaystyle {\begin{pmatrix}f_{1}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\\f_{2}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\\\vdots \\f_{m}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\end{pmatrix}}={\begin{pmatrix}0\\0\\\vdots \\0\end{pmatrix}}}
For a system of the form {\displaystyle \mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} '\right)={\boldsymbol {0}}}, some sources also require that the Jacobian matrix {\displaystyle {\frac {\partial \mathbf {F} (x,\mathbf {u} ,\mathbf {v} )}{\partial \mathbf {v} }}}
be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems. Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme, although note that any ODE of order greater than one can be (and usually is) rewritten as a system of ODEs of first order, which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
=== Solutions ===
Given a differential equation
{\displaystyle F\left(x,y,y',\ldots ,y^{(n)}\right)=0}
a function {\displaystyle u:I\subset \mathbb {R} \to \mathbb {R} }, where {\displaystyle I} is an interval, is called a solution or integral curve for {\displaystyle F}, if {\displaystyle u} is {\displaystyle n}-times differentiable on {\displaystyle I}, and
{\displaystyle F(x,u,u',\ \ldots ,\ u^{(n)})=0\quad x\in I.}
Given two solutions {\displaystyle u:J\subset \mathbb {R} \to \mathbb {R} } and {\displaystyle v:I\subset \mathbb {R} \to \mathbb {R} }, {\displaystyle u} is called an extension of {\displaystyle v} if {\displaystyle I\subset J} and
{\displaystyle u(x)=v(x)\quad x\in I.\,}
A solution that has no extension is called a maximal solution. A solution defined on all of {\displaystyle \mathbb {R} } is called a global solution.
A general solution of an {\displaystyle n}th-order equation is a solution containing {\displaystyle n} arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill given initial conditions or boundary conditions. A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
=== Solutions of finite duration ===
For non-linear autonomous ODEs it is possible, under some conditions, to develop solutions of finite duration, meaning here that, from its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not covered by the uniqueness theorem for solutions of Lipschitz differential equations.
As an example, the equation
{\displaystyle y'=-{\text{sgn}}(y){\sqrt {|y|}},\,\,y(0)=1}
admits the finite-duration solution
{\displaystyle y(x)={\frac {1}{4}}\left(1-{\frac {x}{2}}+\left|1-{\frac {x}{2}}\right|\right)^{2}}
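A symbolic spot check (a SymPy sketch; the sample points are arbitrary, chosen on both sides of the ending time {\displaystyle x=2}) confirms that this expression satisfies the equation:

import sympy as sp

x = sp.symbols('x', real=True)
# The closed-form finite-duration solution quoted above.
y = sp.Rational(1, 4) * (1 - x / 2 + sp.Abs(1 - x / 2)) ** 2
# Residual of y' = -sgn(y)*sqrt(|y|); it should vanish at every sample point.
residual = sp.diff(y, x) + sp.sign(y) * sp.sqrt(sp.Abs(y))
for xv in [0, 1, sp.Rational(3, 2), 3, 5]:
    print(xv, sp.simplify(residual.subs(x, xv)))   # prints 0 each time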
== Theories ==
=== Singular solutions ===
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
=== Reduction to quadratures ===
The primitive attempt in dealing with differential equations had in view a reduction to quadratures, that is, expressing the solutions in terms of known functions and their integrals. This is possible for linear equations with constant coefficients, but it appeared in the 19th century that it is generally impossible in other cases. Hence, analysts began to study, for its own sake, the class of functions that are solutions of differential equations, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by quadratures, but whether a given differential equation suffices for the definition of a function, and, if so, what are the characteristic properties of such functions.
=== Fuchsian theory ===
Two memoirs by Fuchs inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces
{\displaystyle f=0} under rational one-to-one transformations.
=== Lie's theory ===
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.
Lie's group theory of differential equations has two principal merits: (1) it unifies the many ad hoc methods known for solving differential equations, and (2) it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations, to generate integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
=== Sturm–Liouville theory ===
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering. SLPs are also useful in the analysis of certain partial differential equations.
== Existence and uniqueness of solutions ==
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs, both locally and globally. The two main theorems are the Picard–Lindelöf theorem and the Peano existence theorem.
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.
=== Local existence and uniqueness theorem simplified ===
The theorem can be stated simply as follows. For the equation and initial value problem
{\displaystyle y'=F(x,y)\,,\quad y_{0}=y(x_{0})}
if {\displaystyle F} and {\displaystyle \partial F/\partial y} are continuous in a closed rectangle
{\displaystyle R=[x_{0}-a,x_{0}+a]\times [y_{0}-b,y_{0}+b]}
in the {\displaystyle x-y} plane, where {\displaystyle a} and {\displaystyle b} are real (symbolically: {\displaystyle a,b\in \mathbb {R} }), × denotes the Cartesian product, and square brackets denote closed intervals, then there is an interval
{\displaystyle I=[x_{0}-h,x_{0}+h]\subset [x_{0}-a,x_{0}+a]}
for some {\displaystyle h\in \mathbb {R} } where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on {\displaystyle F} to be linear, this applies to non-linear equations that take the form {\displaystyle F(x,y)}, and it can also be applied to systems of equations.
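The standard proof constructs the solution by Picard iteration, which is easy to sketch symbolically. In the following SymPy sketch, the right-hand side {\displaystyle F(x,y)=x+y} with {\displaystyle x_{0}=0} and {\displaystyle y_{0}=1} is an illustrative choice, not an example from the text; the iterates converge to the exact solution {\displaystyle 2e^{x}-x-1}:

import sympy as sp

x, lam = sp.symbols('x lambda')
y = sp.Integer(1)                     # y_0 = 1, the constant initial iterate
for _ in range(5):
    # Picard iteration: y_{k+1}(x) = y0 + integral from x0 to x of F(s, y_k(s)) ds
    y = 1 + sp.integrate(lam + y.subs(x, lam), (lam, 0, x))
print(sp.expand(y))                   # a truncation of 2*exp(x) - x - 1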
=== Global uniqueness and maximum domain of solution ===
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:
For each initial condition {\displaystyle (x_{0},y_{0})} there exists a unique maximum (possibly infinite) open interval
{\displaystyle I_{\max }=(x_{-},x_{+}),x_{\pm }\in \mathbb {R} \cup \{\pm \infty \},x_{0}\in I_{\max }}
such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain {\displaystyle I_{\max }}.
In the case that {\displaystyle x_{\pm }\neq \pm \infty }, there are exactly two possibilities:
explosion in finite time: {\displaystyle \limsup _{x\to x_{\pm }}\|y(x)\|\to \infty }
leaves domain of definition: {\displaystyle \lim _{x\to x_{\pm }}y(x)\ \in \partial {\bar {\Omega }}}
where {\displaystyle \Omega } is the open set in which {\displaystyle F} is defined, and {\displaystyle \partial {\bar {\Omega }}} is its boundary.
Note that the maximum domain of the solution
is always an interval (to have uniqueness)
may be smaller than {\displaystyle \mathbb {R} }
may depend on the specific choice of {\displaystyle (x_{0},y_{0})}.
Example.
{\displaystyle y'=y^{2}}
This means that {\displaystyle F(x,y)=y^{2}}, which is {\displaystyle C^{1}} and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all of {\displaystyle \mathbb {R} }, since the solution is
{\displaystyle y(x)={\frac {y_{0}}{(x_{0}-x)y_{0}+1}}}
which has maximum domain:
{\displaystyle {\begin{cases}\mathbb {R} &y_{0}=0\\[4pt]\left(-\infty ,x_{0}+{\frac {1}{y_{0}}}\right)&y_{0}>0\\[4pt]\left(x_{0}+{\frac {1}{y_{0}}},+\infty \right)&y_{0}<0\end{cases}}}
This shows clearly that the maximum interval may depend on the initial conditions. The domain of {\displaystyle y} could be taken as being {\displaystyle \mathbb {R} \setminus (x_{0}+1/y_{0}),} but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not {\displaystyle \mathbb {R} } because
{\displaystyle \lim _{x\to x_{\pm }}\|y(x)\|\to \infty ,}
which is one of the two possible cases according to the above theorem.
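The explosion is easy to observe numerically. In the sketch below (SciPy's solve_ivp with {\displaystyle x_{0}=0} and {\displaystyle y_{0}=1}, so the solution {\displaystyle 1/(1-x)} blows up at {\displaystyle x=1}), the integrator cannot continue past the singularity:

from scipy.integrate import solve_ivp

# y' = y^2 with y(0) = 1; the exact solution 1/(1 - x) explodes at x = 1.
sol = solve_ivp(lambda x, y: y**2, (0.0, 2.0), [1.0], rtol=1e-10)
print(sol.t[-1], sol.y[0, -1])   # t stops just below 1 with y very large,
                                 # the "explosion in finite time" case above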
== Reduction of order ==
Differential equations are usually easier to solve if the order of the equation can be reduced.
=== Reduction to a first-order system ===
Any explicit differential equation of order {\displaystyle n},
{\displaystyle F\left(x,y,y',y'',\ \ldots ,\ y^{(n-1)}\right)=y^{(n)}}
can be written as a system of {\displaystyle n} first-order differential equations by defining a new family of unknown functions
{\displaystyle y_{i}=y^{(i-1)}.\!}
for {\displaystyle i=1,2,\ldots ,n}. The {\displaystyle n}-dimensional system of first-order coupled differential equations is then
{\displaystyle {\begin{array}{rcl}y_{1}'&=&y_{2}\\y_{2}'&=&y_{3}\\&\vdots &\\y_{n-1}'&=&y_{n}\\y_{n}'&=&F(x,y_{1},\ldots ,y_{n}).\end{array}}}
More compactly, in vector notation:
{\displaystyle \mathbf {y} '=\mathbf {F} (x,\mathbf {y} )}
where
{\displaystyle \mathbf {y} =(y_{1},\ldots ,y_{n}),\quad \mathbf {F} (x,y_{1},\ldots ,y_{n})=(y_{2},\ldots ,y_{n},F(x,y_{1},\ldots ,y_{n})).}
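This reduction is exactly the form consumed by numerical integrators. As a concrete sketch, take the illustrative equation {\displaystyle y''=-y} with {\displaystyle y(0)=0} and {\displaystyle y'(0)=1}, whose solution is {\displaystyle \sin x}:

import numpy as np
from scipy.integrate import solve_ivp

# y'' = -y rewritten with y1 = y and y2 = y', so (y1', y2') = (y2, -y1).
def rhs(x, Y):
    y1, y2 = Y
    return [y2, -y1]

sol = solve_ivp(rhs, (0.0, 2 * np.pi), [0.0, 1.0], rtol=1e-9, atol=1e-12)
print(sol.y[0, -1], sol.y[1, -1])   # approximately (0, 1), i.e. (sin 2pi, cos 2pi)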
== Summary of exact solutions ==
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, {\displaystyle P(x)}, {\displaystyle Q(x)}, {\displaystyle P(y)}, {\displaystyle Q(y)}, and {\displaystyle M(x,y)}, {\displaystyle N(x,y)} are any integrable functions of {\displaystyle x} and {\displaystyle y}; {\displaystyle b} and {\displaystyle c} are given real constants; and {\displaystyle C_{1},C_{2},\ldots } are arbitrary constants (complex in general). The differential equations are given in their equivalent and alternative forms that lead to the solution through integration.
In the integral solutions, {\displaystyle \lambda } and {\displaystyle \varepsilon } are dummy variables of integration (the continuum analogues of indices in summation), and the notation {\displaystyle \int ^{x}F(\lambda )\,d\lambda } just means to integrate {\displaystyle F(\lambda )} with respect to {\displaystyle \lambda }, then after the integration substitute {\displaystyle \lambda =x}, without adding constants (explicitly stated).
=== Separable equations ===
=== General first-order equations ===
=== General second-order equations ===
=== Linear to the nth order equations ===
== The guessing method ==
When all other methods for solving an ODE fail, or in the cases where we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and validating it is correct. To use this method, we simply guess a solution to the differential equation, and then plug the solution into the differential equation to validate if it satisfies the equation. If it does then we have a particular solution to the DE, otherwise we start over again and try another guess. For instance we could guess that the solution to a DE has the form:
{\displaystyle y=Ae^{\alpha t}}
since this is a very common solution form that, for complex {\displaystyle \alpha }, physically behaves in a sinusoidal way.
In the case of a first order ODE that is non-homogeneous we need to first find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is:
{\displaystyle {\text{general solution}}={\text{general solution of the associated homogeneous equation}}+{\text{particular solution}}}
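The procedure is straightforward to carry out with a computer algebra system. In the SymPy sketch below, the equation {\displaystyle y'+2y=3e^{t}} and the trial exponent {\displaystyle \alpha =1} are illustrative choices, not taken from the text:

import sympy as sp

t, A, alpha = sp.symbols('t A alpha')
y = A * sp.exp(alpha * t)                      # the guessed form above
# Plug the guess into y' + 2y = 3*exp(t) and collect the residual:
residual = sp.diff(y, t) + 2 * y - 3 * sp.exp(t)
# The forcing term suggests trying alpha = 1; solve for the amplitude A:
print(sp.solve(residual.subs(alpha, 1), A))    # [1], so y = exp(t) is a particular solution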
== Software for ODE solving ==
Maxima, an open-source computer algebra system.
COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
MATLAB, a technical computing application (MATrix LABoratory)
GNU Octave, a high-level language, primarily intended for numerical computations.
Scilab, an open source application for numerical computation.
Maple, a proprietary application for symbolic calculations.
Mathematica, a proprietary application primarily intended for symbolic calculations.
SymPy, a Python package that can solve ODEs symbolically
Julia (programming language), a high-level language primarily intended for numerical computations.
SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
SciPy, a Python package that includes an ODE integration module.
Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
GNU R, an open source computational environment primarily intended for statistics, which includes packages for ODE solving.
== See also ==
Boundary value problem
Examples of differential equations
Laplace transform applied to differential equations
List of dynamical systems and differential equations topics
Matrix differential equation
Method of undetermined coefficients
Recurrence relation
== Notes ==
== References ==
Halliday, David; Resnick, Robert (1977), Physics (3rd ed.), New York: Wiley, ISBN 0-471-71716-9
Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9
Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8.
Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1-58488-297-2
Simmons, George F. (1972), Differential Equations with Applications and Historical Notes, New York: McGraw-Hill, LCCN 75173716
Tipler, Paul A. (1991), Physics for Scientists and Engineers: Extended version (3rd ed.), New York: Worth Publishers, ISBN 0-87901-432-6
Boscain, Ugo; Chitour, Yacine (2011), Introduction à l'automatique (PDF) (in French)
Dresner, Lawrence (1999), Applications of Lie's Theory of Ordinary and Partial Differential Equations, Bristol and Philadelphia: Institute of Physics Publishing, ISBN 978-0750305303
Ascher, Uri; Petzold, Linda (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, ISBN 978-1-61197-139-2
== Bibliography ==
Coddington, Earl A.; Levinson, Norman (1955). Theory of Ordinary Differential Equations. New York: McGraw-Hill.
Hartman, Philip (2002) [1964], Ordinary differential equations, Classics in Applied Mathematics, vol. 38, Philadelphia: Society for Industrial and Applied Mathematics, doi:10.1137/1.9780898719222, ISBN 978-0-89871-510-1, MR 1929104
W. Johnson, A Treatise on Ordinary and Partial Differential Equations, John Wiley and Sons, 1913, in University of Michigan Historical Math Collection
Ince, Edward L. (1944) [1926], Ordinary Differential Equations, Dover Publications, New York, ISBN 978-0-486-60349-0, MR 0010757
Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications, ISBN 0-486-49510-8
Ibragimov, Nail H. (1993). CRC Handbook of Lie Group Analysis of Differential Equations Vol. 1-3. Providence: CRC-Press. ISBN 0-8493-4488-3.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002. ISBN 0-415-27267-X
D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
== External links ==
"Differential equation, ordinary", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
Online Notes / Differential Equations by Paul Dawkins, Lamar University.
Differential Equations, S.O.S. Mathematics.
A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.
Modeling with ODEs using Scilab A tutorial on how to model a physical system described by ODE using Scilab standard programming language by Openeering team.
Solving an ordinary differential equation in Wolfram|Alpha
In mathematical systems theory, a multidimensional system or m-D system is a system in which not only one independent variable exists (like time), but there are several independent variables.
Important problems such as factorization and stability of m-D systems (m > 1) have recently attracted the interest of many researchers and practitioners. The reason is that the factorization and stability is not a straightforward extension of the factorization and stability of 1-D systems because, for example, the fundamental theorem of algebra does not exist in the ring of m-D (m > 1) polynomials.
== Applications ==
Multidimensional systems or m-D systems are the necessary mathematical background for modern digital image processing with many applications in biomedicine, X-ray technology and satellite communications.
There are also some studies combining m-D systems with partial differential equations (PDEs).
== Linear multidimensional state-space model ==
A state-space model is a representation of a system in which the effect of all "prior" input values is contained by a state vector. In the case of an m-d system, each dimension has a state vector that contains the effect of prior inputs relative to that dimension. The collection of all such dimensional state vectors at a point constitutes the total state vector at the point.
Consider a uniform discrete space linear two-dimensional (2d) system that is space invariant and causal. It can be represented in matrix-vector form as follows:
Represent the input vector at each point {\displaystyle (i,j)} by {\displaystyle u(i,j)}, the output vector by {\displaystyle y(i,j)}, the horizontal state vector by {\displaystyle R(i,j)}, and the vertical state vector by {\displaystyle S(i,j)}. Then the operation at each point is defined by:
{\displaystyle {\begin{aligned}R(i+1,j)&=A_{1}R(i,j)+A_{2}S(i,j)+B_{1}u(i,j)\\S(i,j+1)&=A_{3}R(i,j)+A_{4}S(i,j)+B_{2}u(i,j)\\y(i,j)&=C_{1}R(i,j)+C_{2}S(i,j)+Du(i,j)\end{aligned}}}
where {\displaystyle A_{1},A_{2},A_{3},A_{4},B_{1},B_{2},C_{1},C_{2}} and {\displaystyle D} are matrices of appropriate dimensions.
These equations can be written more compactly by combining the matrices:
{\displaystyle {\begin{bmatrix}R(i+1,j)\\S(i,j+1)\\y(i,j)\end{bmatrix}}={\begin{bmatrix}A_{1}&A_{2}&B_{1}\\A_{3}&A_{4}&B_{2}\\C_{1}&C_{2}&D\end{bmatrix}}{\begin{bmatrix}R(i,j)\\S(i,j)\\u(i,j)\end{bmatrix}}}
Given input vectors {\displaystyle u(i,j)} at each point and initial state values, the value of each output vector can be computed by recursively performing the operation above.
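The recursion is direct to implement. The following Python sketch (scalar input and output, zero states on the grid boundary, with {\displaystyle A_{1}} of size r×r, {\displaystyle A_{4}} of size s×s, and the remaining matrices shaped to match; all of these are illustrative assumptions) evaluates the three equations over a finite grid:

import numpy as np

def simulate_2d(A1, A2, A3, A4, B1, B2, C1, C2, D, u):
    """Evaluate the 2-D state equations over a grid of scalar inputs u,
    assuming zero horizontal and vertical states on the boundary."""
    I, J = u.shape
    r, s = A1.shape[0], A4.shape[0]
    R = np.zeros((I + 1, J, r))    # horizontal states R(i, j)
    S = np.zeros((I, J + 1, s))    # vertical states S(i, j)
    y = np.zeros((I, J))
    for i in range(I):
        for j in range(J):
            y[i, j] = C1 @ R[i, j] + C2 @ S[i, j] + D * u[i, j]
            R[i + 1, j] = A1 @ R[i, j] + A2 @ S[i, j] + B1 * u[i, j]
            S[i, j + 1] = A3 @ R[i, j] + A4 @ S[i, j] + B2 * u[i, j]
    return y

# Illustration with one-dimensional states (r = s = 1) and an impulse input:
A1 = A4 = np.array([[0.5]])
A2 = A3 = np.array([[0.25]])
B1 = B2 = np.array([1.0])
C1 = C2 = np.array([1.0])
u = np.zeros((4, 4))
u[0, 0] = 1.0
print(simulate_2d(A1, A2, A3, A4, B1, B2, C1, C2, 1.0, u))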
== Multidimensional transfer function ==
A discrete linear two-dimensional system is often described by a partial difference equation of the form:
{\displaystyle \sum _{p,q=0,0}^{m,n}a_{p,q}y(i-p,j-q)=\sum _{p,q=0,0}^{m,n}b_{p,q}x(i-p,j-q)}
where {\displaystyle x(i,j)} is the input and {\displaystyle y(i,j)} is the output at point {\displaystyle (i,j)}, and {\displaystyle a_{p,q}} and {\displaystyle b_{p,q}} are constant coefficients.
To derive a transfer function for the system, the 2d Z-transform is applied to both sides of the equation above:
{\displaystyle \sum _{p,q=0,0}^{m,n}a_{p,q}z_{1}^{-p}z_{2}^{-q}Y(z_{1},z_{2})=\sum _{p,q=0,0}^{m,n}b_{p,q}z_{1}^{-p}z_{2}^{-q}X(z_{1},z_{2})}
Rearranging yields the transfer function {\displaystyle T(z_{1},z_{2})}:
{\displaystyle T(z_{1},z_{2})={Y(z_{1},z_{2}) \over X(z_{1},z_{2})}={\sum _{p,q=0,0}^{m,n}b_{p,q}z_{1}^{-p}z_{2}^{-q} \over \sum _{p,q=0,0}^{m,n}a_{p,q}z_{1}^{-p}z_{2}^{-q}}}
So given any pattern of input values, the 2d Z-transform of the pattern is computed and then multiplied by the transfer function {\displaystyle T(z_{1},z_{2})} to produce the Z-transform of the system output.
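In the all-zero case taken up in the next section, the denominator is 1 and this z-domain multiplication reduces to a 2-D convolution of the input with the coefficient mask in the spatial domain. A brief sketch (the mask values are arbitrary):

import numpy as np
from scipy.signal import convolve2d

# With a_{0,0} = 1 and all other a_{p,q} = 0, multiplying by T(z1, z2)
# is the spatial-domain convolution y(i,j) = sum_{p,q} b_{p,q} x(i-p, j-q).
b = np.array([[1.0, 0.5],
              [0.5, 0.25]])       # b[p, q] = b_{p,q}
x = np.random.default_rng(0).normal(size=(8, 8))
y = convolve2d(x, b)              # 'full' output, shape (9, 9)
print(y.shape)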
== Realization of a 2d transfer function ==
Often an image processing or other md computational task is described by a transfer function that has certain filtering properties, but it is desired to convert it to state-space form for more direct computation. Such conversion is referred to as realization of the transfer function.
Consider a 2d linear spatially invariant causal system having an input-output relationship described by:
{\displaystyle Y(z_{1},z_{2})={\sum _{p,q=0,0}^{m,n}b_{p,q}z_{1}^{-p}z_{2}^{-q} \over \sum _{p,q=0,0}^{m,n}a_{p,q}z_{1}^{-p}z_{2}^{-q}}X(z_{1},z_{2})}
Two cases are considered individually: 1) the bottom summation is simply the constant 1; 2) the top summation is simply a constant {\displaystyle k}. Case 1 is often called the "all-zero" or "finite impulse response" case, whereas case 2 is called the "all-pole" or "infinite impulse response" case. The general situation can be implemented as a cascade of the two individual cases. The solution for case 1 is considerably simpler than case 2 and is shown below.
=== Example: all zero or finite impulse response ===
{\displaystyle Y(z_{1},z_{2})=\sum _{p,q=0,0}^{m,n}b_{p,q}z_{1}^{-p}z_{2}^{-q}X(z_{1},z_{2})}
The state-space vectors will have the following dimensions:
{\displaystyle R(1\times m),\quad S(1\times n),\quad x(1\times 1)} and {\displaystyle y(1\times 1)}
Each term in the summation involves a negative (or zero) power of {\displaystyle z_{1}} and of {\displaystyle z_{2}}, which corresponds to a delay (or shift) along the respective dimension of the input {\displaystyle x(i,j)}. This delay can be effected by placing {\displaystyle 1}'s along the super diagonal in the {\displaystyle A_{1}} and {\displaystyle A_{4}} matrices, and the multiplying coefficients {\displaystyle b_{i,j}} in the proper positions in the {\displaystyle A_{2}}. The value {\displaystyle b_{0,0}} is placed in the upper position of the {\displaystyle B_{1}} matrix, which will multiply the input {\displaystyle x(i,j)} and add it to the first component of the {\displaystyle R_{i,j}} vector. Also, a value of {\displaystyle b_{0,0}} is placed in the {\displaystyle D} matrix, which will multiply the input {\displaystyle x(i,j)} and add it to the output {\displaystyle y}.
The matrices then appear as follows:
{\displaystyle A_{1}={\begin{bmatrix}0&0&0&\cdots &0&0\\1&0&0&\cdots &0&0\\0&1&0&\cdots &0&0\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &0&0\\0&0&0&\cdots &1&0\end{bmatrix}}}
{\displaystyle A_{2}={\begin{bmatrix}0&0&0&\cdots &0&0\\0&0&0&\cdots &0&0\\0&0&0&\cdots &0&0\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &0&0\\0&0&0&\cdots &0&0\end{bmatrix}}}
{\displaystyle A_{3}={\begin{bmatrix}b_{1,n}&b_{2,n}&b_{3,n}&\cdots &b_{m-1,n}&b_{m,n}\\b_{1,n-1}&b_{2,n-1}&b_{3,n-1}&\cdots &b_{m-1,n-1}&b_{m,n-1}\\b_{1,n-2}&b_{2,n-2}&b_{3,n-2}&\cdots &b_{m-1,n-2}&b_{m,n-2}\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\b_{1,2}&b_{2,2}&b_{3,2}&\cdots &b_{m-1,2}&b_{m,2}\\b_{1,1}&b_{2,1}&b_{3,1}&\cdots &b_{m-1,1}&b_{m,1}\end{bmatrix}}}
{\displaystyle A_{4}={\begin{bmatrix}0&0&0&\cdots &0&0\\1&0&0&\cdots &0&0\\0&1&0&\cdots &0&0\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &0&0\\0&0&0&\cdots &1&0\end{bmatrix}}}
{\displaystyle B_{1}={\begin{bmatrix}1\\0\\0\\0\\\vdots \\0\\0\end{bmatrix}}}
{\displaystyle B_{2}={\begin{bmatrix}b_{0,n}\\b_{0,n-1}\\b_{0,n-2}\\\vdots \\b_{0,2}\\b_{0,1}\end{bmatrix}}}
{\displaystyle C_{1}={\begin{bmatrix}b_{1,0}&b_{2,0}&b_{3,0}&\cdots &b_{m-1,0}&b_{m,0}\\\end{bmatrix}}}
{\displaystyle C_{2}={\begin{bmatrix}0&0&0&\cdots &0&1\\\end{bmatrix}}}
{\displaystyle D={\begin{bmatrix}b_{0,0}\end{bmatrix}}}
== References ==
In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with any other DE, its unknown(s) consists of one (or more) function(s) and involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations (PDEs) which may be with respect to more than one independent variable, and, less commonly, in contrast with stochastic differential equations (SDEs) where the progression is random.
== Differential equations ==
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form
a
0
(
x
)
y
+
a
1
(
x
)
y
′
+
a
2
(
x
)
y
″
+
⋯
+
a
n
(
x
)
y
(
n
)
+
b
(
x
)
=
0
,
{\displaystyle a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}+b(x)=0,}
where
a
0
(
x
)
,
…
,
a
n
(
x
)
{\displaystyle a_{0}(x),\ldots ,a_{n}(x)}
and
b
(
x
)
{\displaystyle b(x)}
are arbitrary differentiable functions that do not need to be linear, and
y
′
,
…
,
y
(
n
)
{\displaystyle y',\ldots ,y^{(n)}}
are the successive derivatives of the unknown function
y
{\displaystyle y}
of the variable
x
{\displaystyle x}
.
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations for an easier solution. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example Riccati equation).
Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
== Background ==
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.
Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates), biology (infectious diseases, genetic variation), ecology and population modeling (population competition), economics (stock trends, interest rates and the market equilibrium price changes).
Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.
A simple example is Newton's second law of motion—the relationship between the displacement
x
{\displaystyle x}
and the time
t
{\displaystyle t}
of an object under the force
F
{\displaystyle F}
, is given by the differential equation
m
d
2
x
(
t
)
d
t
2
=
F
(
x
(
t
)
)
{\displaystyle m{\frac {\mathrm {d} ^{2}x(t)}{\mathrm {d} t^{2}}}=F(x(t))\,}
which constrains the motion of a particle of constant mass
m
{\displaystyle m}
. In general,
F
{\displaystyle F}
is a function of the position
x
(
t
)
{\displaystyle x(t)}
of the particle at time
t
{\displaystyle t}
. The unknown function
x
(
t
)
{\displaystyle x(t)}
appears on both sides of the differential equation, and is indicated in the notation
F
(
x
(
t
)
)
{\displaystyle F(x(t))}
.
== Definitions ==
In what follows,
y
{\displaystyle y}
is a dependent variable representing an unknown function
y
=
f
(
x
)
{\displaystyle y=f(x)}
of the independent variable
x
{\displaystyle x}
. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, the Leibniz's notation
d
y
d
x
,
d
2
y
d
x
2
,
…
,
d
n
y
d
x
n
{\displaystyle {\frac {dy}{dx}},{\frac {d^{2}y}{dx^{2}}},\ldots ,{\frac {d^{n}y}{dx^{n}}}}
is more useful for differentiation and integration, whereas Lagrange's notation
y
′
,
y
″
,
…
,
y
(
n
)
{\displaystyle y',y'',\ldots ,y^{(n)}}
is more useful for representing higher-order derivatives compactly, and Newton's notation
(
y
˙
,
y
¨
,
y
.
.
.
)
{\displaystyle ({\dot {y}},{\ddot {y}},{\overset {...}{y}})}
is often used in physics for representing derivatives of low order with respect to time.
=== General definition ===
Given
F
{\displaystyle F}
, a function of
x
{\displaystyle x}
,
y
{\displaystyle y}
, and derivatives of
y
{\displaystyle y}
. Then an equation of the form
F
(
x
,
y
,
y
′
,
…
,
y
(
n
−
1
)
)
=
y
(
n
)
{\displaystyle F\left(x,y,y',\ldots ,y^{(n-1)}\right)=y^{(n)}}
is called an explicit ordinary differential equation of order
n
{\displaystyle n}
.
More generally, an implicit ordinary differential equation of order
n
{\displaystyle n}
takes the form:
F
(
x
,
y
,
y
′
,
y
″
,
…
,
y
(
n
)
)
=
0
{\displaystyle F\left(x,y,y',y'',\ \ldots ,\ y^{(n)}\right)=0}
There are further classifications:
AutonomousA differential equation is autonomous if it does not depend on the variable x.
Linear
A differential equation is linear if
F
{\displaystyle F}
can be written as a linear combination of the derivatives of
y
{\displaystyle y}
; that is, it can be rewritten as
y
(
n
)
=
∑
i
=
0
n
−
1
a
i
(
x
)
y
(
i
)
+
r
(
x
)
{\displaystyle y^{(n)}=\sum _{i=0}^{n-1}a_{i}(x)y^{(i)}+r(x)}
where
a
i
(
x
)
{\displaystyle a_{i}(x)}
and
r
(
x
)
{\displaystyle r(x)}
are continuous functions of
x
{\displaystyle x}
.
The function
r
(
x
)
{\displaystyle r(x)}
is called the source term, leading to further classification.
HomogeneousA linear differential equation is homogeneous if
r
(
x
)
=
0
{\displaystyle r(x)=0}
. In this case, there is always the "trivial solution"
y
=
0
{\displaystyle y=0}
.
Nonhomogeneous (or inhomogeneous)A linear differential equation is nonhomogeneous if
r
(
x
)
≠
0
{\displaystyle r(x)\neq 0}
.
Non-linearA differential equation that is not linear.
=== System of ODEs ===
A number of coupled differential equations form a system of equations. If
y
{\displaystyle \mathbf {y} }
is a vector whose elements are functions;
y
(
x
)
=
[
y
1
(
x
)
,
y
2
(
x
)
,
…
,
y
m
(
x
)
]
{\displaystyle \mathbf {y} (x)=[y_{1}(x),y_{2}(x),\ldots ,y_{m}(x)]}
, and
F
{\displaystyle \mathbf {F} }
is a vector-valued function of
y
{\displaystyle \mathbf {y} }
and its derivatives, then
y
(
n
)
=
F
(
x
,
y
,
y
′
,
y
″
,
…
,
y
(
n
−
1
)
)
{\displaystyle \mathbf {y} ^{(n)}=\mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)}
is an explicit system of ordinary differential equations of order
n
{\displaystyle n}
and dimension
m
{\displaystyle m}
. In column vector form:
(
y
1
(
n
)
y
2
(
n
)
⋮
y
m
(
n
)
)
=
(
f
1
(
x
,
y
,
y
′
,
y
″
,
…
,
y
(
n
−
1
)
)
f
2
(
x
,
y
,
y
′
,
y
″
,
…
,
y
(
n
−
1
)
)
⋮
f
m
(
x
,
y
,
y
′
,
y
″
,
…
,
y
(
n
−
1
)
)
)
{\displaystyle {\begin{pmatrix}y_{1}^{(n)}\\y_{2}^{(n)}\\\vdots \\y_{m}^{(n)}\end{pmatrix}}={\begin{pmatrix}f_{1}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\\f_{2}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\\\vdots \\f_{m}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\end{pmatrix}}}
These are not necessarily linear. The implicit analogue is:
F
(
x
,
y
,
y
′
,
y
″
,
…
,
y
(
n
)
)
=
0
{\displaystyle \mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)}\right)={\boldsymbol {0}}}
where
0
=
(
0
,
0
,
…
,
0
)
{\displaystyle {\boldsymbol {0}}=(0,0,\ldots ,0)}
is the zero vector. In matrix form
(
f
1
(
x
,
y
,
y
′
,
y
″
,
…
,
y
(
n
)
)
f
2
(
x
,
y
,
y
′
,
y
″
,
…
,
y
(
n
)
)
⋮
f
m
(
x
,
y
,
y
′
,
y
″
,
…
,
y
(
n
)
)
)
=
(
0
0
⋮
0
)
{\displaystyle {\begin{pmatrix}f_{1}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\\f_{2}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\\\vdots \\f_{m}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\end{pmatrix}}={\begin{pmatrix}0\\0\\\vdots \\0\end{pmatrix}}}
For a system of the form
F
(
x
,
y
,
y
′
)
=
0
{\displaystyle \mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} '\right)={\boldsymbol {0}}}
, some sources also require that the Jacobian matrix
∂
F
(
x
,
u
,
v
)
∂
v
{\displaystyle {\frac {\partial \mathbf {F} (x,\mathbf {u} ,\mathbf {v} )}{\partial \mathbf {v} }}}
be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems. Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme, although note that any ODE of order greater than one can be (and usually is) rewritten as system of ODEs of first order, which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
=== Solutions ===
Given a differential equation
F
(
x
,
y
,
y
′
,
…
,
y
(
n
)
)
=
0
{\displaystyle F\left(x,y,y',\ldots ,y^{(n)}\right)=0}
a function
u
:
I
⊂
R
→
R
{\displaystyle u:I\subset \mathbb {R} \to \mathbb {R} }
, where
I
{\displaystyle I}
is an interval, is called a solution or integral curve for
F
{\displaystyle F}
, if
u
{\displaystyle u}
is
n
{\displaystyle n}
-times differentiable on
I
{\displaystyle I}
, and
F
(
x
,
u
,
u
′
,
…
,
u
(
n
)
)
=
0
x
∈
I
.
{\displaystyle F(x,u,u',\ \ldots ,\ u^{(n)})=0\quad x\in I.}
Given two solutions
u
:
J
⊂
R
→
R
{\displaystyle u:J\subset \mathbb {R} \to \mathbb {R} }
and
v
:
I
⊂
R
→
R
{\displaystyle v:I\subset \mathbb {R} \to \mathbb {R} }
,
u
{\displaystyle u}
is called an extension of
v
{\displaystyle v}
if
I
⊂
J
{\displaystyle I\subset J}
and
u
(
x
)
=
v
(
x
)
x
∈
I
.
{\displaystyle u(x)=v(x)\quad x\in I.\,}
A solution that has no extension is called a maximal solution. A solution defined on all of
R
{\displaystyle \mathbb {R} }
is called a global solution.
A general solution of an
n
{\displaystyle n}
th-order equation is a solution containing
n
{\displaystyle n}
arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set 'initial conditions or boundary conditions'. A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
=== Solutions of finite duration ===
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that from its own dynamics, the system will reach the value zero at an ending time and stays there in zero forever after. These finite-duration solutions can't be analytical functions on the whole real line, and because they will be non-Lipschitz functions at their ending time, they are not included in the uniqueness theorem of solutions of Lipschitz differential equations.
As example, the equation:
y
′
=
−
sgn
(
y
)
|
y
|
,
y
(
0
)
=
1
{\displaystyle y'=-{\text{sgn}}(y){\sqrt {|y|}},\,\,y(0)=1}
Admits the finite duration solution:
y
(
x
)
=
1
4
(
1
−
x
2
+
|
1
−
x
2
|
)
2
{\displaystyle y(x)={\frac {1}{4}}\left(1-{\frac {x}{2}}+\left|1-{\frac {x}{2}}\right|\right)^{2}}
== Theories ==
=== Singular solutions ===
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
=== Reduction to quadratures ===
The primitive attempt in dealing with differential equations had in view a reduction to quadratures, that is, expressing the solutions in terms of known function and their integrals. This is possible for linear equations with constant coefficients, it appeared in the 19th century that this is generally impossible in other cases. Hence, analysts began the study (for their own) of functions that are solutions of differential equations, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by quadratures, but whether a given differential equation suffices for the definition of a function, and, if so, what are the characteristic properties of such functions.
=== Fuchsian theory ===
Two memoirs by Fuchs inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces
f
=
0
{\displaystyle f=0}
under rational one-to-one transformations.
=== Lie's theory ===
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.
Lie's group theory of differential equations has been certified, namely: (1) that it unifies the many ad hoc methods known for solving differential equations, and (2) that it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations for generating integrable equations, to find its Lax pairs, recursion operators, Bäcklund transform, and finally finding exact analytic solutions to DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
=== Sturm–Liouville theory ===
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering. SLPs are also useful in the analysis of certain partial differential equations.
== Existence and uniqueness of solutions ==
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.
=== Local existence and uniqueness theorem simplified ===
The theorem can be stated simply as follows. For the equation and initial value problem:
{\displaystyle y'=F(x,y)\,,\quad y_{0}=y(x_{0})}
if {\displaystyle F} and {\displaystyle \partial F/\partial y} are continuous in a closed rectangle
{\displaystyle R=[x_{0}-a,x_{0}+a]\times [y_{0}-b,y_{0}+b]}
in the {\displaystyle x}–{\displaystyle y} plane, where {\displaystyle a} and {\displaystyle b} are real (symbolically: {\displaystyle a,b\in \mathbb {R} }), {\displaystyle \times } denotes the Cartesian product, and square brackets denote closed intervals, then there is an interval
{\displaystyle I=[x_{0}-h,x_{0}+h]\subset [x_{0}-a,x_{0}+a]}
for some {\displaystyle h\in \mathbb {R} } where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on {\displaystyle F} to be linear, this applies to non-linear equations that take the form {\displaystyle F(x,y)}, and it can also be applied to systems of equations.
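The proof of this theorem proceeds by Picard iteration, and the iteration itself can be carried out symbolically. Below is a minimal sketch assuming SymPy; the helper name picard_iterates is invented for this illustration, not a library routine:

```python
import sympy as sp

x, t = sp.symbols('x t')

def picard_iterates(F, x0, y0, n):
    """Return the first n Picard iterates for y' = F(x, y), y(x0) = y0.

    Each step applies y_{k+1}(x) = y0 + integral from x0 to x of F(t, y_k(t)) dt,
    which converges to the unique local solution under the hypotheses above.
    """
    y = sp.sympify(y0)
    iterates = [y]
    for _ in range(n):
        y = y0 + sp.integrate(F(t, y.subs(x, t)), (t, x0, x))
        iterates.append(sp.expand(y))
    return iterates

# Example: y' = y with y(0) = 1; the iterates are the partial sums of exp(x).
for k, yk in enumerate(picard_iterates(lambda s, y: y, 0, 1, 4)):
    print(k, yk)
```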
=== Global uniqueness and maximum domain of solution ===
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:
For each initial condition {\displaystyle (x_{0},y_{0})} there exists a unique maximum (possibly infinite) open interval
{\displaystyle I_{\max }=(x_{-},x_{+}),x_{\pm }\in \mathbb {R} \cup \{\pm \infty \},x_{0}\in I_{\max }}
such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain {\displaystyle I_{\max }}.
In the case that {\displaystyle x_{\pm }\neq \pm \infty }, there are exactly two possibilities:
explosion in finite time: {\displaystyle \limsup _{x\to x_{\pm }}\|y(x)\|\to \infty }
leaves domain of definition: {\displaystyle \lim _{x\to x_{\pm }}y(x)\ \in \partial {\bar {\Omega }}}
where {\displaystyle \Omega } is the open set in which {\displaystyle F} is defined, and {\displaystyle \partial {\bar {\Omega }}} is its boundary.
Note that the maximum domain of the solution
is always an interval (to have uniqueness),
may be smaller than {\displaystyle \mathbb {R} }, and
may depend on the specific choice of {\displaystyle (x_{0},y_{0})}.
Example: {\displaystyle y'=y^{2}}
This means that {\displaystyle F(x,y)=y^{2}}, which is {\displaystyle C^{1}} and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all {\displaystyle \mathbb {R} }, since the solution is
{\displaystyle y(x)={\frac {y_{0}}{(x_{0}-x)y_{0}+1}}}
which has maximum domain:
{\displaystyle {\begin{cases}\mathbb {R} &y_{0}=0\\[4pt]\left(-\infty ,x_{0}+{\frac {1}{y_{0}}}\right)&y_{0}>0\\[4pt]\left(x_{0}+{\frac {1}{y_{0}}},+\infty \right)&y_{0}<0\end{cases}}}
This shows clearly that the maximum interval may depend on the initial conditions. The domain of {\displaystyle y} could be taken as being {\displaystyle \mathbb {R} \setminus (x_{0}+1/y_{0}),} but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not {\displaystyle \mathbb {R} } because
{\displaystyle \lim _{x\to x_{\pm }}\|y(x)\|\to \infty ,}
which is one of the two possible cases according to the above theorem.
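The blow-up can also be observed numerically. Here is a minimal sketch assuming SciPy, for the initial condition x₀ = 0, y₀ = 1, whose solution y(x) = 1/(1 − x) explodes at x = 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = y**2 with y(0) = 1 blows up at x = x0 + 1/y0 = 1.
sol = solve_ivp(lambda x, y: y**2, (0.0, 0.999), [1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

for x in (0.5, 0.9, 0.99, 0.999):
    exact = 1.0 / (1.0 - x)   # closed form y0 / ((x0 - x) * y0 + 1)
    print(f"x={x:6.3f}  numeric={sol.sol(x)[0]:12.3f}  exact={exact:12.3f}")
```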
== Reduction of order ==
Differential equations are usually easier to solve if the order of the equation can be reduced.
=== Reduction to a first-order system ===
Any explicit differential equation of order {\displaystyle n},
{\displaystyle F\left(x,y,y',y'',\ \ldots ,\ y^{(n-1)}\right)=y^{(n)}}
can be written as a system of {\displaystyle n} first-order differential equations by defining a new family of unknown functions
{\displaystyle y_{i}=y^{(i-1)}.\!}
for {\displaystyle i=1,2,\ldots ,n}. The {\displaystyle n}-dimensional system of first-order coupled differential equations is then
{\displaystyle {\begin{array}{rcl}y_{1}'&=&y_{2}\\y_{2}'&=&y_{3}\\&\vdots &\\y_{n-1}'&=&y_{n}\\y_{n}'&=&F(x,y_{1},\ldots ,y_{n}).\end{array}}}
or, more compactly, in vector notation:
{\displaystyle \mathbf {y} '=\mathbf {F} (x,\mathbf {y} )}
where
{\displaystyle \mathbf {y} =(y_{1},\ldots ,y_{n}),\quad \mathbf {F} (x,y_{1},\ldots ,y_{n})=(y_{2},\ldots ,y_{n},F(x,y_{1},\ldots ,y_{n})).}
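As a concrete sketch of this reduction (assuming SciPy; the specific equation y'' = -y is an arbitrary choice for illustration), the second-order equation is rewritten as the system y₁' = y₂, y₂' = -y₁ and handed to a standard first-order solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduce y'' = -y (order n = 2) to a first-order system:
# y1 = y, y2 = y', so that y1' = y2 and y2' = F(x, y1, y2) = -y1.
def rhs(x, Y):
    y1, y2 = Y
    return [y2, -y1]

xs = np.linspace(0.0, 2.0 * np.pi, 9)
sol = solve_ivp(rhs, (xs[0], xs[-1]), [0.0, 1.0], t_eval=xs, rtol=1e-9)

# With y(0) = 0 and y'(0) = 1 the exact solution is y(x) = sin(x).
print(np.max(np.abs(sol.y[0] - np.sin(xs))))   # maximum error is small
```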
== Summary of exact solutions ==
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, {\displaystyle P(x)}, {\displaystyle Q(x)}, {\displaystyle P(y)}, {\displaystyle Q(y)}, and {\displaystyle M(x,y)}, {\displaystyle N(x,y)} are any integrable functions of {\displaystyle x}, {\displaystyle y}; {\displaystyle b} and {\displaystyle c} are real given constants; and {\displaystyle C_{1},C_{2},\ldots } are arbitrary constants (complex in general). The differential equations are in their equivalent and alternative forms that lead to the solution through integration.
In the integral solutions, {\displaystyle \lambda } and {\displaystyle \varepsilon } are dummy variables of integration (the continuum analogues of indices in summation), and the notation {\displaystyle \int ^{x}F(\lambda )\,d\lambda } just means to integrate {\displaystyle F(\lambda )} with respect to {\displaystyle \lambda }, then after the integration substitute {\displaystyle \lambda =x}, without adding constants (explicitly stated).
=== Separable equations ===
=== General first-order equations ===
=== General second-order equations ===
=== Linear to the nth order equations ===
== The guessing method ==
When all other methods for solving an ODE fail, or in the cases where we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and validating that it is correct. To use this method, we simply guess a solution to the differential equation, and then plug the solution into the differential equation to check whether it satisfies the equation. If it does, then we have a particular solution to the DE; otherwise, we start over again and try another guess. For instance, we could guess that the solution to a DE has the form:
{\displaystyle y=Ae^{\alpha t}}
since this is a very common solution form (for imaginary {\displaystyle \alpha }, it physically behaves in a sinusoidal way).
In the case of a first order ODE that is non-homogeneous we need to first find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is:
{\displaystyle {\text{general solution}}={\text{general solution of the associated homogeneous equation}}+{\text{particular solution}}}
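A minimal symbolic sketch of this procedure, assuming SymPy and the arbitrarily chosen example y' + y = e^{2t} (the guessed form Ae^{2t} is an illustrative choice, not a general recipe):

```python
import sympy as sp
from sympy.solvers.ode import checkodesol

t, A, C1 = sp.symbols('t A C1')
y = sp.Function('y')

# Non-homogeneous ODE: y' + y = exp(2t).
ode = sp.Eq(y(t).diff(t) + y(t), sp.exp(2 * t))

# Guess a particular solution of the form A*exp(2t) and plug it in.
guess = A * sp.exp(2 * t)
residual = sp.simplify(guess.diff(t) + guess - sp.exp(2 * t))
A_val = sp.solve(residual, A)[0]                  # gives A = 1/3

# general solution = homogeneous solution + particular solution
general = C1 * sp.exp(-t) + guess.subs(A, A_val)
print(general)                                    # C1*exp(-t) + exp(2*t)/3
print(checkodesol(ode, sp.Eq(y(t), general)))     # (True, 0) confirms it
```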
== Software for ODE solving ==
Maxima, an open-source computer algebra system.
COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
MATLAB, a technical computing application (MATrix LABoratory)
GNU Octave, a high-level language, primarily intended for numerical computations.
Scilab, an open source application for numerical computation.
Maple, a proprietary application for symbolic calculations.
Mathematica, a proprietary application primarily intended for symbolic calculations.
SymPy, a Python package that can solve ODEs symbolically
Julia (programming language), a high-level language primarily intended for numerical computations.
SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
SciPy, a Python package that includes an ODE integration module.
Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
GNU R, an open source computational environment primarily intended for statistics, which includes packages for ODE solving.
== See also ==
Boundary value problem
Examples of differential equations
Laplace transform applied to differential equations
List of dynamical systems and differential equations topics
Matrix differential equation
Method of undetermined coefficients
Recurrence relation
== Notes ==
== References ==
Halliday, David; Resnick, Robert (1977), Physics (3rd ed.), New York: Wiley, ISBN 0-471-71716-9
Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9
Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8.
Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1-58488-297-2
Simmons, George F. (1972), Differential Equations with Applications and Historical Notes, New York: McGraw-Hill, LCCN 75173716
Tipler, Paul A. (1991), Physics for Scientists and Engineers: Extended version (3rd ed.), New York: Worth Publishers, ISBN 0-87901-432-6
Boscain, Ugo; Chitour, Yacine (2011), Introduction à l'automatique (PDF) (in French)
Dresner, Lawrence (1999), Applications of Lie's Theory of Ordinary and Partial Differential Equations, Bristol and Philadelphia: Institute of Physics Publishing, ISBN 978-0750305303
Ascher, Uri; Petzold, Linda (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, ISBN 978-1-61197-139-2
== Bibliography ==
Coddington, Earl A.; Levinson, Norman (1955). Theory of Ordinary Differential Equations. New York: McGraw-Hill.
Hartman, Philip (2002) [1964], Ordinary differential equations, Classics in Applied Mathematics, vol. 38, Philadelphia: Society for Industrial and Applied Mathematics, doi:10.1137/1.9780898719222, ISBN 978-0-89871-510-1, MR 1929104
W. Johnson, A Treatise on Ordinary and Partial Differential Equations, John Wiley and Sons, 1913, in University of Michigan Historical Math Collection
Ince, Edward L. (1944) [1926], Ordinary Differential Equations, Dover Publications, New York, ISBN 978-0-486-60349-0, MR 0010757
Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications, ISBN 0-486-49510-8
Ibragimov, Nail H. (1993). CRC Handbook of Lie Group Analysis of Differential Equations Vol. 1-3. Providence: CRC-Press. ISBN 0-8493-4488-3.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002. ISBN 0-415-27267-X
D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
== External links ==
"Differential equation, ordinary", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
Online Notes / Differential Equations by Paul Dawkins, Lamar University.
Differential Equations, S.O.S. Mathematics.
A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.
Modeling with ODEs using Scilab A tutorial on how to model a physical system described by ODE using Scilab standard programming language by Openeering team.
Solving an ordinary differential equation in Wolfram|Alpha
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space.
== History ==
The study of pseudo-differential operators began in the mid 1960s with the work of Kohn, Nirenberg, Hörmander, Unterberger and Bokobza.
They played an influential role in the second proof of the Atiyah–Singer index theorem via K-theory. Atiyah and Singer thanked Hörmander for assistance with understanding the theory of pseudo-differential operators.
== Motivation ==
=== Linear differential operators with constant coefficients ===
Consider a linear differential operator with constant coefficients,
{\displaystyle P(D):=\sum _{\alpha }a_{\alpha }\,D^{\alpha }}
which acts on smooth functions {\displaystyle u} with compact support in {\displaystyle \mathbb {R} ^{n}}.
This operator can be written as a composition of a Fourier transform, a simple multiplication by the polynomial function (called the symbol)
{\displaystyle P(\xi )=\sum _{\alpha }a_{\alpha }\,\xi ^{\alpha },}
and an inverse Fourier transform, in the form:
{\displaystyle P(D)u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }P(\xi ){\hat {u}}(\xi )\,d\xi \qquad (1)}
Here, {\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})} is a multi-index, {\displaystyle a_{\alpha }} are complex numbers, and {\displaystyle D^{\alpha }=(-i\partial _{1})^{\alpha _{1}}\cdots (-i\partial _{n})^{\alpha _{n}}} is an iterated partial derivative, where {\displaystyle \partial _{j}} means differentiation with respect to the j-th variable. We introduce the constants {\displaystyle -i} to facilitate the calculation of Fourier transforms.
Derivation of formula (1)
The Fourier transform of a smooth function u, compactly supported in {\displaystyle \mathbb {R} ^{n}}, is
{\displaystyle {\hat {u}}(\xi ):=\int e^{-iy\xi }u(y)\,dy}
and Fourier's inversion formula gives
{\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }{\hat {u}}(\xi )d\xi ={\frac {1}{(2\pi )^{n}}}\iint e^{i(x-y)\xi }u(y)\,dy\,d\xi }
By applying P(D) to this representation of u and using
{\displaystyle P(D_{x})\,e^{i(x-y)\xi }=e^{i(x-y)\xi }\,P(\xi )}
one obtains formula (1).
=== Representation of solutions to partial differential equations ===
To solve the partial differential equation
{\displaystyle P(D)\,u=f}
we (formally) apply the Fourier transform on both sides and obtain the algebraic equation
{\displaystyle P(\xi )\,{\hat {u}}(\xi )={\hat {f}}(\xi ).}
If the symbol P(ξ) is never zero when ξ ∈ {\displaystyle \mathbb {R} ^{n}}, then it is possible to divide by P(ξ):
{\displaystyle {\hat {u}}(\xi )={\frac {1}{P(\xi )}}{\hat {f}}(\xi )}
By Fourier's inversion formula, a solution is
{\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }{\frac {1}{P(\xi )}}{\hat {f}}(\xi )\,d\xi .}
Here it is assumed that:
P(D) is a linear differential operator with constant coefficients,
its symbol P(ξ) is never zero,
both u and ƒ have a well defined Fourier transform.
The last assumption can be weakened by using the theory of distributions.
The first two assumptions can be weakened as follows.
In the last formula, write out the Fourier transform of ƒ to obtain
{\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\iint e^{i(x-y)\xi }{\frac {1}{P(\xi )}}f(y)\,dy\,d\xi .}
This is similar to formula (1), except that 1/P(ξ) is not a polynomial function, but a function of a more general kind.
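The Fourier-side division can be made concrete on a periodic grid. The following is a minimal numerical sketch (assuming NumPy; the equation u − u'' = f, whose symbol P(ξ) = 1 + ξ² never vanishes, is an arbitrary choice), in which the inverse operator acts as a Fourier multiplier with the non-polynomial symbol 1/(1 + ξ²):

```python
import numpy as np

# Solve (1 - d^2/dx^2) u = f on a 2*pi-periodic grid by dividing by the
# symbol P(xi) = 1 + xi**2 in Fourier space (P never vanishes, so this is safe).
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
# Angular (integer) frequencies matching this grid spacing.
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * np.pi / N)

f = np.cos(3.0 * x)                      # right-hand side
u = np.real(np.fft.ifft(np.fft.fft(f) / (1.0 + xi**2)))

# Exact solution: (1 + 3**2) * u = cos(3x), i.e. u = cos(3x) / 10.
print(np.max(np.abs(u - np.cos(3.0 * x) / 10.0)))   # near machine precision
```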
== Definition of pseudo-differential operators ==
Here we view pseudo-differential operators as a generalization of differential operators.
We extend formula (1) as follows. A pseudo-differential operator P(x,D) on {\displaystyle \mathbb {R} ^{n}} is an operator whose value on the function u(x) is the function of x:
{\displaystyle P(x,D)u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }P(x,\xi ){\hat {u}}(\xi )\,d\xi }
where {\displaystyle {\hat {u}}(\xi )} is the Fourier transform of u and the symbol P(x,ξ) in the integrand belongs to a certain symbol class.
For instance, if P(x,ξ) is an infinitely differentiable function on {\displaystyle \mathbb {R} ^{n}\times \mathbb {R} ^{n}} with the property
{\displaystyle |\partial _{\xi }^{\alpha }\partial _{x}^{\beta }P(x,\xi )|\leq C_{\alpha ,\beta }\,(1+|\xi |)^{m-|\alpha |}}
for all x, ξ ∈ {\displaystyle \mathbb {R} ^{n}}, all multi-indices α, β, some constants Cα,β and some real number m, then P belongs to the symbol class
{\displaystyle S_{1,0}^{m}} of Hörmander. The corresponding operator P(x,D) is called a pseudo-differential operator of order m and belongs to the class {\displaystyle \Psi _{1,0}^{m}.}
== Properties ==
Linear differential operators of order m with smooth bounded coefficients are pseudo-differential operators of order m.
The composition PQ of two pseudo-differential operators P, Q is again a pseudo-differential operator and the symbol of PQ can be calculated by using the symbols of P and Q. The adjoint and transpose of a pseudo-differential operator is a pseudo-differential operator.
If a differential operator of order m is (uniformly) elliptic (of order m) and invertible, then its inverse is a pseudo-differential operator of order −m, and its symbol can be calculated. This means that one can solve linear elliptic differential equations more or less explicitly by using the theory of pseudo-differential operators.
Differential operators are local in the sense that one only needs the value of a function in a neighbourhood of a point to determine the effect of the operator. Pseudo-differential operators are pseudo-local, which means informally that when applied to a distribution they do not create a singularity at points where the distribution was already smooth.
Just as a differential operator can be expressed in terms of D = −id/dx in the form
{\displaystyle p(x,D)\,}
for a polynomial p in D (which is called the symbol), a pseudo-differential operator has a symbol in a more general class of functions. Often one can reduce a problem in analysis of pseudo-differential operators to a sequence of algebraic problems involving their symbols, and this is the essence of microlocal analysis.
== Kernel of pseudo-differential operator ==
Pseudo-differential operators can be represented by kernels. The singularity of the kernel on the diagonal depends on the degree of the corresponding operator. In fact, if the symbol satisfies the above differential inequalities with m ≤ 0, it can be shown that the kernel is a singular integral kernel.
== See also ==
Differential algebra for a definition of pseudo-differential operators in the context of differential algebras and differential rings.
Fourier transform
Fourier integral operator
Oscillatory integral operator
Sato's fundamental theorem
Operational calculus
== Footnotes ==
== References ==
Stein, Elias (1993), Harmonic Analysis: Real-Variable Methods, Orthogonality and Oscillatory Integrals, Princeton University Press.
Atiyah, Michael F.; Singer, Isadore M. (1968), "The Index of Elliptic Operators I", Annals of Mathematics, 87 (3): 484–530, doi:10.2307/1970715, JSTOR 1970715
== Further reading ==
Nicolas Lerner, Metrics on the phase space and non-selfadjoint pseudo-differential operators. Pseudo-Differential Operators. Theory and Applications, 3. Birkhäuser Verlag, Basel, 2010.
Michael E. Taylor, Pseudodifferential Operators, Princeton Univ. Press 1981. ISBN 0-691-08282-0
M. A. Shubin, Pseudodifferential Operators and Spectral Theory, Springer-Verlag 2001. ISBN 3-540-41195-X
Francois Treves, Introduction to Pseudo Differential and Fourier Integral Operators, (University Series in Mathematics), Plenum Publ. Co. 1981. ISBN 0-306-40404-4
F. G. Friedlander and M. Joshi, Introduction to the Theory of Distributions, Cambridge University Press 1999. ISBN 0-521-64971-4
Hörmander, Lars (1987). The Analysis of Linear Partial Differential Operators III: Pseudo-Differential Operators. Springer. ISBN 3-540-49937-7.
André Unterberger, Pseudo-differential operators and applications: an introduction. Lecture Notes Series, 46. Aarhus Universitet, Matematisk Institut, Aarhus, 1976.
== External links ==
Lectures on Pseudo-differential Operators by Mark S. Joshi on arxiv.org.
"Pseudo-differential operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Pseudo-differential_operators |
In physics, statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics, its applications include many problems in a wide variety of fields such as biology, neuroscience, computer science, information theory and sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.
Statistical mechanics arose out of the development of classical thermodynamics, a field for which it was successful in explaining macroscopic physical properties—such as temperature, pressure, and heat capacity—in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions.
While classical thermodynamics is primarily concerned with thermodynamic equilibrium, statistical mechanics has been applied in non-equilibrium statistical mechanics to the issue of microscopically modeling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is a central result obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles.
== History ==
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion.
The founding of the field of statistical mechanics is generally credited to three physicists:
Ludwig Boltzmann, who developed the fundamental interpretation of entropy in terms of a collection of microstates
James Clerk Maxwell, who developed models of probability distribution of such states
Josiah Willard Gibbs, who coined the name of the field in 1884
In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further.
Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory. Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H-theorem.
The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871:
"In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus."
"Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. Gibbs' methods were initially derived in the framework classical mechanics, however they were of such generality that they were found to adapt easily to the later quantum mechanics, and still form the foundation of statistical mechanics to this day.
== Principles: mechanics and ensembles ==
In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts:
The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the Schrödinger equation (quantum mechanics)
Using these two concepts, the state at any other time, past or future, can in principle be calculated.
There is, however, a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics bridges this disconnect between the laws of mechanics and the practical experience of incomplete knowledge by adding some uncertainty about which state the system is in.
Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix.
As is usual for probabilities, the ensemble can be interpreted in different ways:
an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or
the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.
These two meanings are equivalent for many purposes, and will be used interchangeably in this article.
However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.
One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
== Statistical thermodynamics ==
The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.
Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.
=== Fundamental postulate ===
A sufficient (but not necessary) condition for statistical equilibrium of an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).
There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.
A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that
For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.
The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:
Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).
Other fundamental postulates for statistical mechanics have also been proposed. For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. One such formalism is based on the fundamental thermodynamic relation together with the following set of postulates:
where the third postulate can be replaced by the following:
=== Three thermodynamic ensembles ===
There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.
Microcanonical ensemble
describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
Canonical ensemble
describes a system of fixed composition that is in thermal equilibrium with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
Grand canonical ensemble
describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.
For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used. The Gibbs theorem about equivalence of ensembles was developed into the theory of concentration of measure phenomenon, which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology.
Important cases where the thermodynamic ensembles do not give identical results include:
Microscopic systems.
Large systems at a phase transition.
Large systems with long-range interactions.
In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.
=== Calculation methods ===
Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.
==== Exact ====
There are some cases which allow exact solutions.
For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or integration over all phase space in classical mechanics); a brute-force sketch of such an enumeration appears after this list.
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.
A few large systems with interaction have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models. Some examples include the Bethe ansatz, square-lattice Ising model in zero field, hard hexagon model.
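As promised above, here is a minimal enumeration sketch (assuming NumPy; the periodic Ising chain with N = 8 spins and the values of J and β are arbitrary illustrative choices) that computes canonical-ensemble averages exactly:

```python
import numpy as np
from itertools import product

# Canonical-ensemble averages for a tiny periodic Ising chain by direct
# enumeration of all 2**N microstates (feasible only for very small N).
N, J, beta = 8, 1.0, 0.7

def energy(s):
    # Nearest-neighbour coupling with periodic boundary conditions.
    return -J * sum(s[i] * s[(i + 1) % N] for i in range(N))

states = list(product([-1, 1], repeat=N))
E = np.array([energy(s) for s in states], dtype=float)
w = np.exp(-beta * E)              # Boltzmann factors
Z = w.sum()                        # canonical partition function
print("Z =", Z, "  <E> =", (E * w).sum() / Z)
```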
==== Monte Carlo ====
Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system. Monte Carlo methods are important in computational physics, physical chemistry, and related fields, and have diverse applications including medical physics, where they are used to model radiation transport for radiation dosimetry calculations.
The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.
The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble; a minimal sketch follows this list.
Path integral Monte Carlo, also used to sample the canonical ensemble.
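A minimal sketch of Metropolis sampling for the same toy chain (assuming NumPy; the parameters repeat the enumeration example above, so the two estimates of the average energy can be compared):

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, beta, steps = 8, 1.0, 0.7, 100_000

# Metropolis rule: propose a single-spin flip and accept it with
# probability min(1, exp(-beta * dE)); the chain then visits microstates
# with their canonical (Boltzmann) weights.
s = rng.choice([-1, 1], size=N)
E_sum = 0.0
for _ in range(steps):
    i = rng.integers(N)
    dE = 2.0 * J * s[i] * (s[i - 1] + s[(i + 1) % N])  # periodic neighbours
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]
    E_sum += -J * np.sum(s * np.roll(s, 1))            # energy of current state

print("<E> ~", E_sum / steps)   # should approach the enumerated value above
```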
==== Other ====
For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.
For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.
Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.
Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.
== Non-equilibrium statistical mechanics ==
Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:
heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.
All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)
In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics.
Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.
=== Stochastic methods ===
One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.
=== Near-equilibrium methods ===
Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation–dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.
This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics.
A few of the theoretical tools used to make this connection include:
Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
GENERIC formalism
=== Hybrid methods ===
An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to compute quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method.
== Applications ==
The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:
propagation of uncertainty over time,
regression analysis of gravitational orbits,
ensemble forecasting of weather,
dynamics of neural networks,
bounded-rational potential games in game theory and non-equilibrium economics.
Statistical physics explains and quantitatively describes superconductivity, superfluidity, turbulence, collective phenomena in solids and plasma, and the structural features of liquids. It underlies modern astrophysics and the virial theorem. In solid state physics, statistical physics aids the study of liquid crystals, phase transitions, and critical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons, X-rays, visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases).
Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics.
=== Quantum statistical mechanics ===
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.
== Index of statistical mechanics topics ==
=== Physics ===
Probability amplitude
Statistical physics
Boltzmann factor
Feynman–Kac formula
Fluctuation theorem
Information entropy
Vacuum expectation value
Cosmic variance
Negative probability
Gibbs state
Master equation
Partition function (mathematics)
Quantum probability
=== Percolation theory ===
Percolation theory
Schramm–Loewner evolution
== See also ==
List of textbooks in thermodynamics and statistical mechanics
Laplace transform § Statistical mechanics
== References ==
== Further reading ==
Reif, F. (2009). Fundamentals of Statistical and Thermal Physics. Waveland Press. ISBN 978-1-4786-1005-2.
Müller-Kirsten, Harald J W. (2013). Basics of Statistical Physics (PDF). doi:10.1142/8709. ISBN 978-981-4449-53-3.
Kadanoff, Leo P. "Statistical Physics and other resources". Archived from the original on August 12, 2021. Retrieved June 18, 2023.
Kadanoff, Leo P. (2000). Statistical Physics: Statics, Dynamics and Renormalization. World Scientific. ISBN 978-981-02-3764-6.
Flamm, Dieter (1998). "History and outlook of statistical physics". arXiv:physics/9803005.
== External links ==
Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter.
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
Cohen, Doron (2011). "Lecture Notes in Statistical Mechanics and Mesoscopics". arXiv:1107.0568 [quant-ph].
Videos of lecture series in statistical mechanics on YouTube taught by Leonard Susskind.
Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. This wiki site is down; see this article in the web archive on 2012 April 28.
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator
{\displaystyle D},
{\displaystyle Df(x)={\frac {d}{dx}}f(x)\,,}
and of the integration operator {\displaystyle J},
{\displaystyle Jf(x)=\int _{0}^{x}f(s)\,ds\,,}
and developing a calculus for such operators generalizing the classical one.
In this context, the term powers refers to iterative application of a linear operator {\displaystyle D} to a function {\displaystyle f}, that is, repeatedly composing {\displaystyle D} with itself, as in
{\displaystyle {\begin{aligned}D^{n}(f)&=(\underbrace {D\circ D\circ D\circ \cdots \circ D} _{n})(f)\\&=\underbrace {D(D(D(\cdots D} _{n}(f)\cdots ))).\end{aligned}}}
For example, one may ask for a meaningful interpretation of
{\displaystyle {\sqrt {D}}=D^{\scriptstyle {\frac {1}{2}}}}
as an analogue of the functional square root for the differentiation operator, that is, an expression for some linear operator that, when applied twice to any function, will have the same effect as differentiation. More generally, one can look at the question of defining a linear operator {\displaystyle D^{a}} for every real number {\displaystyle a} in such a way that, when {\displaystyle a} takes an integer value {\displaystyle n\in \mathbb {Z} }, it coincides with the usual {\displaystyle n}-fold differentiation {\displaystyle D} if {\displaystyle n>0}, and with the {\displaystyle (-n)}-th power of {\displaystyle J} when {\displaystyle n<0}.
One of the motivations behind the introduction and study of these sorts of extensions of the differentiation operator {\displaystyle D} is that the sets of operator powers {\displaystyle \{D^{a}\mid a\in \mathbb {R} \}} defined in this way are continuous semigroups with parameter {\displaystyle a}, of which the original discrete semigroup of {\displaystyle \{D^{n}\mid n\in \mathbb {Z} \}} for integer {\displaystyle n} is a denumerable subgroup: since continuous semigroups have a well developed mathematical theory, they can be applied to other branches of mathematics.
Fractional differential equations, also known as extraordinary differential equations, are a generalization of differential equations through the application of fractional calculus.
== Historical notes ==
In applied mathematics and mathematical analysis, a fractional derivative is a derivative of any arbitrary order, real or complex. Its first appearance is in a letter written to Guillaume de l'Hôpital by Gottfried Wilhelm Leibniz in 1695. Around the same time, Leibniz wrote to Johann Bernoulli about derivatives of "general order". In the correspondence between Leibniz and John Wallis in 1697, Wallis's infinite product for
{\displaystyle \pi /2} is discussed. Leibniz suggested using differential calculus to achieve this result. Leibniz further used the notation {\displaystyle {d}^{1/2}{y}} to denote the derivative of order 1/2.
Fractional calculus was introduced in one of Niels Henrik Abel's early papers where all the elements can be found: the idea of fractional-order integration and differentiation, the mutually inverse relationship between them, the understanding that fractional-order differentiation and integration can be considered as the same generalized operation, and the unified notation for differentiation and integration of arbitrary real order.
Independently, the foundations of the subject were laid by Liouville in a paper from 1832. Oliver Heaviside introduced the practical use of fractional differential operators in electrical transmission line analysis circa 1890. The theory and applications of fractional calculus expanded greatly over the 19th and 20th centuries, and numerous contributors have given different definitions for fractional derivatives and integrals.
== Computing the fractional integral ==
Let {\displaystyle f(x)} be a function defined for {\displaystyle x>0}. Form the definite integral from 0 to {\displaystyle x}. Call this
{\displaystyle (Jf)(x)=\int _{0}^{x}f(t)\,dt\,.}
Repeating this process gives
{\displaystyle {\begin{aligned}\left(J^{2}f\right)(x)&=\int _{0}^{x}(Jf)(t)\,dt\\&=\int _{0}^{x}\left(\int _{0}^{t}f(s)\,ds\right)dt\,,\end{aligned}}}
and this can be extended arbitrarily.
The Cauchy formula for repeated integration, namely
{\displaystyle \left(J^{n}f\right)(x)={\frac {1}{(n-1)!}}\int _{0}^{x}\left(x-t\right)^{n-1}f(t)\,dt\,,}
leads in a straightforward way to a generalization for real n: using the gamma function to remove the discrete nature of the factorial function gives us a natural candidate for applications of the fractional integral operator as
{\displaystyle \left(J^{\alpha }f\right)(x)={\frac {1}{\Gamma (\alpha )}}\int _{0}^{x}\left(x-t\right)^{\alpha -1}f(t)\,dt\,.}
This is in fact a well-defined operator.
It is straightforward to show that the J operator satisfies
{\displaystyle {\begin{aligned}\left(J^{\alpha }\right)\left(J^{\beta }f\right)(x)&=\left(J^{\beta }\right)\left(J^{\alpha }f\right)(x)\\&=\left(J^{\alpha +\beta }f\right)(x)\\&={\frac {1}{\Gamma (\alpha +\beta )}}\int _{0}^{x}\left(x-t\right)^{\alpha +\beta -1}f(t)\,dt\,.\end{aligned}}}
This relationship is called the semigroup property of fractional differintegral operators.
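A minimal numerical sketch of both the definition and the semigroup property (assuming SciPy; the test function f(t) = t, the evaluation point, and the helper name J are arbitrary choices invented for this illustration):

```python
from scipy.integrate import quad
from scipy.special import gamma

def J(alpha, f, x):
    """Fractional integral (J^alpha f)(x) with base point 0."""
    integrand = lambda t: (x - t) ** (alpha - 1.0) * f(t)
    val, _ = quad(integrand, 0.0, x)
    return val / gamma(alpha)

f = lambda t: t   # known result: (J^alpha f)(x) = x**(1 + alpha) / Gamma(2 + alpha)
x = 1.5
print(J(0.5, f, x), x ** 1.5 / gamma(2.5))    # the two values agree

# Semigroup property: applying J^(1/2) twice equals the ordinary integral J^1.
half = lambda t: J(0.5, f, t)
print(J(0.5, half, x), x ** 2 / 2.0)          # J^(1/2) J^(1/2) t = t**2 / 2
```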
=== Riemann–Liouville fractional integral ===
The classical form of fractional calculus is given by the Riemann–Liouville integral, which is essentially what has been described above. The theory of fractional integration for periodic functions (therefore including the "boundary condition" of repeating after a period) is given by the Weyl integral. It is defined on Fourier series, and requires the constant Fourier coefficient to vanish (thus, it applies to functions on the unit circle whose integrals evaluate to zero). The Riemann–Liouville integral exists in two forms, upper and lower. Considering the interval [a,b], the integrals are defined as
{\displaystyle {\begin{aligned}\sideset {_{a}}{_{t}^{-\alpha }}Df(t)&=\sideset {_{a}}{_{t}^{\alpha }}If(t)\\&={\frac {1}{\Gamma (\alpha )}}\int _{a}^{t}\left(t-\tau \right)^{\alpha -1}f(\tau )\,d\tau \\\sideset {_{t}}{_{b}^{-\alpha }}Df(t)&=\sideset {_{t}}{_{b}^{\alpha }}If(t)\\&={\frac {1}{\Gamma (\alpha )}}\int _{t}^{b}\left(\tau -t\right)^{\alpha -1}f(\tau )\,d\tau \end{aligned}}}
where the former is valid for t > a and the latter is valid for t < b.
It has been suggested that the integral on the positive real axis (i.e. {\displaystyle a=0}) would be more appropriately named the Abel–Riemann integral, on the basis of history of discovery and use, and in the same vein the integral over the entire real line be named the Liouville–Weyl integral.
By contrast the Grünwald–Letnikov derivative starts with the derivative instead of the integral.
=== Hadamard fractional integral ===
The Hadamard fractional integral was introduced by Jacques Hadamard and is given by the following formula:
{\displaystyle \sideset {_{a}}{_{t}^{-\alpha }}{\mathbf {D} }f(t)={\frac {1}{\Gamma (\alpha )}}\int _{a}^{t}\left(\log {\frac {t}{\tau }}\right)^{\alpha -1}f(\tau ){\frac {d\tau }{\tau }},\qquad t>a\,.}
=== Atangana–Baleanu fractional integral (AB fractional integral) ===
The Atangana–Baleanu fractional integral of a continuous function is defined as:
{\displaystyle \sideset {_{{\hphantom {A}}a}^{\operatorname {AB} }}{_{t}^{\alpha }}If(t)={\frac {1-\alpha }{\operatorname {AB} (\alpha )}}f(t)+{\frac {\alpha }{\operatorname {AB} (\alpha )\Gamma (\alpha )}}\int _{a}^{t}\left(t-\tau \right)^{\alpha -1}f(\tau )\,d\tau }
== Fractional derivatives ==
Unfortunately, the comparable process for the derivative operator D is significantly more complex, but it can be shown that D is neither commutative nor additive in general.
Unlike classical Newtonian derivatives, fractional derivatives can be defined in a variety of different ways that often do not all lead to the same result even for smooth functions. Some of these are defined via a fractional integral. Because of the incompatibility of definitions, it is frequently necessary to be explicit about which definition is used.
=== Riemann–Liouville fractional derivative ===
The corresponding derivative is calculated using Lagrange's rule for differential operators. To find the αth order derivative, the nth order derivative of the integral of order (n − α) is computed, where n is the smallest integer greater than α (that is, n = ⌈α⌉). The Riemann–Liouville fractional derivative and integral have multiple applications, for instance in solutions to equations arising in systems such as tokamaks, and in variable-order fractional models. Similar to the definitions for the Riemann–Liouville integral, the derivative has upper and lower variants.
{\displaystyle {\begin{aligned}\sideset {_{a}}{_{t}^{\alpha }}Df(t)&={\frac {d^{n}}{dt^{n}}}\sideset {_{a}}{_{t}^{-(n-\alpha )}}Df(t)\\&={\frac {d^{n}}{dt^{n}}}\sideset {_{a}}{_{t}^{n-\alpha }}If(t)\\\sideset {_{t}}{_{b}^{\alpha }}Df(t)&={\frac {d^{n}}{dt^{n}}}\sideset {_{t}}{_{b}^{-(n-\alpha )}}Df(t)\\&={\frac {d^{n}}{dt^{n}}}\sideset {_{t}}{_{b}^{n-\alpha }}If(t)\end{aligned}}}
=== Caputo fractional derivative ===
Another option for computing fractional derivatives is the Caputo fractional derivative. It was introduced by Michele Caputo in his 1967 paper. In contrast to the Riemann–Liouville fractional derivative, when solving differential equations using Caputo's definition, it is not necessary to define the fractional order initial conditions. Caputo's definition is illustrated as follows, where again n = ⌈α⌉:
{\displaystyle \sideset {^{C}}{_{t}^{\alpha }}Df(t)={\frac {1}{\Gamma (n-\alpha )}}\int _{0}^{t}{\frac {f^{(n)}(\tau )}{\left(t-\tau \right)^{\alpha +1-n}}}\,d\tau .}
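A short numerical check can make the definition concrete. Assuming a base point of 0 and 0 < α < 1 (so n = 1), the sketch below approximates the Caputo derivative by midpoint quadrature and compares it with the known power-rule result D^α[t^k] = Γ(k + 1)/Γ(k + 1 − α) t^(k−α); the test function f(t) = t² is an arbitrary choice for illustration.
<syntaxhighlight lang="python">
import math

def caputo(df, t, alpha, n=100000):
    """Midpoint-rule approximation of the Caputo derivative of order
    0 < alpha < 1 with base point 0, given the first derivative df."""
    h = t / n
    total = 0.0
    for i in range(n):
        tau = (i + 0.5) * h
        total += df(tau) / (t - tau) ** alpha
    return total * h / math.gamma(1 - alpha)

# Caputo derivative of f(t) = t^2, with f'(t) = 2t, at order alpha = 0.5.
t, alpha = 1.5, 0.5
print(caputo(lambda x: 2 * x, t, alpha))                         # numerical
print(math.gamma(3) / math.gamma(3 - alpha) * t ** (2 - alpha))  # exact
</syntaxhighlight>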
There is also the Caputo fractional derivative defined as:
{\displaystyle D^{\nu }f(t)={\frac {1}{\Gamma (n-\nu )}}\int _{0}^{t}(t-u)^{(n-\nu -1)}f^{(n)}(u)\,du\qquad (n-1)<\nu <n}
which has the advantage that it is zero when f(t) is constant, and that its Laplace transform is expressed by means of the initial values of the function and its derivative. Moreover, there is the Caputo fractional derivative of distributed order, defined as
{\displaystyle {\begin{aligned}\sideset {_{a}^{b}}{^{\nu }}Df(t)&=\int _{a}^{b}\phi (\nu )\left[D^{(\nu )}f(t)\right]\,d\nu \\&=\int _{a}^{b}\left[{\frac {\phi (\nu )}{\Gamma (1-\nu )}}\int _{0}^{t}\left(t-u\right)^{-\nu }f'(u)\,du\right]\,d\nu \end{aligned}}}
where ϕ(ν) is a weight function, used to represent mathematically the presence of multiple memory formalisms.
=== Caputo–Fabrizio fractional derivative ===
In a 2015 paper, M. Caputo and M. Fabrizio presented a definition of fractional derivative with a non-singular kernel, for a function f(t) of class C^1, given by:
{\displaystyle \sideset {_{{\hphantom {C}}a}^{\text{CF}}}{_{t}^{\alpha }}Df(t)={\frac {1}{1-\alpha }}\int _{a}^{t}f'(\tau )\ e^{\left(-\alpha {\frac {t-\tau }{1-\alpha }}\right)}\ d\tau ,}
where a < 0 and α ∈ (0, 1].
=== Atangana–Baleanu fractional derivative ===
In 2016, Atangana and Baleanu suggested differential operators based on the generalized Mittag-Leffler function E_α. The aim was to introduce fractional differential operators with a non-singular nonlocal kernel. Their fractional differential operators are given below in the Riemann–Liouville sense and the Caputo sense, respectively. For a function f(t) of class C^1, they are given by
{\displaystyle \sideset {_{{\hphantom {AB}}a}^{\text{ABC}}}{_{t}^{\alpha }}Df(t)={\frac {\operatorname {AB} (\alpha )}{1-\alpha }}\int _{a}^{t}f'(\tau )E_{\alpha }\left(-\alpha {\frac {(t-\tau )^{\alpha }}{1-\alpha }}\right)d\tau ,}
If the function is continuous, the Atangana–Baleanu derivative in Riemann–Liouville sense is given by:
{\displaystyle \sideset {_{{\hphantom {AB}}a}^{\text{ABC}}}{_{t}^{\alpha }}Df(t)={\frac {\operatorname {AB} (\alpha )}{1-\alpha }}{\frac {d}{dt}}\int _{a}^{t}f(\tau )E_{\alpha }\left(-\alpha {\frac {(t-\tau )^{\alpha }}{1-\alpha }}\right)d\tau ,}
The kernel used in the Atangana–Baleanu fractional derivative has some properties of a cumulative distribution function. For example, for all α ∈ (0, 1], the function E_α is increasing on the real line, converges to 0 at −∞, and satisfies E_α(0) = 1
. Therefore, the function x ↦ 1 − E_α(−x^α) is the cumulative distribution function of a probability measure on the positive real numbers. The corresponding distribution, and any of its multiples, is called a Mittag-Leffler distribution of order α. It is well known that all these probability distributions are absolutely continuous. In particular, the Mittag-Leffler function has the particular case E_1, which is the exponential function, so the Mittag-Leffler distribution of order 1 is an exponential distribution. However, for α ∈ (0, 1), the Mittag-Leffler distributions are heavy-tailed. Their Laplace transform is given by:
{\displaystyle \mathbb {E} (e^{-\lambda X_{\alpha }})={\frac {1}{1+\lambda ^{\alpha }}}.}
This directly implies that, for α ∈ (0, 1), the expectation is infinite. In addition, these distributions are geometric stable distributions.
=== Riesz derivative ===
The Riesz derivative is defined as
{\displaystyle {\mathcal {F}}\left\{{\frac {\partial ^{\alpha }u}{\partial \left|x\right|^{\alpha }}}\right\}(k)=-\left|k\right|^{\alpha }{\mathcal {F}}\{u\}(k),}
where {\displaystyle {\mathcal {F}}} denotes the Fourier transform.
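Because the Riesz derivative is defined through the Fourier transform, it can be applied numerically with the FFT on a periodic grid. The sketch below (illustrative only; the grid size, box length, and Gaussian test function are arbitrary choices) multiplies the transform by −|k|^α; for α = 2 the operator reduces to the ordinary second derivative, which gives a convenient sanity check.
<syntaxhighlight lang="python">
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
u = np.exp(-x ** 2)

def riesz(u, k, alpha):
    """Apply the Riesz derivative of order alpha in Fourier space."""
    return np.fft.ifft(-np.abs(k) ** alpha * np.fft.fft(u)).real

# Sanity check: for alpha = 2 the Riesz derivative equals u''.
exact_upp = (4 * x ** 2 - 2) * np.exp(-x ** 2)
print(np.max(np.abs(riesz(u, k, 2.0) - exact_upp)))  # ~ machine precision
print(riesz(u, k, 1.5)[N // 2])                      # fractional case at x = 0
</syntaxhighlight>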
=== Conformable fractional derivative ===
The conformable fractional derivative of a function f of order α is given by
{\displaystyle T_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}{\frac {f\left(t+\epsilon t^{1-\alpha }\right)-f(t)}{\epsilon }}}
Unlike other definitions of the fractional derivative, the conformable fractional derivative obeys the product and quotient rules and has analogs of Rolle's theorem and the mean value theorem. However, this fractional derivative produces significantly different results compared to the Riemann–Liouville and Caputo fractional derivatives. In 2020, Feng Gao and Chunmei Chi defined the improved Caputo-type conformable fractional derivative, which more closely approximates the behavior of the Caputo fractional derivative:
{\displaystyle _{a}^{C}{\widetilde {T}}_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}\left[(1-\alpha )(f(t)-f(a))+\alpha {\frac {f\left(t+\epsilon (t-a)^{1-\alpha }\right)-f(t)}{\epsilon }}\right]}
where a and t are real numbers and a < t. They also defined the improved Riemann–Liouville-type conformable fractional derivative to similarly approximate the Riemann–Liouville fractional derivative:
{\displaystyle _{a}^{RL}{\widetilde {T}}_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}\left[(1-\alpha )f(t)+\alpha {\frac {f\left(t+\epsilon (t-a)^{1-\alpha }\right)-f(t)}{\epsilon }}\right]}
where a and t are real numbers and a < t. Both improved conformable fractional derivatives have analogs of Rolle's theorem and the interior extremum theorem.
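For differentiable f, applying the chain rule to the defining limit shows the conformable derivative reduces to t^(1−α) f′(t). The following sketch (a numeric check with arbitrary test values; the helper name is hypothetical) verifies this by evaluating the difference quotient at a small ε.
<syntaxhighlight lang="python">
import math

def conformable(f, t, alpha, eps=1e-8):
    """Difference-quotient approximation of the conformable derivative."""
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

f = math.sin
t, alpha = 2.0, 0.7
print(conformable(f, t, alpha))        # numeric limit
print(t ** (1 - alpha) * math.cos(t))  # t^(1-alpha) * f'(t), for comparison
</syntaxhighlight>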
=== Other types ===
Classical fractional derivatives include:
Grünwald–Letnikov derivative
Sonin–Letnikov derivative
Liouville derivative
Caputo derivative
Hadamard derivative
Marchaud derivative
Riesz derivative
Miller–Ross derivative
Weyl derivative
Erdélyi–Kober derivative
{\displaystyle F^{\alpha }}-derivative
New fractional derivatives include:
Coimbra derivative
Katugampola derivative
Hilfer derivative
Davidson derivative
Chen derivative
Caputo–Fabrizio derivative
Atangana–Baleanu derivative
==== Coimbra derivative ====
The Coimbra derivative is used for physical modeling: a number of applications in both mechanics and optics can be found in the works of Coimbra and collaborators, as well as additional applications to physical problems and numerical implementations studied in a number of works by other authors.
For q(t) < 1:
{\displaystyle {\begin{aligned}^{\mathbb {C} }_{a}\mathbb {D} ^{q(t)}f(t)={\frac {1}{\Gamma [1-q(t)]}}\int _{0^{+}}^{t}(t-\tau )^{-q(t)}{\frac {d\,f(\tau )}{d\tau }}d\tau \,+\,{\frac {(f(0^{+})-f(0^{-}))\,t^{-q(t)}}{\Gamma (1-q(t))}},\end{aligned}}}
where the lower limit a can be taken as either 0⁻ or −∞ as long as f(t) is identically zero from −∞ to 0⁻. Note that this operator returns the correct fractional derivatives for all values of t and can be applied to either the dependent function itself f(t), with a variable order of the form q(f(t)), or to the independent variable, with a variable order of the form q(t). The Coimbra derivative can be generalized to any order, leading to the Coimbra Generalized Order Differintegration Operator (GODO).
For q(t) < m:
{\displaystyle {\begin{aligned}^{\mathbb {\quad C} }_{\,\,-\infty }\mathbb {D} ^{q(t)}f(t)={\frac {1}{\Gamma [m-q(t)]}}\int _{0^{+}}^{t}(t-\tau )^{m-1-q(t)}{\frac {d^{m}f(\tau )}{d\tau ^{m}}}d\tau \,+\,\sum _{n=0}^{m-1}{\frac {({\frac {d^{n}f(t)}{dt^{n}}}|_{0^{+}}-{\frac {d^{n}f(t)}{dt^{n}}}|_{0^{-}})\,t^{n-q(t)}}{\Gamma [n+1-q(t)]}},\end{aligned}}}
where m is an integer larger than the largest value of q(t) over all values of t. Note that the second (summation) term on the right side of the definition above can be expressed as
{\displaystyle {\begin{aligned}{\frac {1}{\Gamma [m-q(t)]}}\sum _{n=0}^{m-1}\{[{\frac {d^{n}\!f(t)}{dt^{n}}}|_{0^{+}}-{\frac {d^{n}\!f(t)}{dt^{n}}}|_{0^{-}}]\,t^{n-q(t)}\prod _{j=n+1}^{m-1}[j-q(t)]\}\end{aligned}}}
so as to keep the denominator on the positive branch of the Gamma (Γ) function and for ease of numerical calculation.
=== Nature of the fractional derivative ===
The a-th derivative of a function f at a point x is a local property only when a is an integer; this is not the case for non-integer power derivatives. In other words, a non-integer fractional derivative of f at x = c depends on all values of f, even those far away from c. Therefore, it is expected that the fractional derivative operation involves some sort of boundary conditions, involving information on the function further out.
The fractional derivative of a function of order a is nowadays often defined by means of the Fourier or Mellin integral transforms.
== Generalizations ==
=== Erdélyi–Kober operator ===
The Erdélyi–Kober operator is an integral operator introduced by Arthur Erdélyi (1940) and Hermann Kober (1940), and is given by
{\displaystyle {\frac {x^{-\nu -\alpha +1}}{\Gamma (\alpha )}}\int _{0}^{x}\left(t-x\right)^{\alpha -1}t^{-\alpha -\nu }f(t)\,dt\,,}
which generalizes the Riemann–Liouville fractional integral and the Weyl integral.
== Functional calculus ==
In the context of functional analysis, functions f(D) more general than powers are studied in the functional calculus of spectral theory. The theory of pseudo-differential operators also allows one to consider powers of D. The operators arising are examples of singular integral operators; and the generalisation of the classical theory to higher dimensions is called the theory of Riesz potentials. So there are a number of contemporary theories available, within which fractional calculus can be discussed. See also Erdélyi–Kober operator, important in special function theory (Kober 1940), (Erdélyi 1950–1951).
== Applications ==
=== Fractional conservation of mass ===
As described by Wheatcraft and Meerschaert (2008), a fractional conservation of mass equation is needed to model fluid flow when the control volume is not large enough compared to the scale of heterogeneity and when the flux within the control volume is non-linear. In the referenced paper, the fractional conservation of mass equation for fluid flow is:
{\displaystyle -\rho \left(\nabla ^{\alpha }\cdot {\vec {u}}\right)=\Gamma (\alpha +1)\Delta x^{1-\alpha }\rho \left(\beta _{s}+\phi \beta _{w}\right){\frac {\partial p}{\partial t}}}
=== Electrochemical analysis ===
When studying the redox behavior of a substrate in solution, a voltage is applied at an electrode surface to force electron transfer between electrode and substrate. The resulting electron transfer is measured as a current. The current depends upon the concentration of substrate at the electrode surface. As substrate is consumed, fresh substrate diffuses to the electrode as described by Fick's laws of diffusion. Taking the Laplace transform of Fick's second law yields an ordinary second-order differential equation (here in dimensionless form):
{\displaystyle {\frac {d^{2}}{dx^{2}}}C(x,s)=sC(x,s)}
whose solution C(x,s) contains a one-half power dependence on s. Taking the derivative of C(x,s) and then the inverse Laplace transform yields the following relationship:
{\displaystyle {\frac {d}{dx}}C(x,t)={\frac {d^{\scriptstyle {\frac {1}{2}}}}{dt^{\scriptstyle {\frac {1}{2}}}}}C(x,t)}
which relates the concentration of substrate at the electrode surface to the current. This relationship is applied in electrochemical kinetics to elucidate mechanistic behavior. For example, it has been used to study the rate of dimerization of substrates upon electrochemical reduction.
=== Groundwater flow problem ===
In 2013–2014 Atangana et al. described some groundwater flow problems using the concept of a derivative with fractional order. In these works, the classical Darcy law is generalized by regarding the water flow as a function of a non-integer order derivative of the piezometric head. This generalized law and the law of conservation of mass are then used to derive a new equation for groundwater flow.
=== Fractional advection dispersion equation ===
This equation has been shown to be useful for modeling contaminant flow in heterogeneous porous media.
Atangana and Kilicman extended the fractional advection dispersion equation to a variable-order equation. In their work, the hydrodynamic dispersion equation was generalized using the concept of a variational order derivative. The modified equation was numerically solved via the Crank–Nicolson method. The stability and convergence in numerical simulations showed that the modified equation is more reliable in predicting the movement of pollution in deformable aquifers than equations with constant fractional and integer derivatives.
=== Time-space fractional diffusion equation models ===
Anomalous diffusion processes in complex media can be well characterized by using fractional-order diffusion equation models. The time derivative term corresponds to long-time heavy tail decay and the spatial derivative for diffusion nonlocality. The time-space fractional diffusion governing equation can be written as
{\displaystyle {\frac {\partial ^{\alpha }u}{\partial t^{\alpha }}}=-K(-\Delta )^{\beta }u.}
A simple extension of the fractional derivative is the variable-order fractional derivative, in which α and β are changed into α(x, t) and β(x, t). Its applications in anomalous diffusion modeling can be found in the references.
=== Structural damping models ===
Fractional derivatives are used to model viscoelastic damping in certain types of materials like polymers.
=== PID controllers ===
Generalizing PID controllers to use fractional orders can increase their degree of freedom. The new equation relating the control variable u(t) in terms of a measured error value e(t) can be written as
{\displaystyle u(t)=K_{\mathrm {p} }e(t)+K_{\mathrm {i} }D_{t}^{-\alpha }e(t)+K_{\mathrm {d} }D_{t}^{\beta }e(t)}
where α and β are positive fractional orders and Kp, Ki, and Kd, all non-negative, denote the coefficients for the proportional, integral, and derivative terms, respectively (sometimes denoted P, I, and D).
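One plausible way to realize such a controller in discrete time is to approximate the fractional integral and derivative of the error history with Grünwald–Letnikov binomial weights. The sketch below is a minimal illustration under assumed sample spacing h and made-up gains, orders, and error history; `fractional_pid` and `gl_weights` are hypothetical helper names, not a standard API.
<syntaxhighlight lang="python">
def gl_weights(order, n):
    """Grunwald-Letnikov weights w_j = (-1)^j C(order, j); a positive
    order differentiates, a negative order integrates."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - order) / j)
    return w

def fractional_pid(e, h, Kp, Ki, Kd, alpha, beta):
    """u = Kp*e + Ki*D^{-alpha} e + Kd*D^{beta} e on a uniform grid of
    spacing h, via D^q e(t_n) ~ h^{-q} sum_j w_j e(t_{n-j})."""
    n = len(e)
    wi = gl_weights(-alpha, n)   # fractional integral of order alpha
    wd = gl_weights(beta, n)     # fractional derivative of order beta
    I = h ** alpha * sum(wi[j] * e[n - 1 - j] for j in range(n))
    D = h ** -beta * sum(wd[j] * e[n - 1 - j] for j in range(n))
    return Kp * e[-1] + Ki * I + Kd * D

errors = [1.0 / (k + 1) for k in range(50)]   # hypothetical error history
print(fractional_pid(errors, h=0.1, Kp=1.0, Ki=0.5, Kd=0.1, alpha=0.7, beta=0.6))
</syntaxhighlight>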
=== Acoustic wave equations for complex media ===
The propagation of acoustical waves in complex media, such as in biological tissue, commonly implies attenuation obeying a frequency power-law. This kind of phenomenon may be described using a causal wave equation which incorporates fractional time derivatives:
{\displaystyle \nabla ^{2}u-{\dfrac {1}{c_{0}^{2}}}{\frac {\partial ^{2}u}{\partial t^{2}}}+\tau _{\sigma }^{\alpha }{\dfrac {\partial ^{\alpha }}{\partial t^{\alpha }}}\nabla ^{2}u-{\dfrac {\tau _{\epsilon }^{\beta }}{c_{0}^{2}}}{\dfrac {\partial ^{\beta +2}u}{\partial t^{\beta +2}}}=0\,.}
See also Holm & Näsholm (2011) and the references therein. Such models are linked to the commonly recognized hypothesis that multiple relaxation phenomena give rise to the attenuation measured in complex media. This link is further described in Näsholm & Holm (2011b) and in the survey paper, as well as the Acoustic attenuation article. See Holm & Näsholm (2013) for a paper which compares fractional wave equations which model power-law attenuation. This book on power-law attenuation also covers the topic in more detail.
Pandey and Holm gave a physical meaning to fractional differential equations by deriving them from physical principles and interpreting the fractional order in terms of the parameters of the acoustical media, for example in fluid-saturated granular unconsolidated marine sediments. Notably, Pandey and Holm derived Lomnitz's law in seismology and Nutting's law in non-Newtonian rheology using the framework of fractional calculus. Nutting's law was used to model the wave propagation in marine sediments using fractional derivatives.
=== Fractional Schrödinger equation in quantum theory ===
The fractional Schrödinger equation, a fundamental equation of fractional quantum mechanics, has the following form:
{\displaystyle i\hbar {\frac {\partial \psi (\mathbf {r} ,t)}{\partial t}}=D_{\alpha }\left(-\hbar ^{2}\Delta \right)^{\frac {\alpha }{2}}\psi (\mathbf {r} ,t)+V(\mathbf {r} ,t)\psi (\mathbf {r} ,t)\,.}
where the solution of the equation is the wavefunction ψ(r, t) – the quantum mechanical probability amplitude for the particle to have a given position vector r at any given time t, and ħ is the reduced Planck constant. The potential energy function V(r, t) depends on the system.
Further, Δ = ∂²/∂r² is the Laplace operator, Dα is a scale constant with physical dimension [Dα] = J^(1−α)·m^α·s^(−α) = kg^(1−α)·m^(2−α)·s^(α−2) (at α = 2, D₂ = 1/(2m) for a particle of mass m), and the operator (−ħ²Δ)^(α/2) is the 3-dimensional fractional quantum Riesz derivative defined by
{\displaystyle (-\hbar ^{2}\Delta )^{\frac {\alpha }{2}}\psi (\mathbf {r} ,t)={\frac {1}{(2\pi \hbar )^{3}}}\int d^{3}pe^{{\frac {i}{\hbar }}\mathbf {p} \cdot \mathbf {r} }|\mathbf {p} |^{\alpha }\varphi (\mathbf {p} ,t)\,.}
The index α in the fractional Schrödinger equation is the Lévy index, 1 < α ≤ 2.
==== Variable-order fractional Schrödinger equation ====
As a natural generalization of the fractional Schrödinger equation, the variable-order fractional Schrödinger equation has been exploited to study fractional quantum phenomena:
{\displaystyle i\hbar {\frac {\partial \psi ^{\alpha (\mathbf {r} )}(\mathbf {r} ,t)}{\partial t^{\alpha (\mathbf {r} )}}}=\left(-\hbar ^{2}\Delta \right)^{\frac {\beta (t)}{2}}\psi (\mathbf {r} ,t)+V(\mathbf {r} ,t)\psi (\mathbf {r} ,t),}
where Δ = ∂²/∂r² is the Laplace operator and the operator (−ħ²Δ)^(β(t)/2) is the variable-order fractional quantum Riesz derivative.
== See also ==
Acoustic attenuation
Autoregressive fractionally integrated moving average
Initialized fractional calculus
Nonlocal operator
=== Other fractional theories ===
Fractional-order system
Fractional Fourier transform
Prabhakar function
== Notes ==
== References ==
== Further reading ==
=== Articles regarding the history of fractional calculus ===
Debnath, L. (2004). "A brief historical introduction to fractional calculus". International Journal of Mathematical Education in Science and Technology. 35 (4): 487–501. doi:10.1080/00207390410001686571. S2CID 122198977.
=== Books ===
Miller, Kenneth S.; Ross, Bertram, eds. (1993). An Introduction to the Fractional Calculus and Fractional Differential Equations. John Wiley & Sons. ISBN 978-0-471-58884-9.
Samko, S.; Kilbas, A.A.; Marichev, O. (1993). Fractional Integrals and Derivatives: Theory and Applications. Taylor & Francis Books. ISBN 978-2-88124-864-1.
Carpinteri, A.; Mainardi, F., eds. (1998). Fractals and Fractional Calculus in Continuum Mechanics. Springer-Verlag Telos. ISBN 978-3-211-82913-4.
Igor Podlubny (27 October 1998). Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications. Elsevier. ISBN 978-0-08-053198-4.
Tarasov, V.E. (2010). Fractional Dynamics: Applications of Fractional Calculus to Dynamics of Particles, Fields and Media. Nonlinear Physical Science. Springer. doi:10.1007/978-3-642-14003-7. ISBN 978-3-642-14003-7.
Li, Changpin; Cai, Min (2019). Theory and Numerical Approximations of Fractional Integrals and Derivatives. SIAM. doi:10.1137/1.9781611975888. ISBN 978-1-61197-587-1.
== External links ==
In mathematical analysis, particularly numerical analysis, the rate of convergence and order of convergence of a sequence that converges to a limit are any of several characterizations of how quickly that sequence approaches its limit. These are broadly divided into rates and orders of convergence that describe how quickly a sequence further approaches its limit once it is already close to it, called asymptotic rates and orders of convergence, and those that describe how quickly sequences approach their limits from starting points that are not necessarily close to their limits, called non-asymptotic rates and orders of convergence.
Asymptotic behavior is particularly useful for deciding when to stop a sequence of numerical computations, for instance once a target precision has been reached with an iterative root-finding algorithm, but pre-asymptotic behavior is often crucial for determining whether to begin a sequence of computations at all, since it may be impossible or impractical to ever reach a target precision with a poorly chosen approach. Asymptotic rates and orders of convergence are the focus of this article.
In practical numerical computations, asymptotic rates and orders of convergence follow two common conventions for two types of sequences: the first for sequences of iterations of an iterative numerical method and the second for sequences of successively more accurate numerical discretizations of a target. In formal mathematics, rates of convergence and orders of convergence are often described comparatively using asymptotic notation commonly called "big O notation," which can be used to encompass both of the prior conventions; this is an application of asymptotic analysis.
For iterative methods, a sequence
(
x
k
)
{\displaystyle (x_{k})}
that converges to
L
{\displaystyle L}
is said to have asymptotic order of convergence
q
≥
1
{\displaystyle q\geq 1}
and asymptotic rate of convergence
μ
{\displaystyle \mu }
if
lim
k
→
∞
|
x
k
+
1
−
L
|
|
x
k
−
L
|
q
=
μ
.
{\displaystyle \lim _{k\rightarrow \infty }{\frac {\left|x_{k+1}-L\right|}{\left|x_{k}-L\right|^{q}}}=\mu .}
Where methodological precision is required, these rates and orders of convergence are known specifically as the rates and orders of Q-convergence, short for quotient-convergence, since the limit in question is a quotient of error terms. The rate of convergence
μ
{\displaystyle \mu }
may also be called the asymptotic error constant, and some authors will use rate where this article uses order. Series acceleration methods are techniques for improving the rate of convergence of the sequence of partial sums of a series and possibly its order of convergence, also.
Similar concepts are used for sequences of discretizations. For instance, ideally the solution of a differential equation discretized via a regular grid will converge to the solution of the continuous equation as the grid spacing goes to zero, and if so the asymptotic rate and order of that convergence are important properties of the gridding method. A sequence of approximate grid solutions (y_k) of some problem that converges to a true solution S with a corresponding sequence of regular grid spacings (h_k) that converge to 0 is said to have asymptotic order of convergence q and asymptotic rate of convergence μ if
{\displaystyle \lim _{k\rightarrow \infty }{\frac {\left|y_{k}-S\right|}{h_{k}^{q}}}=\mu ,}
where the absolute value symbols stand for a metric for the space of solutions such as the uniform norm. Similar definitions also apply for non-grid discretization schemes such as the polygon meshes of a finite element method or the basis sets in computational chemistry: in general, the appropriate definition of the asymptotic rate μ will involve the asymptotic limit of the ratio of an approximation error term above to an asymptotic order q power of a discretization scale parameter below.
In general, comparatively, one sequence (a_k) that converges to a limit L_a is said to asymptotically converge more quickly than another sequence (b_k) that converges to a limit L_b if
{\displaystyle \lim _{k\rightarrow \infty }{\frac {\left|a_{k}-L_{a}\right|}{|b_{k}-L_{b}|}}=0,}
and the two are said to asymptotically converge with the same order of convergence if the limit is any positive finite value. The two are said to be asymptotically equivalent if the limit is equal to one. These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis and find wide application in mathematical analysis as a whole, including numerical analysis, real analysis, complex analysis, and functional analysis.
== Asymptotic rates of convergence for iterative methods ==
=== Definitions ===
==== Q-convergence ====
Suppose that the sequence (x_k) of iterates of an iterative method converges to the limit number L as k → ∞. The sequence is said to converge with order q to L and with a rate of convergence μ if the k → ∞ limit of quotients of absolute differences of sequential iterates x_k, x_{k+1} from their limit L satisfies
{\displaystyle \lim _{k\to \infty }{\frac {|x_{k+1}-L|}{|x_{k}-L|^{q}}}=\mu }
for some positive constant μ ∈ (0, 1) if q = 1 and μ ∈ (0, ∞) if q > 1
. Other more technical rate definitions are needed if the sequence converges but {\textstyle \lim _{k\to \infty }{\frac {|x_{k+1}-L|}{|x_{k}-L|}}=1} or the limit does not exist. This definition is technically called Q-convergence, short for quotient-convergence, and the rates and orders are called rates and orders of Q-convergence when that technical specificity is needed. § R-convergence, below, is an appropriate alternative when this limit does not exist.
Sequences with larger orders q converge more quickly than those with smaller order, and those with smaller rates μ converge more quickly than those with larger rates for a given order. This "smaller rates converge more quickly" behavior among sequences of the same order is standard but it can be counterintuitive. Therefore it is also common to define −log₁₀ μ as the rate; this is the "number of extra decimals of precision per iterate" for sequences that converge with order 1.
Integer powers of q are common and are given common names. Convergence with order q = 1 and μ ∈ (0, 1) is called linear convergence and the sequence is said to converge linearly to L. Convergence with q = 2 and any μ is called quadratic convergence and the sequence is said to converge quadratically. Convergence with q = 3 and any μ is called cubic convergence. However, it is not necessary that q be an integer. For example, the secant method, when converging to a regular, simple root, has an order of the golden ratio φ ≈ 1.618.
The common names for integer orders of convergence connect to asymptotic big O notation, where the convergence of the quotient implies {\textstyle |x_{k+1}-L|=O(|x_{k}-L|^{q}).} These are linear, quadratic, and cubic polynomial expressions when q is 1, 2, and 3, respectively. More precisely, the limits imply the leading order error is exactly {\textstyle \mu |x_{k}-L|^{q},} which can be expressed using asymptotic small o notation as {\textstyle |x_{k+1}-L|=\mu |x_{k}-L|^{q}+o(|x_{k}-L|^{q}).}
In general, when q > 1 for a sequence, or for any sequence that satisfies {\textstyle \lim _{k\to \infty }{\frac {|x_{k+1}-L|}{|x_{k}-L|}}=0,} those sequences are said to converge superlinearly (i.e., faster than linearly). A sequence is said to converge sublinearly (i.e., slower than linearly) if it converges and {\textstyle \lim _{k\to \infty }{\frac {|x_{k+1}-L|}{|x_{k}-L|}}=1.}
Importantly, it is incorrect to say that these sublinear-order sequences converge linearly with an asymptotic rate of convergence of 1. A sequence (x_k) converges logarithmically to L if the sequence converges sublinearly and also {\textstyle \lim _{k\to \infty }{\frac {|x_{k+1}-x_{k}|}{|x_{k}-x_{k-1}|}}=1.}
==== R-convergence ====
The definitions of Q-convergence rates have the shortcoming that they do not naturally capture the convergence behavior of sequences that do converge, but do not converge with an asymptotically constant rate with every step, so that the Q-convergence limit does not exist. One class of examples is the staggered geometric progressions that get closer to their limits only every other step or every several steps, for instance the example {\textstyle (b_{k})=1,1,1/4,1/4,1/16,1/16,\ldots ,1/4^{\left\lfloor {\frac {k}{2}}\right\rfloor },\ldots } detailed below (where ⌊x⌋ is the floor function applied to x). The defining Q-linear convergence limits do not exist for this sequence because one subsequence of error quotients starting from odd steps converges to 1 and another subsequence of quotients starting from even steps converges to 1/4. When two subsequences of a sequence converge to different limits, the sequence does not itself converge to a limit.
In cases like these, a closely related but more technical definition of rate of convergence called R-convergence is more appropriate. The "R-" prefix stands for "root." A sequence (x_k) that converges to L is said to converge at least R-linearly if there exists an error-bounding sequence (ε_k) such that {\textstyle |x_{k}-L|\leq \varepsilon _{k}\quad {\text{for all }}k} and (ε_k) converges Q-linearly to zero; analogous definitions hold for R-superlinear convergence, R-sublinear convergence, R-quadratic convergence, and so on.
Any error bounding sequence (ε_k) provides a lower bound on the rate and order of R-convergence and the greatest lower bound gives the exact rate and order of R-convergence. As for Q-convergence, sequences with larger orders q converge more quickly and those with smaller rates μ converge more quickly for a given order, so these greatest-rate-lower-bound error-upper-bound sequences are those that have the greatest possible q and the smallest possible μ given that q.
For the example (b_k) given above, the tight bounding sequence {\textstyle (\varepsilon _{k})=2,1,1/2,1/4,1/8,1/16,\ldots ,1/2^{k-1},\ldots } converges Q-linearly with rate 1/2, so (b_k) converges R-linearly with rate 1/2. Generally, for any staggered geometric progression (ar^⌊k/m⌋), the sequence will not converge Q-linearly but will converge R-linearly with rate {\textstyle {\sqrt[{m}]{|r|}}.} These examples demonstrate why the "R" in R-linear convergence is short for "root."
=== Examples ===
The geometric progression {\textstyle (a_{k})=1,{\frac {1}{2}},{\frac {1}{4}},{\frac {1}{8}},{\frac {1}{16}},{\frac {1}{32}},\ldots ,1/{2^{k}},\dots } converges to L = 0. Plugging the sequence into the definition of Q-linear convergence (i.e., order of convergence 1) shows that
{\displaystyle \lim _{k\to \infty }{\frac {\left|1/2^{k+1}-0\right|}{\left|1/2^{k}-0\right|}}=\lim _{k\to \infty }{\frac {2^{k}}{2^{k+1}}}={\frac {1}{2}}.}
Thus (a_k) converges Q-linearly with a convergence rate of μ = 1/2; see the first plot of the figure below.
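This limit can be observed directly in floating point. The short check below (illustrative only) prints the successive error quotients |a_{k+1} − 0| / |a_k − 0| for a_k = 1/2^k, each of which equals 1/2 exactly.
<syntaxhighlight lang="python">
a = [1 / 2 ** k for k in range(10)]           # geometric progression, L = 0
ratios = [a[k + 1] / a[k] for k in range(9)]
print(ratios)                                 # every quotient equals 0.5
</syntaxhighlight>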
More generally, for any initial value a in the real numbers and a real number common ratio r between −1 and 1, a geometric progression (ar^k) converges linearly with rate |r| and the sequence of partial sums of a geometric series {\textstyle (\sum _{n=0}^{k}ar^{n})} also converges linearly with rate |r|. The same holds also for geometric progressions and geometric series parameterized by any complex numbers a ∈ ℂ, r ∈ ℂ with |r| < 1.
The staggered geometric progression {\textstyle (b_{k})=1,1,{\frac {1}{4}},{\frac {1}{4}},{\frac {1}{16}},{\frac {1}{16}},\ldots ,1/4^{\left\lfloor {\frac {k}{2}}\right\rfloor },\ldots ,} using the floor function ⌊x⌋ that gives the largest integer that is less than or equal to x, converges R-linearly to 0 with rate 1/2, but it does not converge Q-linearly, because the odd-step and even-step subsequences of error quotients converge to the different limits 1 and 1/4, as described above; see the second plot of the figure below. Generally, for any staggered geometric progression (ar^⌊k/m⌋), the sequence will not converge Q-linearly but will converge R-linearly with rate {\textstyle {\sqrt[{m}]{|r|}};} these examples demonstrate why the "R" in R-linear convergence is short for "root."
The sequence {\displaystyle (c_{k})={\frac {1}{2}},{\frac {1}{4}},{\frac {1}{16}},{\frac {1}{256}},{\frac {1}{65,\!536}},\ldots ,{\frac {1}{2^{2^{k}}}},\ldots } converges to zero Q-superlinearly. In fact, it is quadratically convergent with a quadratic convergence rate of 1. It is shown in the third plot of the figure below.
Finally, the sequence {\displaystyle (d_{k})=1,{\frac {1}{2}},{\frac {1}{3}},{\frac {1}{4}},{\frac {1}{5}},{\frac {1}{6}},\ldots ,{\frac {1}{k+1}},\ldots } converges to zero Q-sublinearly and logarithmically, and its convergence is shown as the fourth plot of the figure below.
=== Convergence rates to fixed points of recurrent sequences ===
Recurrent sequences {\textstyle x_{k+1}:=f(x_{k})}, called fixed point iterations, define discrete time autonomous dynamical systems and have important general applications in mathematics through various fixed-point theorems about their convergence behavior. When f is continuously differentiable, given a fixed point p with f(p) = p such that |f′(p)| < 1, the fixed point is an attractive fixed point and the recurrent sequence will converge at least linearly to p for any starting value x₀ sufficiently close to p. If |f′(p)| = 0 and |f″(p)| < 1, then the recurrent sequence will converge at least quadratically, and so on. If |f′(p)| > 1, then the fixed point is a repulsive fixed point and sequences cannot converge to p from its immediate neighborhoods, though they may still jump to p directly from outside of its local neighborhoods.
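A standard concrete example (chosen here for illustration) is the iteration x_{k+1} = cos(x_k), whose fixed point p ≈ 0.739085 satisfies |f′(p)| = sin(p) ≈ 0.674 < 1, so the errors shrink Q-linearly with rate about 0.674:
<syntaxhighlight lang="python">
import math

p = 0.7390851332151607        # fixed point of cos, to double precision
x = 1.0
prev_err = abs(x - p)
for k in range(10):
    x = math.cos(x)
    err = abs(x - p)
    print(k, err / prev_err)  # quotients approach sin(p) ~ 0.6736
    prev_err = err
</syntaxhighlight>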
=== Order estimation ===
A practical method to calculate the order of convergence for a sequence generated by a fixed point iteration is to calculate the following sequence, which converges to the order q:
{\displaystyle q\approx {\frac {\log \left|\displaystyle {\frac {x_{k+1}-x_{k}}{x_{k}-x_{k-1}}}\right|}{\log \left|\displaystyle {\frac {x_{k}-x_{k-1}}{x_{k-1}-x_{k-2}}}\right|}}.}
For the numerical approximation of an exact value through a numerical method of order q, see the references.
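Applied to a quadratically convergent iteration, the estimator above should approach q = 2. The sketch below (illustrative) runs Newton's method for √2, i.e. x_{k+1} = (x_k + 2/x_k)/2, and evaluates the log-ratio formula on consecutive iterates.
<syntaxhighlight lang="python">
import math

xs = [2.0]                    # Newton's method for sqrt(2)
for _ in range(5):
    x = xs[-1]
    xs.append(0.5 * (x + 2.0 / x))

# Estimate the order q from four consecutive iterates.
for k in range(2, len(xs) - 1):
    num = math.log(abs((xs[k + 1] - xs[k]) / (xs[k] - xs[k - 1])))
    den = math.log(abs((xs[k] - xs[k - 1]) / (xs[k - 1] - xs[k - 2])))
    print(num / den)          # values approach 2
</syntaxhighlight>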
=== Accelerating convergence rates ===
Many methods exist to accelerate the convergence of a given sequence, i.e., to transform one sequence into a second sequence that converges more quickly to the same limit. Such techniques are in general known as "series acceleration" methods. These may reduce the computational costs of approximating the limits of the original sequences. One example of series acceleration by sequence transformation is Aitken's delta-squared process. These methods in general, and in particular Aitken's method, do not typically increase the order of convergence and thus they are useful only if initially the convergence is not faster than linear: if (x_k) converges linearly, Aitken's method transforms it into a sequence (a_k) that still converges linearly (except for pathologically designed special cases), but faster in the sense that {\textstyle \lim _{k\rightarrow \infty }(a_{k}-L)/(x_{k}-L)=0}. On the other hand, if the convergence is already of order ≥ 2, Aitken's method will bring no improvement.
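A minimal sketch of Aitken's delta-squared transform follows, applied here (as an assumed test case) to the linearly convergent iteration x_{k+1} = cos(x_k); the transformed sequence approaches the fixed point visibly faster than the original.
<syntaxhighlight lang="python">
import math

def aitken(x):
    """Aitken delta-squared transform of a list of iterates."""
    return [x[k] - (x[k + 1] - x[k]) ** 2 / (x[k + 2] - 2 * x[k + 1] + x[k])
            for k in range(len(x) - 2)]

xs = [1.0]
for _ in range(12):
    xs.append(math.cos(xs[-1]))

p = 0.7390851332151607
for orig, acc in zip(xs, aitken(xs)):
    print(abs(orig - p), abs(acc - p))  # accelerated errors shrink faster
</syntaxhighlight>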
== Asymptotic rates of convergence for discretization methods ==
=== Definitions ===
A sequence of discretized approximations (y_k) of some continuous-domain function S that converges to this target, together with a corresponding sequence of discretization scale parameters (h_k) that converge to 0, is said to have asymptotic order of convergence q and asymptotic rate of convergence μ if
{\displaystyle \lim _{k\rightarrow \infty }{\frac {\left|y_{k}-S\right|}{h_{k}^{q}}}=\mu ,}
for some positive constants μ and q and using |x| to stand for an appropriate distance metric on the space of solutions, most often either the uniform norm, the absolute difference, or the Euclidean distance. Discretization scale parameters may be spacings of a regular grid in space or in time, the inverse of the number of points of a grid in one dimension, an average or maximum distance between points in a polygon mesh, the single-dimension spacings of an irregular sparse grid, or a characteristic quantum of energy or momentum in a quantum mechanical basis set.
When all the discretizations are generated using a single common method, it is common to discuss the asymptotic rate and order of convergence for the method itself rather than any particular discrete sequences of discretized solutions. In these cases one considers a single abstract discretized solution y_h generated using the method with a scale parameter h, and then the method is said to have asymptotic order of convergence q and asymptotic rate of convergence μ if
{\displaystyle \lim _{h\rightarrow 0}{\frac {\left|y_{h}-S\right|}{h^{q}}}=\mu ,}
again for some positive constants μ and q and an appropriate metric |x|.
This implies that the error of a discretization asymptotically scales like the discretization's scale parameter to the q power, or {\textstyle \left|y_{h}-S\right|=O(h^{q})} using asymptotic big O notation. More precisely, it implies the leading order error is μh^q, which can be expressed using asymptotic small o notation as {\textstyle \left|y_{h}-S\right|=\mu h^{q}+o(h^{q}).}
In some cases multiple rates and orders for the same method but with different choices of scale parameter may be important, for instance for finite difference methods based on multidimensional grids where the different dimensions have different grid spacings or for finite element methods based on polygon meshes where choosing either average distance between mesh points or maximum distance between mesh points as scale parameters may imply different orders of convergence. In some especially technical contexts, discretization methods' asymptotic rates and orders of convergence will be characterized by several scale parameters at once with the value of each scale parameter possibly affecting the asymptotic rate and order of convergence of the method with respect to the other scale parameters.
=== Example ===
Consider the ordinary differential equation
{\displaystyle {\frac {dy}{dx}}=-\kappa y}
with initial condition y(0) = y₀. We can approximate a solution to this one-dimensional equation using a sequence (y_n), applying the forward Euler method for numerical discretization using any regular grid spacing h and grid points indexed by n as follows:
{\displaystyle {\frac {y_{n+1}-y_{n}}{h}}=-\kappa y_{n},}
which implies the first-order linear recurrence with constant coefficients
{\displaystyle y_{n+1}=y_{n}(1-h\kappa ).}
Given y(0) = y₀, the sequence satisfying that recurrence is the geometric progression
{\displaystyle y_{n}=y_{0}(1-h\kappa )^{n}=y_{0}\left(1-nh\kappa +{\frac {n(n-1)}{2}}h^{2}\kappa ^{2}+\cdots \right).}
The exact analytical solution to the differential equation is y = f(x) = y₀ exp(−κx), corresponding to the following Taylor expansion in nhκ:
{\displaystyle f(x_{n})=f(nh)=y_{0}\exp(-\kappa nh)=y_{0}\left(1-nh\kappa +{\frac {n^{2}h^{2}\kappa ^{2}}{2}}+\cdots \right).}
Therefore the error of the discrete approximation at each discrete point is
{\displaystyle |y_{n}-f(x_{n})|={\frac {nh^{2}\kappa ^{2}}{2}}+\ldots }
For any specific x = p, given a sequence of forward Euler approximations ((y_n)_k), each using grid spacings h_k that divide p so that n_{p,k} = p/h_k, one has
{\displaystyle \lim _{h_{k}\rightarrow 0}{\frac {|y_{k}(p)-f(p)|}{h_{k}}}=\lim _{h_{k}\rightarrow 0}{\frac {|y_{k,n_{p,k}}-f(h_{k}n_{p,k})|}{h_{k}}}={\frac {h_{k}n_{p,k}\kappa ^{2}}{2}}={\frac {p\kappa ^{2}}{2}}}
for any sequence of grids with successively smaller grid spacings h_k. Thus ((y_n)_k) converges to f(x) pointwise with a convergence order q = 1 and asymptotic error constant pκ²/2 at each point p > 0. Similarly, the sequence converges uniformly with the same order and with rate Lκ²/2 on any bounded interval of p ≤ L, but it does not converge uniformly on the unbounded set of all positive real values, [0, ∞).
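The first-order convergence can also be checked numerically: halving the grid spacing should asymptotically halve the error at a fixed point. The sketch below (with illustrative values κ = 1, y₀ = 1, p = 1) integrates to x = 1 with successively halved spacings and prints the ratio of successive errors, which approaches 1/2, confirming q = 1.
<syntaxhighlight lang="python">
import math

kappa, y0, p = 1.0, 1.0, 1.0

def euler_error(h):
    """Forward Euler for y' = -kappa*y from y(0) = y0 up to x = p;
    return the absolute error against the exact solution at p."""
    y = y0
    for _ in range(round(p / h)):
        y *= (1.0 - h * kappa)
    return abs(y - y0 * math.exp(-kappa * p))

hs = [0.1 / 2 ** i for i in range(5)]
errs = [euler_error(h) for h in hs]
for e1, e2 in zip(errs, errs[1:]):
    print(e2 / e1)   # ratios approach 1/2: first-order convergence
</syntaxhighlight>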
== Comparing asymptotic rates of convergence ==
=== Definitions ===
In asymptotic analysis in general, one sequence (a_k)_{k∈ℕ} that converges to a limit L is said to asymptotically converge to L with a faster order of convergence than another sequence (b_k)_{k∈ℕ} that converges to L in a shared metric space with distance metric |·|, such as the real numbers or complex numbers with the ordinary absolute difference metrics, if
{\displaystyle \lim _{k\rightarrow \infty }{\frac {\left|a_{k}-L\right|}{|b_{k}-L|}}=0,}
the two are said to asymptotically converge to L with the same order of convergence if
{\displaystyle \lim _{k\rightarrow \infty }{\frac {\left|a_{k}-L\right|}{|b_{k}-L|}}=\mu }
for some positive finite constant μ, and the two are said to asymptotically converge to L with the same rate and order of convergence if
{\displaystyle \lim _{k\rightarrow \infty }{\frac {\left|a_{k}-L\right|}{|b_{k}-L|}}=1.}
These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis. For the first two of these there are associated expressions in asymptotic O notation: the first is that a_k − L = o(b_k − L) in small o notation and the second is that a_k − L = Θ(b_k − L) in Knuth notation. The third is also called asymptotic equivalence, expressed a_k − L ∼ b_k − L.
=== Examples ===
For any two geometric progressions (ar^k)_{k∈ℕ} and (bs^k)_{k∈ℕ} with shared limit zero, the two sequences are asymptotically equivalent if and only if both a = b and r = s. They converge with the same order if and only if r = s. (ar^k) converges with a faster order than (bs^k) if and only if r < s. The convergence of any geometric series to its limit has error terms that are equal to a geometric progression, so similar relationships hold among geometric series as well. Any sequence that is asymptotically equivalent to a convergent geometric sequence may either be said to "converge geometrically" or "converge exponentially" with respect to the absolute difference from its limit, or it may be said to "converge linearly" relative to a logarithm of the absolute difference such as the "number of decimals of precision." The latter is standard in numerical analysis.
For any two sequences of elements proportional to an inverse power of k, (ak^{−n})_{k∈ℕ} and (bk^{−m})_{k∈ℕ}, with shared limit zero, the two sequences are asymptotically equivalent if and only if both a = b and n = m. They converge with the same order if and only if n = m. (ak^{−n}) converges with a faster order than (bk^{−m}) if and only if n > m.
For any sequence (a_k)_{k∈ℕ} with a limit of zero, its convergence can be compared to the convergence of the shifted sequence (a_{k−1})_{k∈ℕ}, rescalings of the shifted sequence by a constant μ, (μa_{k−1})_{k∈ℕ}, and scaled q-powers of the shifted sequence, (μa_{k−1}^q)_{k∈ℕ}. These comparisons are the basis for the Q-convergence classifications for iterative numerical methods as described above: when a sequence of iterate errors from a numerical method (|x_k − L|)_{k∈ℕ} is asymptotically equivalent to the shifted, exponentiated, and rescaled sequence of iterate errors (μ|x_{k−1} − L|^q)_{k∈ℕ}, it is said to converge with order q and rate μ.
== Non-asymptotic rates of convergence ==
Non-asymptotic rates of convergence do not have the common, standard definitions that asymptotic rates of convergence have. Among formal techniques, Lyapunov theory is one of the most powerful and widely applied frameworks for characterizing and analyzing non-asymptotic convergence behavior.
For iterative methods, one common practical approach is to discuss these rates in terms of the number of iterates or the computer time required to reach close neighborhoods of a limit from starting points far from the limit. The non-asymptotic rate is then an inverse of that number of iterates or computer time. In practical applications, an iterative method that required fewer steps or less computer time than another to reach target accuracy will be said to have converged faster than the other, even if its asymptotic convergence is slower. These rates will generally be different for different starting points and different error thresholds for defining the neighborhoods. It is most common to discuss summaries of statistical distributions of these single point rates corresponding to distributions of possible starting points, such as the "average non-asymptotic rate," the "median non-asymptotic rate," or the "worst-case non-asymptotic rate" for some method applied to some problem with some fixed error threshold. These ensembles of starting points can be chosen according to parameters like initial distance from the eventual limit in order to define quantities like "average non-asymptotic rate of convergence from a given distance."
For discretized approximation methods, similar approaches can be used with a discretization scale parameter such as an inverse of a number of grid or mesh points or a Fourier series cutoff frequency playing the role of inverse iterate number, though it is not especially common. For any problem, there is a greatest discretization scale parameter compatible with a desired accuracy of approximation, and it may not be as small as required for the asymptotic rate and order of convergence to provide accurate estimates of the error. In practical applications, when one discretization method gives a desired accuracy with a larger discretization scale parameter than another it will often be said to converge faster than the other, even if its eventual asymptotic convergence is slower.
== References ==
The Petrov–Galerkin method is a mathematical method used to approximate solutions of partial differential equations which contain terms with odd order and where the test function and solution function belong to different function spaces. It can be viewed as an extension of the Bubnov–Galerkin method, in which the bases of the test functions and solution functions are the same. In an operator formulation of the differential equation, the Petrov–Galerkin method can be viewed as applying a projection that is not necessarily orthogonal, in contrast to the Bubnov–Galerkin method.
It is named after the Soviet scientists Georgy I. Petrov and Boris G. Galerkin.
== Introduction with an abstract problem ==
The Petrov–Galerkin method is a natural extension of the Galerkin method and can be introduced similarly, as follows.
=== A problem in weak formulation ===
Let us consider an abstract problem posed as a weak formulation on a pair of Hilbert spaces V and W, namely: find u ∈ V such that a(u, w) = f(w) for all w ∈ W.
Here, a(·, ·) is a bilinear form and f is a bounded linear functional on W.
=== Petrov-Galerkin dimension reduction ===
Choose subspaces $V_{n}\subset V$ of dimension n and $W_{m}\subset W$ of dimension m and solve the projected problem: find $v_{n}\in V_{n}$ such that $a(v_{n},w_{m})=f(w_{m})$ for all $w_{m}\in W_{m}$.
We notice that the equation has remained unchanged and only the spaces have changed. Reducing the problem to a finite-dimensional vector subspace allows us to numerically compute
$v_{n}$ as a finite linear combination of the basis vectors in $V_{n}$.
=== Petrov-Galerkin generalized orthogonality ===
The key property of the Petrov-Galerkin approach is that the error is in some sense "orthogonal" to the chosen subspaces. Since $W_{m}\subset W$, we can use $w_{m}$ as a test vector in the original equation. Subtracting the two, we get the relation for the error, $\epsilon _{n}=v-v_{n}$, which is the error between the solution of the original problem, $v$, and the solution of the Galerkin equation, $v_{n}$, as follows:
$a(\epsilon _{n},w_{m})=a(v,w_{m})-a(v_{n},w_{m})=f(w_{m})-f(w_{m})=0$ for all $w_{m}\in W_{m}$.
=== Matrix form ===
Since the aim of the approximation is to produce a linear system of equations, we build its matrix form, which can be used to compute the solution algorithmically.
Let $v^{1},v^{2},\ldots ,v^{n}$ be a basis for $V_{n}$ and $w^{1},w^{2},\ldots ,w^{m}$ be a basis for $W_{m}$. Then, it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find $v_{n}\in V_{n}$ such that
$a(v_{n},w^{j})=f(w^{j})\quad j=1,\ldots ,m.$
We expand $v_{n}$ with respect to the solution basis, $v_{n}=\sum _{i=1}^{n}x^{i}v^{i}$, and insert it into the equation above, to obtain
$a\left(\sum _{i=1}^{n}x^{i}v^{i},\,w^{j}\right)=\sum _{i=1}^{n}x^{i}a(v^{i},w^{j})=f(w^{j})\quad j=1,\ldots ,m.$
This previous equation is actually a linear system of equations $A^{T}x=b$, where
$A_{ij}\,{\stackrel {\mathrm {def} }{=}}\,a(v^{i},w^{j}),\qquad b_{j}\,{\stackrel {\mathrm {def} }{=}}\,f(w^{j}).$
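As a concrete illustration of this assembly, the sketch below sets up a small Petrov–Galerkin system for a toy problem. Everything in it is an assumption made for the example: the equation u'(x) = cos(x) with u(0) = 0 (exact solution sin(x)), the weak form a(u,w) = ∫ u'w dx with f(w) = ∫ w cos(x) dx, the monomial trial basis v^i(x) = x^i, the distinct test basis w^j(x) = x^(j-1), and the quadrature grid.

```python
import numpy as np

n = m = 5
xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]
integ = lambda y: (y[0] / 2 + y[1:-1].sum() + y[-1] / 2) * dx  # trapezoid rule

Vp = [i * xs**(i - 1) for i in range(1, n + 1)]    # derivatives (v^i)'
W = [xs**(j - 1) for j in range(1, m + 1)]         # test functions w^j

# A[i, j] = a(v^i, w^j) and b[j] = f(w^j), exactly as defined above.
A = np.array([[integ(Vp[i] * W[j]) for j in range(m)] for i in range(n)])
b = np.array([integ(np.cos(xs) * W[j]) for j in range(m)])

x = np.linalg.solve(A.T, b)                        # solve A^T x = b (n = m here)
u_n = sum(x[i] * xs**(i + 1) for i in range(n))    # assembled approximation
print("max error vs sin(x):", np.max(np.abs(u_n - np.sin(xs))))
```

Because the trial and test bases differ, the matrix entries mix the two families, which is precisely what distinguishes this assembly from the Bubnov–Galerkin case.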
==== Symmetry of the matrix ====
Due to the definition of the matrix entries, the matrix $A$ is symmetric if $V=W$, the bilinear form $a(\cdot ,\cdot )$ is symmetric, $n=m$, $V_{n}=W_{m}$, and $v^{i}=w^{i}$ for all $i=1,\ldots ,n=m$.
In contrast to the case of the Bubnov–Galerkin method, the system matrix $A$ is not even square if $n\neq m$.
== See also ==
Bubnov-Galerkin method
== Notes ==
In mathematics, the method of characteristics is a technique for solving particular partial differential equations. Typically, it applies to first-order equations, though in general characteristic curves can also be found for hyperbolic and parabolic partial differential equations. The method is to reduce a partial differential equation (PDE) to a family of ordinary differential equations (ODEs) along which the solution can be integrated from some initial data given on a suitable hypersurface.
== Characteristics of first-order partial differential equation ==
For a first-order PDE, the method of characteristics discovers so-called characteristic curves along which the PDE becomes an ODE. Once the ODE is found, it can be solved along the characteristic curves and transformed into a solution for the original PDE.
For the sake of simplicity, we confine our attention to the case of a function of two independent variables x and y for the moment. Consider a quasilinear PDE of the form
$a(x,y,z){\frac {\partial z}{\partial x}}+b(x,y,z){\frac {\partial z}{\partial y}}=c(x,y,z).\qquad (1)$
Suppose that a solution z is known, and consider the surface graph z = z(x,y) in R3. A normal vector to this surface is given by
$\left({\frac {\partial z}{\partial x}}(x,y),{\frac {\partial z}{\partial y}}(x,y),-1\right).$
As a result, equation (1) is equivalent to the geometrical statement that the vector field
$(a(x,y,z),b(x,y,z),c(x,y,z))$
is tangent to the surface z = z(x,y) at every point, for the dot product of this vector field with the above normal vector is zero. In other words, the graph of the solution must be a union of integral curves of this vector field. These integral curves are called the characteristic curves of the original partial differential equation and follow as the solutions of the characteristic equations:
${\frac {dx}{dt}}=a(x,y,z),\qquad {\frac {dy}{dt}}=b(x,y,z),\qquad {\frac {dz}{dt}}=c(x,y,z).$
A parametrization invariant form of the Lagrange–Charpit equations is:
${\frac {dx}{a(x,y,z)}}={\frac {dy}{b(x,y,z)}}={\frac {dz}{c(x,y,z)}}.$
=== Linear and quasilinear cases ===
Consider now a PDE of the form
$\sum _{i=1}^{n}a_{i}(x_{1},\dots ,x_{n},u){\frac {\partial u}{\partial x_{i}}}=c(x_{1},\dots ,x_{n},u).$
For this PDE to be linear, the coefficients ai may be functions of the spatial variables only, and independent of u. For it to be quasilinear, ai may also depend on the value of the function, but not on any derivatives. The distinction between these two cases is inessential for the discussion here.
For a linear or quasilinear PDE, the characteristic curves are given parametrically by
$(x_{1},\dots ,x_{n},u)=(X_{1}(s),\dots ,X_{n}(s),U(s))$ with $u(\mathbf {X} (s))=U(s)$ for some univariate functions $s\mapsto (X_{i}(s))_{i},U(s)$ of one real variable $s$ satisfying the following system of ordinary differential equations:
${\frac {dX_{i}}{ds}}=a_{i}(X_{1},\dots ,X_{n},U),\qquad {\frac {dU}{ds}}=c(X_{1},\dots ,X_{n},U).$
Equations (2) and (3) give the characteristics of the PDE.
=== Fully nonlinear case ===
Consider the partial differential equation
$F(x_{1},\dots ,x_{n},u,p_{1},\dots ,p_{n})=0,\qquad (4)$
where the variables pi are shorthand for the partial derivatives
$p_{i}={\frac {\partial u}{\partial x_{i}}}.$
Let (xi(s),u(s),pi(s)) be a curve in R2n+1. Suppose that u is any solution, and that
$u(s)=u(x_{1}(s),\dots ,x_{n}(s)).$
Along a solution, differentiating (4) with respect to s gives
$\sum _{i}(F_{x_{i}}+F_{u}p_{i}){\dot {x}}_{i}+\sum _{i}F_{p_{i}}{\dot {p}}_{i}=0$
${\dot {u}}-\sum _{i}p_{i}{\dot {x}}_{i}=0$
$\sum _{i}({\dot {x}}_{i}\,dp_{i}-{\dot {p}}_{i}\,dx_{i})=0.$
The second equation follows from applying the chain rule to a solution u, and the third follows by taking an exterior derivative of the relation
$du-\sum _{i}p_{i}\,dx_{i}=0$. Manipulating these equations gives
${\dot {x}}_{i}=\lambda F_{p_{i}},\quad {\dot {p}}_{i}=-\lambda (F_{x_{i}}+F_{u}p_{i}),\quad {\dot {u}}=\lambda \sum _{i}p_{i}F_{p_{i}}$
where λ is a constant. Writing these equations more symmetrically, one obtains the Lagrange–Charpit equations for the characteristic
${\frac {{\dot {x}}_{i}}{F_{p_{i}}}}=-{\frac {{\dot {p}}_{i}}{F_{x_{i}}+F_{u}p_{i}}}={\frac {\dot {u}}{\sum p_{i}F_{p_{i}}}}.$
Geometrically, the method of characteristics in the fully nonlinear case can be interpreted as requiring that the Monge cone of the differential equation should everywhere be tangent to the graph of the solution.
== Example ==
As an example, consider the advection equation (this example assumes familiarity with PDE notation, and solutions to basic ODEs).
$a{\frac {\partial u}{\partial x}}+{\frac {\partial u}{\partial t}}=0$
where $a$ is constant and $u$ is a function of $x$ and $t$. We want to transform this linear first-order PDE into an ODE along the appropriate curve; i.e., something of the form
${\frac {d}{ds}}u(x(s),t(s))=F(u,x(s),t(s)),$
where $(x(s),t(s))$ is a characteristic line. First, we find
${\frac {d}{ds}}u(x(s),t(s))={\frac {\partial u}{\partial x}}{\frac {dx}{ds}}+{\frac {\partial u}{\partial t}}{\frac {dt}{ds}}$
by the chain rule. Now, if we set ${\frac {dx}{ds}}=a$ and ${\frac {dt}{ds}}=1$, we get
$a{\frac {\partial u}{\partial x}}+{\frac {\partial u}{\partial t}},$
which is the left-hand side of the PDE we started with. Thus
${\frac {d}{ds}}u=a{\frac {\partial u}{\partial x}}+{\frac {\partial u}{\partial t}}=0.$
So, along the characteristic line $(x(s),t(s))$, the original PDE becomes the ODE $u_{s}=F(u,x(s),t(s))=0$. That is to say that along the characteristics, the solution is constant. Thus,
$u(x_{s},t_{s})=u(x_{0},0)$ where $(x_{s},t_{s})$ and $(x_{0},0)$ lie on the same characteristic. Therefore, to determine the general solution, it is enough to find the characteristics by solving the characteristic system of ODEs:
${\frac {dt}{ds}}=1$: letting $t(0)=0$, we know $t=s$;
${\frac {dx}{ds}}=a$: letting $x(0)=x_{0}$, we know $x=as+x_{0}=at+x_{0}$;
${\frac {du}{ds}}=0$: letting $u(0)=f(x_{0})$, we know $u(x(t),t)=f(x_{0})=f(x-at)$.
In this case, the characteristic lines are straight lines with slope $a$, and the value of $u$ remains constant along any characteristic line.
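Since the solution is constant along each characteristic line, evaluating it requires no PDE solver at all: trace $(x,t)$ back along its characteristic to the initial line. In the sketch below, the speed $a$ and the Gaussian initial profile are illustrative assumptions.

```python
import numpy as np

a = 1.5                                  # assumed constant advection speed
f = lambda x: np.exp(-x**2)              # assumed initial profile u(x, 0)

x = np.linspace(-5.0, 5.0, 1001)
for t in (0.0, 1.0, 2.0):
    u = f(x - a * t)                     # constant along x = a*t + x0
    print(f"t = {t}: peak of u at x = {x[np.argmax(u)]:+.2f}")
# The peak travels with speed a, exactly as the characteristic lines predict.
```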
== Characteristics of linear differential operators ==
Let X be a differentiable manifold and P a linear differential operator
$P:C^{\infty }(X)\to C^{\infty }(X)$ of order k. In a local coordinate system $x^{i}$,
$P=\sum _{|\alpha |\leq k}P^{\alpha }(x){\frac {\partial }{\partial x^{\alpha }}},$
in which α denotes a multi-index. The principal symbol of P, denoted σP, is the function on the cotangent bundle T∗X defined in these local coordinates by
$\sigma _{P}(x,\xi )=\sum _{|\alpha |=k}P^{\alpha }(x)\xi _{\alpha },$
where the ξi are the fiber coordinates on the cotangent bundle induced by the coordinate differentials dxi. Although this is defined using a particular coordinate system, the transformation law relating the ξi and the xi ensures that σP is a well-defined function on the cotangent bundle.
The function σP is homogeneous of degree k in the ξ variable. The zeros of σP, away from the zero section of T∗X, are the characteristics of P. A hypersurface of X defined by the equation F(x) = c is called a characteristic hypersurface at x if
$\sigma _{P}(x,dF(x))=0.$
Invariantly, a characteristic hypersurface is a hypersurface whose conormal bundle is in the characteristic set of P.
== Qualitative analysis of characteristics ==
Characteristics are also a powerful tool for gaining qualitative insight into a PDE.
One can use the crossings of the characteristics to find shock waves for potential flow in a compressible fluid. Intuitively, we can think of each characteristic line implying a solution to $u$ along itself. Thus, when two characteristics cross, the function becomes multi-valued, resulting in a non-physical solution. Physically, this contradiction is removed by the formation of a shock wave, a tangential discontinuity, or a weak discontinuity, and can result in non-potential flow, violating the initial assumptions.
Characteristics may fail to cover part of the domain of the PDE. This is called a rarefaction, and indicates the solution typically exists only in a weak, i.e. integral equation, sense.
The direction of the characteristic lines indicates the flow of values through the solution, as the example above demonstrates. This kind of knowledge is useful when solving PDEs numerically as it can indicate which finite difference scheme is best for the problem.
== See also ==
Method of quantum characteristics
== Notes ==
== References ==
Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume II, Wiley-Interscience
Demidov, S. S. (1982). "The study of partial differential equations of the first order in the 18th and 19th centuries". Archive for History of Exact Sciences. 26 (4). Springer Science and Business Media LLC: 325–350. doi:10.1007/bf00418753. ISSN 0003-9519.
Evans, Lawrence C. (1998), Partial Differential Equations, Providence: American Mathematical Society, ISBN 0-8218-0772-2
John, Fritz (1991). Partial Differential Equations (4th ed.). New York: Springer Science & Business Media. ISBN 978-0-387-90609-6.
Zauderer, Erich (2006). Partial Differential Equations of Applied Mathematics. Wiley. doi:10.1002/9781118033302. ISBN 978-0-471-69073-3.
Polyanin, A. D.; Zaitsev, V. F.; Moussiaux, A. (2002), Handbook of First Order Partial Differential Equations, London: Taylor & Francis, ISBN 0-415-27267-X
Pinchover, Yehuda; Rubinstein, Jacob (2005). An Introduction to Partial Differential Equations. Cambridge University Press. doi:10.1017/cbo9780511801228. ISBN 978-0-511-80122-8.
Polyanin, A. D. (2002), Handbook of Linear Partial Differential Equations for Engineers and Scientists, Boca Raton: Chapman & Hall/CRC Press, ISBN 1-58488-299-9
Sarra, Scott (2003), "The Method of Characteristics with applications to Conservation Laws", Journal of Online Mathematics and Its Applications
Streeter, VL; Wylie, EB (1998), Fluid mechanics (International 9th Revised ed.), McGraw-Hill Higher Education
Zachmanoglou, E. C.; Thoe, Dale W. (1986). Introduction to Partial Differential Equations with Applications. New York: Courier Corporation. ISBN 0-486-65251-3.
== External links ==
Prof. Scott Sarra tutorial on Method of Characteristics
Prof. Alan Hood tutorial on Method of Characteristics
A functional differential equation is a differential equation with deviating argument. That is, a functional differential equation is an equation that contains a function and some of its derivatives evaluated at different argument values.
Functional differential equations find use in mathematical models that assume a specified behavior or phenomenon depends on the present as well as the past state of a system. In other words, past events explicitly influence future results. For this reason, functional differential equations are more applicable than ordinary differential equations (ODE), in which future behavior only implicitly depends on the past.
== Definition ==
Unlike ordinary differential equations, which contain a function of one variable and its derivatives evaluated with the same input, functional differential equations contain a function and its derivatives evaluated with different input values.
An example of an ordinary differential equation would be
$f'(x)=2f(x)+1$
In comparison, a functional differential equation would be
$f'(x)=2f(x+3)-[f(x-1)]^{2}$
The simplest type of functional differential equation, called the retarded functional differential equation or retarded differential difference equation, is of the form
$x'(t)=f{\bigl (}t,x(t),x(t-r){\bigr )}$
=== Examples ===
The simplest, fundamental functional differential equation is the linear first-order delay differential equation which is given by
$x'(t)=\alpha _{1}x(t)+\alpha _{2}x(t-\tau )+f(t),\qquad t\geq 0$
where $\alpha _{1},\alpha _{2},\tau $ are constants, $f(t)$ is some continuous function, and $x$ is a scalar.
== Types of functional differential equations ==
"Functional differential equation" is the general name for a number of more specific types of differential equations that are used in numerous applications. There are delay differential equations, integro-differential equations, and so on.
=== Differential difference equation ===
Differential difference equations are functional differential equations in which the argument values are discrete. The general form for functional differential equations of finitely many discrete deviating arguments is
$x^{(n)}(t)=f{\Bigl (}t,x^{(n_{1})}{\bigl (}t-\tau _{1}(t){\bigr )},x^{(n_{2})}{\bigl (}t-\tau _{2}(t){\bigr )},\ldots ,x^{(n_{k})}{\bigl (}t-\tau _{k}(t){\bigr )}{\Bigr )}$
where $x(t)\in \mathbb {R} ^{m}$, $n_{1},n_{2},\ldots ,n_{k}\geq 0$, and $\tau _{1}(t),\tau _{2}(t),\ldots ,\tau _{k}(t)\geq 0$.
Differential difference equations are also referred to as retarded, neutral, advanced, and mixed functional differential equations. This classification depends on whether the rate of change of the current state of the system depends on past values, future values, or both.
==== Delay differential equation ====
Functional differential equations of retarded type occur when
$\max\{n_{1},n_{2},\ldots ,n_{k}\}<n$
for the equation given above. In other words, this class of functional differential equations depends on the past and present values of the function with delays.
A simple example of a retarded functional differential equation is
$x'(t)=-x(t-\tau )$
whereas a more general form for discrete deviating arguments can be written as
$x'(t)=f{\Bigl (}t,x{\bigl (}t-\tau _{1}(t){\bigr )},x{\bigl (}t-\tau _{2}(t){\bigr )},\ldots ,x{\bigl (}t-\tau _{k}(t){\bigr )}{\Bigr )}.$
==== Neutral differential equations ====
Functional differential equations of neutral type, or neutral differential equations, occur when
$\max\{n_{1},n_{2},\ldots ,n_{k}\}=n.$
Neutral differential equations depend on past and present values of the function, similarly to retarded differential equations, except that they also depend on derivatives with delays. In other words, retarded differential equations do not involve the given function's derivative with delays, while neutral differential equations do.
=== Integro-differential equation ===
Integro-differential equations of Volterra type are functional differential equations with continuous argument values. Integro-differential equations involve both the integrals and derivatives of some function with respect to its argument.
The continuous integro-differential equation for retarded functional differential equations,
$x'(t)=f{\bigl (}t,x(t-\tau _{1}(t)),x(t-\tau _{2}(t)),\ldots ,x(t-\tau _{k}(t)){\bigr )},$
can be written as
$x'(t)=f{\Biggl (}t,\int _{t-\tau (t)}^{t}K(t,\theta ,x(\theta ))\,\mathrm {d} \theta {\Biggr )},\qquad \tau (t)\geq 0$
== Application ==
Functional differential equations have been used in models that determine future behavior of a certain phenomenon determined by the present and the past. Future behavior of phenomena, described by the solutions of ODEs, assumes that behavior is independent of the past. However, there can be many situations that depend on past behavior.
FDEs are applicable for models in multiple fields, such as medicine, mechanics, biology, and economics. FDEs have been used in research for heat-transfer, signal processing, evolution of a species, traffic flow and study of epidemics.
=== Population growth with time lag ===
A logistic equation for population growth is given by
${\frac {\mathrm {d} x}{\mathrm {d} t}}=\rho \,x(t)\left(1-{\frac {x(t)}{k}}\right),$
where ρ is the reproduction rate and k is the carrying capacity.
$x(t)$ represents the population size at time t, and $\rho \left(1-{\frac {x(t)}{k}}\right)$ is the density-dependent reproduction rate.
If we were to now apply this to an earlier time $t-\tau $, we get
${\frac {\mathrm {d} x}{\mathrm {d} t}}=\rho \,x(t)\left(1-{\frac {x(t-\tau )}{k}}\right)$
=== Mixing model ===
Upon first exposure to applications of ordinary differential equations, many students come across the mixing model of a chemical solution.
Suppose there is a container holding a volume of salt water. Salt water is flowing in, and out of, the container at the same rate $r$ of liters per second. In other words, the rate of water flowing in is equal to the rate of the salt water solution flowing out. Let $V$ be the amount in liters of salt water in the container and $x(t)$ be the uniform concentration in grams per liter of salt water at time $t$. Then, we have the differential equation
$x'(t)=-{\frac {r}{V}}x(t),\qquad {\frac {r}{V}}>0$
The problem with this equation is that it makes the assumption that every drop of water that enters the container is instantaneously mixed into the solution. This assumption can be eliminated by using an FDE instead of an ODE.
Let $x(t)$ be the average concentration at time $t$, rather than the uniform concentration. Then, let's assume the solution leaving the container at time $t$ is equal to $x(t-\tau ),\ \tau >0$, the average concentration at some earlier time. Then, the equation is a delay-differential equation of the form
$x'(t)=-{\frac {r}{V}}x(t-\tau )$
=== Volterra's predator-prey model ===
The Lotka–Volterra predator-prey model was originally developed to observe the population of sharks and fish in the Adriatic Sea; however, this model has been used in many other fields for different uses, such as describing chemical reactions. Modelling predatory-prey population has always been widely researched, and as a result, there have been many different forms of the original equation.
One example, as shown by Xu, Wu (2013), of the Lotka–Volterra model with time-delay is given below:
$p'(t)=p(t){\Bigl [}r_{1}(t)-a_{11}(t)p{\bigl (}t-\tau _{11}(t){\bigr )}-a_{12}(t)P_{1}{\bigl (}t-\tau _{12}(t){\bigr )}-a_{13}(t)P_{2}{\bigl (}t-\tau _{13}(t){\bigr )}{\Bigr ]}$
$P_{1}'(t)=P_{1}(t){\Bigl [}-r_{2}(t)+a_{21}(t)p{\bigl (}t-\tau _{21}(t){\bigr )}-a_{22}(t)P_{1}{\bigl (}t-\tau _{22}(t){\bigr )}-a_{23}(t)P_{2}{\bigl (}t-\tau _{23}(t){\bigr )}{\Bigr ]}$
$P_{2}'(t)=P_{2}(t){\Bigl [}-r_{2}(t)+a_{31}(t)p{\bigl (}t-\tau _{31}(t){\bigr )}-a_{32}(t)P_{1}{\bigl (}t-\tau _{32}(t){\bigr )}-a_{33}(t)P_{2}{\bigl (}t-\tau _{33}(t){\bigr )}{\Bigr ]}$
where $p(t)$ denotes the prey population density at time t, $P_{1}(t)$ and $P_{2}(t)$ denote the density of the predator population at time $t$, $r_{i},a_{ij}\in C(\mathbb {R} ,[0,\infty ))$, and $\tau _{ij}\in C(\mathbb {R} ,\mathbb {R} )$.
=== Other models using FDEs ===
Examples of other models that have used FDEs, namely RFDEs, are given below:
Controlled motion of a rigid body
Periodic motions
Flip-flop circuit as a NDE
Model of HIV epidemic
Math models of sugar quantity in blood
Evolution equations of single species
Spread of an infection between two species
Classical electrodynamics
== See also ==
Volterra integral equation
Lotka–Volterra equations
Bifurcation theory
Lyapunov function
Volterra series
== References ==
== Further reading ==
Herdman, Terry L.; Rankin III, Samuel M.; Stech, Harlan W. (1981). Integral and Functional Differential Equations: Lecture notes. 67. United States: Marcel Dekker Inc, Pure and Applied Mathematics
Ford, Neville J.; Lumb, Patricia M. (2009). "Mixed-type functional differential equations: A numerical approach". Journal of Computational and Applied Mathematics. 229 (2): 471–479
Lemon, Greg; King, John R. (2012). "A functional differential equation model for biological cell sorting due to differential adhesion". Mathematical Models and Methods in Applied Sciences. 12 (1): 93–126
Da Silva, Carmen; Escalante, René (2011). "Segmented Tau approximation for forward-backward functional differential equation". Computers and Mathematics with Applications. 62 (12): 4582–4591
Pravica, D. W.; Randriampiry, N.; Spurr, M. J. (2009). "Applications of an advanced differential equation in the study of wavelets". Applied and Computational Harmonic Analysis. 27 (1): 2(10)
Breda, Dimitri; Maset, Stefano; Vermiglio, Rossana (2015). Stability of Linear Delay Differential Equations: A Numerical Approach with MATLAB. Springer. ISBN 978-1-4939-2106-5
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.
Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:
Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
== Definition ==
In mathematics, a linear map (or linear function)
$f(x)$ is one which satisfies both of the following properties:
Additivity or superposition principle: $f(x+y)=f(x)+f(y);$
Homogeneity: $f(\alpha x)=\alpha f(x).$
Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle
$f(\alpha x+\beta y)=\alpha f(x)+\beta f(y)$
An equation written as $f(x)=C$ is called linear if $f(x)$ is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if $C=0$ and $f(x)$ is a homogeneous function.
The definition $f(x)=C$ is very general in that $x$ can be any sensible mathematical object (number, vector, function, etc.), and the function $f(x)$ can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If $f(x)$ contains differentiation with respect to $x$, the result will be a differential equation.
== Nonlinear systems of equations ==
A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation.
For a single equation of the form
$f(x)=0,$
many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a polynomial equation such as
$x^{2}+x-1=0.$
The general root-finding algorithms apply to polynomial roots, but, generally, they do not find all the roots, and when they fail to find a root, this does not imply that no root exists. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation.
Solving systems of polynomial equations, that is, finding the common zeros of a set of several polynomials in several variables, is a difficult problem for which elaborate algorithms have been designed, such as Gröbner base algorithms.
For the general case of system of equations formed by equating to zero several differentiable functions, the main method is Newton's method and its variants. Generally they may provide a solution, but do not provide any information on the number of solutions.
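A minimal sketch of how Newton's method proceeds for such a system, applied to an illustrative pair of equations (the system, Jacobian, and starting guess are all assumptions chosen for the example):

```python
import numpy as np

def F(v):                       # the system: unit circle intersected with y = x
    x, y = v
    return np.array([x**2 + y**2 - 1.0, x - y])

def J(v):                       # its Jacobian matrix
    x, y = v
    return np.array([[2 * x, 2 * y], [1.0, -1.0]])

v = np.array([1.0, 0.5])        # starting guess; Newton converges only locally
for _ in range(20):
    step = np.linalg.solve(J(v), F(v))
    v = v - step
    if np.linalg.norm(step) < 1e-12:
        break

print(v)                        # ≈ (0.7071, 0.7071), i.e. (1/√2, 1/√2)
```

Note that the loop finds one solution determined by the starting guess; it says nothing about the other intersection point at (-1/√2, -1/√2), illustrating the remark above.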
== Nonlinear recurrence relations ==
A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures. These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
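The logistic map makes this concrete; in the sketch below the parameter choices are illustrative, with r = 2.5 settling on a fixed point and r = 3.9 lying in the map's chaotic regime.

```python
def logistic_orbit(r, x0=0.2, n=50):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and return the orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

print(logistic_orbit(2.5)[-3:])   # settles near the fixed point 1 - 1/r = 0.6
print(logistic_orbit(3.9)[-3:])   # aperiodic and sensitive to the choice of x0
```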
== Nonlinear differential equations ==
A system of differential equations is said to be nonlinear if it is not a system of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.
One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations; however, the lack of a superposition principle prevents the construction of new solutions.
=== Ordinary differential equations ===
First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation
${\frac {du}{dx}}=-u^{2}$
has $u={\frac {1}{x+C}}$ as a general solution (and also the special solution $u=0$, corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as
${\frac {du}{dx}}+u^{2}=0$
and the left-hand side of the equation is not a linear function of $u$ and its derivatives. Note that if the $u^{2}$ term were replaced with $u$, the problem would be linear (the exponential decay problem).
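A quick numerical cross-check of this closed form (illustrative only: the initial condition u(0) = 1, which fixes C = 1, and the tolerances are assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda x, u: -u**2, (0.0, 5.0), [1.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

xs = np.linspace(0.0, 5.0, 11)
err = np.max(np.abs(sol.sol(xs)[0] - 1.0 / (xs + 1.0)))
print("max deviation from 1/(x + 1):", err)    # tiny, confirming the formula
```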
Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.
Common methods for the qualitative analysis of nonlinear ordinary differential equations include:
Examination of any conserved quantities, especially in Hamiltonian systems
Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
Linearization via Taylor expansion
Change of variables into something easier to study
Bifurcation theory
Perturbation methods (can be applied to algebraic equations too)
Existence of solutions of finite duration, which can occur under specific conditions for some non-linear ordinary differential equations.
=== Partial differential equations ===
The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.
Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier-Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.
Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.
=== Pendula ===
A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation
${\frac {d^{2}\theta }{dt^{2}}}+\sin(\theta )=0$
where gravity points "downwards" and $\theta $ is the angle the pendulum forms with its rest position. One approach to "solving" this equation is to use $d\theta /dt$ as an integrating factor, which would eventually yield
$\int {\frac {d\theta }{\sqrt {C_{0}+2\cos(\theta )}}}=t+C_{1}$
which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless $C_{0}=2$).
Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at
$\theta =0$, called the small angle approximation, is
${\frac {d^{2}\theta }{dt^{2}}}+\theta =0$
since $\sin(\theta )\approx \theta $ for $\theta \approx 0$. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at $\theta =\pi $, corresponding to the pendulum being straight up:
${\frac {d^{2}\theta }{dt^{2}}}+\pi -\theta =0$
since $\sin(\theta )\approx \pi -\theta $ for $\theta \approx \pi $. The solution to this problem involves hyperbolic sinusoids; note that, unlike the small angle approximation, this approximation is unstable, meaning that $|\theta |$ will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.
One more interesting linearization is possible around $\theta =\pi /2$, around which $\sin(\theta )\approx 1$:
${\frac {d^{2}\theta }{dt^{2}}}+1=0.$
This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations. Other techniques may be used to find (exact) phase portraits and approximate periods.
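The quality of the small-angle linearization can also be checked numerically by integrating both equations from the same initial data; in this sketch the initial angle, time window, and tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

theta0 = 0.5                                   # initial angle in radians, at rest
t_eval = np.linspace(0.0, 10.0, 201)

full = solve_ivp(lambda t, y: [y[1], -np.sin(y[0])],   # θ'' = -sin(θ)
                 (0.0, 10.0), [theta0, 0.0], t_eval=t_eval, rtol=1e-9)
linear = theta0 * np.cos(t_eval)               # exact solution of θ'' = -θ

print("max deviation:", np.max(np.abs(full.y[0] - linear)))
# Small for this theta0; the deviation grows as theta0 approaches π.
```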
== Types of nonlinear dynamic behaviors ==
Amplitude death – any oscillations present in the system cease due to some kind of interaction with other system or feedback by the same system
Chaos – values of a system cannot be predicted indefinitely far into the future, and fluctuations are aperiodic
Multistability – the presence of two or more stable states
Solitons – self-reinforcing solitary waves
Limit cycles – asymptotic periodic orbits to which destabilized fixed points are attracted.
Self-oscillations – feedback oscillations taking place in open dissipative physical systems.
== Examples of nonlinear equations ==
== See also ==
== References ==
== Further reading ==
== External links ==
Command and Control Research Program (CCRP)
New England Complex Systems Institute: Concepts in Complex Systems
Nonlinear Dynamics I: Chaos at MIT's OpenCourseWare
Nonlinear Model Library – (in MATLAB) a Database of Physical Systems
The Center for Nonlinear Studies at Los Alamos National Laboratory
In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with any other DE, its unknown(s) consists of one (or more) function(s) and involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations (PDEs) which may be with respect to more than one independent variable, and, less commonly, in contrast with stochastic differential equations (SDEs) where the progression is random.
== Differential equations ==
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form
$a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}+b(x)=0,$
where $a_{0}(x),\ldots ,a_{n}(x)$ and $b(x)$ are arbitrary differentiable functions that do not need to be linear, and $y',\ldots ,y^{(n)}$ are the successive derivatives of the unknown function $y$ of the variable $x$.
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations for an easier solution. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example Riccati equation).
Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
== Background ==
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.
Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates), biology (infectious diseases, genetic variation), ecology and population modeling (population competition), economics (stock trends, interest rates and the market equilibrium price changes).
Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.
A simple example is Newton's second law of motion: the relationship between the displacement $x$ and the time $t$ of an object under the force $F$ is given by the differential equation
$m{\frac {\mathrm {d} ^{2}x(t)}{\mathrm {d} t^{2}}}=F(x(t))$
which constrains the motion of a particle of constant mass $m$. In general, $F$ is a function of the position $x(t)$ of the particle at time $t$. The unknown function $x(t)$ appears on both sides of the differential equation, and is indicated in the notation $F(x(t))$.
== Definitions ==
In what follows,
$y$ is a dependent variable representing an unknown function $y=f(x)$ of the independent variable $x$. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, Leibniz's notation ${\frac {dy}{dx}},{\frac {d^{2}y}{dx^{2}}},\ldots ,{\frac {d^{n}y}{dx^{n}}}$
is more useful for differentiation and integration, whereas Lagrange's notation $y',y'',\ldots ,y^{(n)}$ is more useful for representing higher-order derivatives compactly, and Newton's notation $({\dot {y}},{\ddot {y}},{\overset {...}{y}})$ is often used in physics for representing derivatives of low order with respect to time.
=== General definition ===
Given $F$, a function of $x$, $y$, and derivatives of $y$, an equation of the form
$F\left(x,y,y',\ldots ,y^{(n-1)}\right)=y^{(n)}$
is called an explicit ordinary differential equation of order $n$.
More generally, an implicit ordinary differential equation of order $n$ takes the form:
$F\left(x,y,y',y'',\ldots ,y^{(n)}\right)=0$
There are further classifications:
Autonomous: A differential equation is autonomous if it does not depend on the variable x.
Linear: A differential equation is linear if $F$ can be written as a linear combination of the derivatives of $y$; that is, it can be rewritten as
$y^{(n)}=\sum _{i=0}^{n-1}a_{i}(x)y^{(i)}+r(x)$
where $a_{i}(x)$ and $r(x)$ are continuous functions of $x$. The function $r(x)$ is called the source term, leading to further classification.
Homogeneous: A linear differential equation is homogeneous if $r(x)=0$. In this case, there is always the "trivial solution" $y=0$.
Nonhomogeneous (or inhomogeneous): A linear differential equation is nonhomogeneous if $r(x)\neq 0$.
Non-linear: A differential equation that is not linear.
=== System of ODEs ===
A number of coupled differential equations form a system of equations. If
$\mathbf {y} $ is a vector whose elements are functions, $\mathbf {y} (x)=[y_{1}(x),y_{2}(x),\ldots ,y_{m}(x)]$, and $\mathbf {F} $ is a vector-valued function of $\mathbf {y} $ and its derivatives, then
$\mathbf {y} ^{(n)}=\mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)$
is an explicit system of ordinary differential equations of order $n$ and dimension $m$. In column vector form:
${\begin{pmatrix}y_{1}^{(n)}\\y_{2}^{(n)}\\\vdots \\y_{m}^{(n)}\end{pmatrix}}={\begin{pmatrix}f_{1}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\\f_{2}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\\\vdots \\f_{m}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\end{pmatrix}}$
These are not necessarily linear. The implicit analogue is:
$\mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)}\right)={\boldsymbol {0}}$
where ${\boldsymbol {0}}=(0,0,\ldots ,0)$ is the zero vector. In matrix form
${\begin{pmatrix}f_{1}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\\f_{2}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\\\vdots \\f_{m}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\end{pmatrix}}={\begin{pmatrix}0\\0\\\vdots \\0\end{pmatrix}}$
For a system of the form
$\mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} '\right)={\boldsymbol {0}}$, some sources also require that the Jacobian matrix ${\frac {\partial \mathbf {F} (x,\mathbf {u} ,\mathbf {v} )}{\partial \mathbf {v} }}$
be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems. Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme, although note that any ODE of order greater than one can be (and usually is) rewritten as system of ODEs of first order, which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
=== Solutions ===
Given a differential equation
$F\left(x,y,y',\ldots ,y^{(n)}\right)=0,$
a function $u:I\subset \mathbb {R} \to \mathbb {R} $, where $I$ is an interval, is called a solution or integral curve for $F$, if $u$ is $n$-times differentiable on $I$, and
$F(x,u,u',\ldots ,u^{(n)})=0\quad x\in I.$
Given two solutions $u:J\subset \mathbb {R} \to \mathbb {R} $ and $v:I\subset \mathbb {R} \to \mathbb {R} $, $u$ is called an extension of $v$ if $I\subset J$ and
$u(x)=v(x)\quad x\in I.$
A solution that has no extension is called a maximal solution. A solution defined on all of $\mathbb {R} $ is called a global solution.
A general solution of an $n$th-order equation is a solution containing $n$ arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions. A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
=== Solutions of finite duration ===
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that, from its own dynamics, the system will reach the value zero at an ending time and stay at zero forever after. These finite-duration solutions cannot be analytical functions on the whole real line, and because they will be non-Lipschitz functions at their ending time, they are not included in the uniqueness theorem of solutions of Lipschitz differential equations.
As an example, the equation
$y'=-\operatorname {sgn}(y){\sqrt {|y|}},\qquad y(0)=1$
admits the finite-duration solution
$y(x)={\frac {1}{4}}\left(1-{\frac {x}{2}}+\left|1-{\frac {x}{2}}\right|\right)^{2}$
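A numeric spot-check of this solution (an illustration; the grid is an arbitrary choice): the residual of the ODE should vanish, and y should stay at zero after the ending time x = 2.

```python
import numpy as np

y = lambda x: 0.25 * (1 - x / 2 + np.abs(1 - x / 2))**2

x = np.linspace(0.0, 4.0, 4001)
residual = np.gradient(y(x), x) + np.sign(y(x)) * np.sqrt(np.abs(y(x)))
print("max ODE residual:", np.max(np.abs(residual)))   # ≈ 0 up to grid error
print("y(3) =", y(3.0))    # exactly 0: the solution has ended by x = 2
```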
== Theories ==
=== Singular solutions ===
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
=== Reduction to quadratures ===
The primitive attempt in dealing with differential equations had in view a reduction to quadratures, that is, expressing the solutions in terms of known functions and their integrals. While this is possible for linear equations with constant coefficients, it appeared in the 19th century that it is generally impossible in other cases. Hence, analysts began the study, in their own right, of functions that are solutions of differential equations, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by quadratures, but whether a given differential equation suffices for the definition of a function, and, if so, what are the characteristic properties of such functions.
=== Fuchsian theory ===
Two memoirs by Fuchs inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces
$f=0$ under rational one-to-one transformations.
=== Lie's theory ===
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.
Lie's group theory of differential equations has been shown to accomplish the following: (1) it unifies the many ad hoc methods known for solving differential equations, and (2) it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations, to generate integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
=== Sturm–Liouville theory ===
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering. SLPs are also useful in the analysis of certain partial differential equations.
== Existence and uniqueness of solutions ==
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are the Peano existence theorem, which requires only continuity of the right-hand side, and the Picard–Lindelöf theorem, which additionally requires it to be Lipschitz continuous in the dependent variable.
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.
=== Local existence and uniqueness theorem simplified ===
The theorem can be stated simply as follows. For the equation and initial value problem:
{\displaystyle y'=F(x,y)\,,\quad y_{0}=y(x_{0})}
if F and ∂F/∂y are continuous in a closed rectangle
{\displaystyle R=[x_{0}-a,x_{0}+a]\times [y_{0}-b,y_{0}+b]}
in the x–y plane, where a and b are real (symbolically: a, b ∈ ℝ), × denotes the Cartesian product, and square brackets denote closed intervals, then there is an interval
{\displaystyle I=[x_{0}-h,x_{0}+h]\subset [x_{0}-a,x_{0}+a]}
for some h ∈ ℝ where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on F to be linear, this applies to non-linear equations that take the form F(x, y), and it can also be applied to systems of equations.
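One standard proof of the Picard–Lindelöf theorem constructs the solution as the limit of the Picard iterates y_{k+1}(x) = y₀ + ∫ F(t, y_k(t)) dt. The following is a minimal SymPy sketch of this construction (the helper name and the example y′ = y, y(0) = 1 are my own choices, not from the text); the iterates reproduce the partial sums of eˣ.

```python
import sympy as sp

x, t = sp.symbols("x t")

def picard_iterates(F, x0, y0, n):
    """Return the first n Picard iterates for y' = F(x, y), y(x0) = y0."""
    y = sp.Integer(y0)  # iterate 0: the constant initial value
    out = [y]
    for _ in range(n):
        # y_{k+1}(x) = y0 + integral from x0 to x of F(t, y_k(t)) dt
        y = y0 + sp.integrate(F(t, y.subs(x, t)), (t, x0, x))
        out.append(sp.expand(y))
    return out

# Example: y' = y, y(0) = 1; iterates are 1, 1 + x, 1 + x + x**2/2, ...
for k, yk in enumerate(picard_iterates(lambda s, y: y, 0, 1, 4)):
    print(k, yk)
```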
=== Global uniqueness and maximum domain of solution ===
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:
For each initial condition (x₀, y₀) there exists a unique maximum (possibly infinite) open interval
{\displaystyle I_{\max }=(x_{-},x_{+}),\quad x_{\pm }\in \mathbb {R} \cup \{\pm \infty \},\quad x_{0}\in I_{\max }}
such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain I_max.
In the case that x± ≠ ±∞, there are exactly two possibilities:
explosion in finite time: {\displaystyle \limsup _{x\to x_{\pm }}\|y(x)\|\to \infty }
leaves domain of definition: {\displaystyle \lim _{x\to x_{\pm }}y(x)\in \partial {\bar {\Omega }}}
where Ω is the open set in which F is defined, and ∂Ω̄ is its boundary.
Note that the maximum domain of the solution:
is always an interval (to have uniqueness)
may be smaller than ℝ
may depend on the specific choice of (x₀, y₀).
Example: {\displaystyle y'=y^{2}}
This means that F(x, y) = y², which is C¹ and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all of ℝ, since the solution is
{\displaystyle y(x)={\frac {y_{0}}{(x_{0}-x)y_{0}+1}}}
which has maximum domain:
{\displaystyle {\begin{cases}\mathbb {R} &y_{0}=0\\[4pt]\left(-\infty ,x_{0}+{\frac {1}{y_{0}}}\right)&y_{0}>0\\[4pt]\left(x_{0}+{\frac {1}{y_{0}}},+\infty \right)&y_{0}<0\end{cases}}}
This shows clearly that the maximum interval may depend on the initial conditions. The domain of y could be taken as being
{\displaystyle \mathbb {R} \setminus \{x_{0}+1/y_{0}\},}
but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not ℝ because
{\displaystyle \lim _{x\to x_{\pm }}\|y(x)\|\to \infty ,}
which is one of the two possible cases according to the above theorem.
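This blow-up is easy to check numerically. The sketch below is a hedged illustration using scipy.integrate.solve_ivp (the event construction and tolerances are my own choices): with x₀ = 0 and y₀ = 1 the exact solution y(x) = 1/(1 − x) must leave any bounded region just before x = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = y**2, y(0) = 1 has the exact solution y(x) = 1/(1 - x),
# which blows up as x -> 1-: the maximal domain is (-inf, 1).
blowup = lambda x, y: y[0] - 1e6   # stop once y is huge
blowup.terminal = True

sol = solve_ivp(lambda x, y: y**2, (0.0, 2.0), [1.0],
                events=blowup, rtol=1e-10, atol=1e-12)
print("integration stopped near x =", sol.t[-1])  # approximately 1.0
```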
== Reduction of order ==
Differential equations are usually easier to solve if the order of the equation can be reduced.
=== Reduction to a first-order system ===
Any explicit differential equation of order n,
{\displaystyle F\left(x,y,y',y'',\ \ldots ,\ y^{(n-1)}\right)=y^{(n)}}
can be written as a system of n first-order differential equations by defining a new family of unknown functions
{\displaystyle y_{i}=y^{(i-1)}}
for i = 1, 2, …, n. The n-dimensional system of first-order coupled differential equations is then
{\displaystyle {\begin{array}{rcl}y_{1}'&=&y_{2}\\y_{2}'&=&y_{3}\\&\vdots &\\y_{n-1}'&=&y_{n}\\y_{n}'&=&F(x,y_{1},\ldots ,y_{n}),\end{array}}}
or, more compactly in vector notation:
{\displaystyle \mathbf {y} '=\mathbf {F} (x,\mathbf {y} )}
where
{\displaystyle \mathbf {y} =(y_{1},\ldots ,y_{n}),\quad \mathbf {F} (x,y_{1},\ldots ,y_{n})=(y_{2},\ldots ,y_{n},F(x,y_{1},\ldots ,y_{n})).}
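This reduction is exactly what most numerical ODE libraries expect. As a hedged illustration (the variable names and the example equation are my own), the second-order equation y″ = −y is rewritten as y₁′ = y₂, y₂′ = −y₁ and handed to a standard first-order solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Second-order ODE y'' = -y, rewritten with y1 = y, y2 = y':
def rhs(x, y):
    y1, y2 = y
    return [y2, -y1]          # (y1', y2') = (y2, F(x, y1, y2))

sol = solve_ivp(rhs, (0.0, np.pi), [0.0, 1.0], dense_output=True)
# Exact solution is y(x) = sin(x); check the endpoint:
print(sol.sol(np.pi)[0])      # close to sin(pi) = 0
```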
== Summary of exact solutions ==
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, P(x), Q(x), P(y), Q(y), and M(x, y), N(x, y) are any integrable functions of x, y; b and c are real given constants; and C₁, C₂, … are arbitrary constants (complex in general). The differential equations are in their equivalent and alternative forms that lead to the solution through integration.
In the integral solutions, λ and ε are dummy variables of integration (the continuum analogues of indices in summation), and the notation
{\displaystyle \int ^{x}F(\lambda )\,d\lambda }
just means to integrate F(λ) with respect to λ, then after the integration substitute λ = x, without adding constants (explicitly stated).
=== Separable equations ===
=== General first-order equations ===
=== General second-order equations ===
=== Linear to the nth order equations ===
== The guessing method ==
When all other methods for solving an ODE fail, or in the cases where we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and verifying that it is correct. To use this method, we simply guess a solution to the differential equation, and then plug the solution into the differential equation to check whether it satisfies the equation. If it does, then we have a particular solution to the DE; otherwise, we start over again and try another guess. For instance, we could guess that the solution to a DE has the form:
{\displaystyle y=Ae^{\alpha t}}
since this is a very common solution form; for imaginary α it physically behaves in a sinusoidal way.
In the case of a first order ODE that is non-homogeneous we need to first find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is:
{\displaystyle {\text{general solution}}={\text{general solution of the associated homogeneous equation}}+{\text{particular solution}}}
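The guess-and-verify loop is easy to mechanize with a computer algebra system. Below is a minimal SymPy sketch (the example equation y′ + y = 3 and the helper name are assumptions of mine, not from the text): substitute a candidate into the ODE and test whether the residual simplifies to zero.

```python
import sympy as sp

t = sp.symbols("t")
y = sp.Function("y")

# Non-homogeneous ODE: y' + y = 3.
ode = sp.Eq(y(t).diff(t) + y(t), 3)

def is_solution(candidate):
    residual = ode.lhs.subs(y(t), candidate).doit() - ode.rhs
    return sp.simplify(residual) == 0

A = sp.symbols("A")
print(is_solution(3))                   # True: a particular solution
print(is_solution(A * sp.exp(-t)))      # False: solves only the homogeneous part
print(is_solution(A * sp.exp(-t) + 3))  # True: the general solution
```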
== Software for ODE solving ==
Maxima, an open-source computer algebra system.
COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
MATLAB, a technical computing application (MATrix LABoratory)
GNU Octave, a high-level language, primarily intended for numerical computations.
Scilab, an open source application for numerical computation.
Maple, a proprietary application for symbolic calculations.
Mathematica, a proprietary application primarily intended for symbolic calculations.
SymPy, a Python package that can solve ODEs symbolically
Julia (programming language), a high-level language primarily intended for numerical computations.
SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
SciPy, a Python package that includes an ODE integration module.
Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
GNU R, an open source computational environment primarily intended for statistics, which includes packages for ODE solving.
== See also ==
Boundary value problem
Examples of differential equations
Laplace transform applied to differential equations
List of dynamical systems and differential equations topics
Matrix differential equation
Method of undetermined coefficients
Recurrence relation
== Notes ==
== References ==
Halliday, David; Resnick, Robert (1977), Physics (3rd ed.), New York: Wiley, ISBN 0-471-71716-9
Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9
Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8.
Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1-58488-297-2
Simmons, George F. (1972), Differential Equations with Applications and Historical Notes, New York: McGraw-Hill, LCCN 75173716
Tipler, Paul A. (1991), Physics for Scientists and Engineers: Extended version (3rd ed.), New York: Worth Publishers, ISBN 0-87901-432-6
Boscain, Ugo; Chitour, Yacine (2011), Introduction à l'automatique (PDF) (in French)
Dresner, Lawrence (1999), Applications of Lie's Theory of Ordinary and Partial Differential Equations, Bristol and Philadelphia: Institute of Physics Publishing, ISBN 978-0750305303
Ascher, Uri; Petzold, Linda (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, ISBN 978-1-61197-139-2
== Bibliography ==
Coddington, Earl A.; Levinson, Norman (1955). Theory of Ordinary Differential Equations. New York: McGraw-Hill.
Hartman, Philip (2002) [1964], Ordinary differential equations, Classics in Applied Mathematics, vol. 38, Philadelphia: Society for Industrial and Applied Mathematics, doi:10.1137/1.9780898719222, ISBN 978-0-89871-510-1, MR 1929104
W. Johnson, A Treatise on Ordinary and Partial Differential Equations, John Wiley and Sons, 1913, in University of Michigan Historical Math Collection
Ince, Edward L. (1944) [1926], Ordinary Differential Equations, Dover Publications, New York, ISBN 978-0-486-60349-0, MR 0010757
Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications, ISBN 0-486-49510-8
Ibragimov, Nail H. (1993). CRC Handbook of Lie Group Analysis of Differential Equations Vol. 1-3. Providence: CRC-Press. ISBN 0-8493-4488-3..
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002. ISBN 0-415-27267-X
D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
== External links ==
"Differential equation, ordinary", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
Online Notes / Differential Equations by Paul Dawkins, Lamar University.
Differential Equations, S.O.S. Mathematics.
A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.
Modeling with ODEs using Scilab A tutorial on how to model a physical system described by ODE using Scilab standard programming language by Openeering team.
Solving an ordinary differential equation in Wolfram|Alpha | Wikipedia/Ordinary_differential_equations |
Stochastic partial differential equations (SPDEs) generalize partial differential equations via random force terms and coefficients, in the same way ordinary stochastic differential equations generalize ordinary differential equations.
They have relevance to quantum field theory, statistical mechanics, and spatial modeling.
== Examples ==
One of the most studied SPDEs is the stochastic heat equation, which may formally be written as
{\displaystyle \partial _{t}u=\Delta u+\xi \;,}
where Δ is the Laplacian and ξ denotes space-time white noise. Other examples also include stochastic versions of famous linear equations, such as the wave equation and the Schrödinger equation.
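For intuition, the formal equation can be discretized. The sketch below is a finite-difference Euler–Maruyama scheme with my own choices of grid, seed, and periodic boundary conditions (none of which come from the text); space-time white noise is approximated by independent Gaussians scaled by 1/√(Δt·Δx):

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 100, 20000
dx = 1.0 / nx
dt = 0.25 * dx**2            # well below the dx**2 / 2 stability limit

u = np.zeros(nx)             # initial condition u = 0 on [0, 1]
for _ in range(nt):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic Laplacian
    xi = rng.standard_normal(nx) / np.sqrt(dt * dx)         # white-noise sample
    u = u + dt * (lap + xi)

print("empirical sup |u| =", np.abs(u).max())  # rough, irregular profile
```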
== Discussion ==
One difficulty is their lack of regularity. In one space dimension, solutions to the stochastic heat equation are only almost 1/2-Hölder continuous in space and 1/4-Hölder continuous in time. For dimensions two and higher, solutions are not even function-valued, but can be made sense of as random distributions.
For linear equations, one can usually find a mild solution via semigroup techniques.
However, problems start to appear when considering non-linear equations. For example
{\displaystyle \partial _{t}u=\Delta u+P(u)+\xi ,}
where P is a polynomial. In this case it is not even clear how one should make sense of the equation. Such an equation will also not have a function-valued solution in dimension larger than one, and hence no pointwise meaning. It is well known that the space of distributions has no product structure. This is the core problem of such a theory. This leads to the need of some form of renormalization.
An early attempt to circumvent such problems for some specific equations was the so-called da Prato–Debussche trick, which involved studying such non-linear equations as perturbations of linear ones. However, this can only be used in very restrictive settings, as it depends on both the non-linear factor and on the regularity of the driving noise term. In recent years, the field has drastically expanded, and now there exists a large machinery to guarantee local existence for a variety of sub-critical SPDEs.
== See also ==
== References ==
== Further reading ==
Bain, A.; Crisan, D. (2009). Fundamentals of Stochastic Filtering. Stochastic Modelling and Applied Probability. Vol. 60. New York: Springer. ISBN 978-0387768953.
Holden, H.; Øksendal, B.; Ubøe, J.; Zhang, T. (2010). Stochastic Partial Differential Equations: A Modeling, White Noise Functional Approach. Universitext (2nd ed.). New York: Springer. doi:10.1007/978-0-387-89488-1. ISBN 978-0-387-89487-4.
Lindgren, F.; Rue, H.; Lindström, J. (2011). "An Explicit Link between Gaussian Fields and Gaussian Markov Random Fields: The Stochastic Partial Differential Equation Approach". Journal of the Royal Statistical Society Series B: Statistical Methodology. 73 (4): 423–498. doi:10.1111/j.1467-9868.2011.00777.x. hdl:20.500.11820/1084d335-e5b4-4867-9245-ec9c4f6f4645. ISSN 1369-7412.
Xiu, D. (2010). Numerical Methods for Stochastic Computations: A Spectral Method Approach. Princeton University Press. ISBN 978-0-691-14212-8.
== External links ==
"A Minicourse on Stochastic Partial Differential Equations" (PDF). 2006.
Hairer, Martin (2009). "An Introduction to Stochastic PDEs". arXiv:0907.4178 [math.PR]. | Wikipedia/Stochastic_partial_differential_equation |
In mathematics, time-scale calculus is a unification of the theory of difference equations with that of differential equations, unifying integral and differential calculus with the calculus of finite differences, offering a formalism for studying hybrid systems. It has applications in any field that requires simultaneous modelling of discrete and continuous data. It gives a new definition of a derivative such that if one differentiates a function defined on the real numbers then the definition is equivalent to standard differentiation, but if one uses a function defined on the integers then it is equivalent to the forward difference operator.
== History ==
Time-scale calculus was introduced in 1988 by the German mathematician Stefan Hilger. However, similar ideas have been used before and go back at least to the introduction of the Riemann–Stieltjes integral, which unifies sums and integrals.
== Dynamic equations ==
Many results concerning differential equations carry over quite easily to corresponding results for difference equations, while other results seem to be completely different from their continuous counterparts. The study of dynamic equations on time scales reveals such discrepancies, and helps avoid proving results twice—once for differential equations and once again for difference equations. The general idea is to prove a result for a dynamic equation where the domain of the unknown function is a so-called time scale (also known as a time-set), which may be an arbitrary closed subset of the reals. In this way, results apply not only to the set of real numbers or set of integers but to more general time scales such as a Cantor set.
The three most popular examples of calculus on time scales are differential calculus, difference calculus, and quantum calculus. Dynamic equations on a time scale have a potential for applications such as in population dynamics. For example, they can model insect populations that evolve continuously while in season, die out in winter while their eggs are incubating or dormant, and then hatch in a new season, giving rise to a non-overlapping population.
== Formal definitions ==
A time scale (or measure chain) is a closed subset of the real line ℝ. The common notation for a general time scale is 𝕋.
The two most commonly encountered examples of time scales are the real numbers ℝ and the discrete time scale hℤ.
A single point in a time scale is defined as:
{\displaystyle t:t\in \mathbb {T} }
=== Operations on time scales ===
The forward jump and backward jump operators represent the closest point in the time scale on the right and left of a given point t, respectively. Formally:
{\displaystyle \sigma (t)=\inf\{s\in \mathbb {T} :s>t\}} (forward shift/jump operator)
{\displaystyle \rho (t)=\sup\{s\in \mathbb {T} :s<t\}} (backward shift/jump operator)
The graininess μ is the distance from a point to the closest point on the right and is given by:
{\displaystyle \mu (t)=\sigma (t)-t.}
For a right-dense t, σ(t) = t and μ(t) = 0.
For a left-dense t, ρ(t) = t.
=== Classification of points ===
For any t ∈ 𝕋, t is:
left dense if ρ(t) = t
right dense if σ(t) = t
left scattered if ρ(t) < t
right scattered if σ(t) > t
dense if both left dense and right dense
isolated if both left scattered and right scattered
As illustrated by the figure at right:
Point t₁ is dense
Point t₂ is left dense and right scattered
Point t₃ is isolated
Point t₄ is left scattered and right dense
=== Continuity ===
Continuity of a time scale is redefined as equivalent to density. A time scale is said to be right-continuous at point t if it is right dense at point t. Similarly, a time scale is said to be left-continuous at point t if it is left dense at point t.
== Derivative ==
Take a function:
{\displaystyle f:\mathbb {T} \to \mathbb {R} ,}
(where ℝ could be any Banach space, but is set to the real line for simplicity).
Definition: The delta derivative (also Hilger derivative) f^Δ(t) exists if and only if:
For every ε > 0 there exists a neighborhood U of t such that:
{\displaystyle \left|f(\sigma (t))-f(s)-f^{\Delta }(t)(\sigma (t)-s)\right|\leq \varepsilon \left|\sigma (t)-s\right|}
for all s in U.
Take 𝕋 = ℝ. Then σ(t) = t, μ(t) = 0, and f^Δ = f′ is the derivative used in standard calculus. If 𝕋 = ℤ (the integers), then σ(t) = t + 1, μ(t) = 1, and f^Δ = Δf is the forward difference operator used in difference equations.
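For a finite, explicitly listed time scale these operators are directly computable. The sketch below (function names and the sample time scale are my own) implements σ, μ, and the delta derivative f^Δ(t) = (f(σ(t)) − f(t))/μ(t) at right-scattered points, which reduces to the forward difference on ℤ:

```python
import numpy as np

T = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # a finite sample time scale

def sigma(t):
    """Forward jump: smallest point of T strictly to the right of t."""
    right = T[T > t]
    return right.min() if right.size else t  # the maximum of T jumps to itself

def mu(t):
    return sigma(t) - t                      # graininess

def delta_derivative(f, t):
    """f^Delta(t) at a right-scattered point t (mu(t) > 0)."""
    return (f(sigma(t)) - f(t)) / mu(t)

f = lambda t: t**2
# Here sigma(1.0) = 2.0, so f^Delta(1.0) = (4 - 1) / 1 = 3.0:
print(delta_derivative(f, 1.0))
```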
== Integration ==
The delta integral is defined as the antiderivative with respect to the delta derivative. If F(t) has a continuous derivative f(t) = F^Δ(t), one sets
{\displaystyle \int _{r}^{s}f(t)\,\Delta t=F(s)-F(r).}
== Laplace transform and z-transform ==
A Laplace transform can be defined for functions on time scales, which uses the same table of transforms for any arbitrary time scale. This transform can be used to solve dynamic equations on time scales. If the time scale is the non-negative integers then the transform is equal to a modified Z-transform:
{\displaystyle {\mathcal {Z}}'\{x[z]\}={\frac {{\mathcal {Z}}\{x[z+1]\}}{z+1}}}
== Partial differentiation ==
Partial differential equations and partial difference equations are unified as partial dynamic equations on time scales.
== Multiple integration ==
Multiple integration on time scales is treated in Bohner (2005).
== Stochastic dynamic equations on time scales ==
Stochastic differential equations and stochastic difference equations can be generalized to stochastic dynamic equations on time scales.
== Measure theory on time scales ==
Associated with every time scale is a natural measure defined via
{\displaystyle \mu ^{\Delta }(A)=\lambda (\rho ^{-1}(A)),}
where λ denotes Lebesgue measure and ρ is the backward shift operator defined on ℝ. The delta integral turns out to be the usual Lebesgue–Stieltjes integral with respect to this measure
{\displaystyle \int _{r}^{s}f(t)\,\Delta t=\int _{[r,s)}f(t)\,d\mu ^{\Delta }(t)}
and the delta derivative turns out to be the Radon–Nikodym derivative with respect to this measure
{\displaystyle f^{\Delta }(t)={\frac {df}{d\mu ^{\Delta }}}(t).}
== Distributions on time scales ==
The Dirac delta and Kronecker delta are unified on time scales as the Hilger delta:
{\displaystyle \delta _{a}^{\mathbb {H} }(t)={\begin{cases}{\dfrac {1}{\mu (a)}},&t=a\\0,&t\neq a\end{cases}}}
== Fractional calculus on time scales ==
Fractional calculus on time scales is treated in Bastos, Mozyrska, and Torres.
== See also ==
Analysis on fractals for dynamic equations on a Cantor set.
Multiple-scale analysis
Method of averaging
Krylov–Bogoliubov averaging method
== References ==
== Further reading ==
Agarwal, Ravi; Bohner, Martin; O’Regan, Donal; Peterson, Allan (2002). "Dynamic equations on time scales: a survey". Journal of Computational and Applied Mathematics. 141 (1–2): 1–26. Bibcode:2002JCoAM.141....1A. doi:10.1016/S0377-0427(01)00432-0.
Dynamic Equations on Time Scales Special issue of Journal of Computational and Applied Mathematics (2002)
Dynamic Equations And Applications Special Issue of Advances in Difference Equations (2006)
Dynamic Equations on Time Scales: Qualitative Analysis and Applications Special issue of Nonlinear Dynamics And Systems Theory (2009)
== External links ==
The Baylor University Time Scales Group
Timescalewiki.org | Wikipedia/Time_scale_calculus |
In mathematics, an abstract differential equation is a differential equation in which the unknown function and its derivatives take values in some generic abstract space (a Hilbert space, a Banach space, etc.). Equations of this kind arise e.g. in the study of partial differential equations: if one of the variables is given a privileged position (e.g. time, in heat or wave equations) and all the others are put together, an ordinary "differential" equation with respect to the privileged variable is obtained. Adding boundary conditions can often be translated into considering solutions in some convenient function spaces.
The classical abstract differential equation which is most frequently encountered is the equation
{\displaystyle {\frac {\mathrm {d} u}{\mathrm {d} t}}=Au+f}
where the unknown function u = u(t) belongs to some function space X, 0 ≤ t ≤ T ≤ ∞, and A : X → X is an operator (usually a linear operator) acting on this space. An exhaustive treatment of the homogeneous (f = 0) case with a constant operator is given by the theory of C0-semigroups. Very often, the study of other abstract differential equations amounts (by e.g. reduction to a set of equations of the first order) to the study of this equation.
The theory of abstract differential equations was founded by Einar Hille in several papers and in his book Functional Analysis and Semi-Groups. Other main contributors were Kōsaku Yosida, Ralph Phillips, Isao Miyadera, and Selim Grigorievich Krein.
== Abstract Cauchy problem ==
=== Definition ===
Let A and B be two linear operators, with domains D(A) and D(B), acting in a Banach space X. A function u(t) : [0, T] → X is said to have a strong derivative (or to be Fréchet differentiable, or simply differentiable) at the point t₀ if there exists an element y ∈ X such that
{\displaystyle \lim _{h\to 0}\left\|{\frac {u(t_{0}+h)-u(t_{0})}{h}}-y\right\|=0}
and its derivative is u′(t₀) = y.
A solution of the equation
{\displaystyle B{\frac {\mathrm {d} u}{\mathrm {d} t}}=Au}
is a function u(t) : [0, ∞) → D(A) ∩ D(B) such that:
(Bu)(t) ∈ C([0, ∞); X),
the strong derivative u′(t) exists for all t ∈ [0, ∞) and u′(t) ∈ D(B) for any such t, and
the previous equality holds for all t ∈ [0, ∞).
The Cauchy problem consists in finding a solution of the equation, satisfying the initial condition u(0) = u₀ ∈ D(A) ∩ D(B).
=== Well posedness ===
According to the definition of well-posed problem by Hadamard, the Cauchy problem is said to be well posed (or correct) on [0, ∞) if:
for any u₀ ∈ D(A) ∩ D(B) it has a unique solution, and
this solution depends continuously on the initial data in the sense that if uₙ(0) → 0 (with uₙ(0) ∈ D(A) ∩ D(B)), then uₙ(t) → 0 for the corresponding solution at every t ∈ [0, ∞).
A well posed Cauchy problem is said to be uniformly well posed if uₙ(0) → 0 implies uₙ(t) → 0 uniformly in t on each finite interval [0, T].
=== Semigroup of operators associated to a Cauchy problem ===
To an abstract Cauchy problem one can associate a semigroup of operators U(t), i.e. a family of bounded linear operators depending on a parameter t (0 < t < ∞) such that
{\displaystyle U(t_{1}+t_{2})=U(t_{1})U(t_{2})\quad (0<t_{1},t_{2}<\infty ).}
Consider the operator U(t) which assigns to the element u₀ ∈ D(A) ∩ D(B) the value of the solution u(t) of the Cauchy problem (u(0) = u₀) at the moment of time t > 0. If the Cauchy problem is well posed, then the operator U(t) is defined on D(A) ∩ D(B) and forms a semigroup.
Additionally, if D(A) ∩ D(B) is dense in X, the operator U(t) can be extended to a bounded linear operator defined on the entire space X. In this case one can associate to any x₀ ∈ X the function U(t)x₀, for any t > 0. Such a function is called a generalized solution of the Cauchy problem.
If D(A) ∩ D(B) is dense in X and the Cauchy problem is uniformly well posed, then the associated semigroup U(t) is a C0-semigroup in X.
Conversely, if A is the infinitesimal generator of a C0-semigroup U(t), then the Cauchy problem
{\displaystyle {\frac {\mathrm {d} u}{\mathrm {d} t}}=Au,\quad u(0)=u_{0}\in D(A)}
is uniformly well posed and the solution is given by
{\displaystyle u(t)=U(t)u_{0}.}
== Nonhomogeneous problem ==
The Cauchy problem
{\displaystyle {\frac {\mathrm {d} u}{\mathrm {d} t}}=Au+f,\quad u(0)=u_{0}\in D(A)}
with f : [0, ∞) → X, is called nonhomogeneous when f(t) ≠ 0. The following theorem gives some sufficient conditions for the existence of the solution:
Theorem. If A is an infinitesimal generator of a C0-semigroup T(t) and f is continuously differentiable, then the function
{\displaystyle u(t)=T(t)u_{0}+\int _{0}^{t}T(t-s)f(s)\,ds,\quad t\geq 0}
is the unique solution to the (abstract) nonhomogeneous Cauchy problem.
The integral on the right-hand side is to be understood as a Bochner integral.
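In the finite-dimensional case X = ℝⁿ the semigroup is simply the matrix exponential T(t) = e^{tA}, so the variation-of-constants formula above can be evaluated directly. The following is a hedged sketch (the matrix, forcing term, and quadrature choice are my own illustrations, not from the text):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # generator; T(t) = expm(t*A)
u0 = np.array([1.0, 0.0])
f = lambda s: np.array([0.0, np.cos(s)])  # continuously differentiable forcing

def u(t):
    # u(t) = T(t) u0 + integral_0^t T(t - s) f(s) ds  (a Bochner integral,
    # here an ordinary vector-valued integral evaluated by quadrature)
    hom = expm(t * A) @ u0
    inhom, _ = quad_vec(lambda s: expm((t - s) * A) @ f(s), 0.0, t)
    return hom + inhom

print(u(1.0))
```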
== Time-dependent problem ==
The problem of finding a solution to the initial value problem
{\displaystyle {\frac {\mathrm {d} u}{\mathrm {d} t}}=A(t)u+f,\quad u(0)=u_{0}\in D(A),}
where the unknown is a function u : [0, T] → X, f : [0, T] → X is given and, for each t ∈ [0, T], A(t) is a given, closed, linear operator in X with domain D[A(t)] = D, independent of t and dense in X, is called the time-dependent Cauchy problem.
An operator valued function U(t, τ) with values in B(X) (the space of all bounded linear operators from X to X), defined and strongly continuous jointly in t, τ for 0 ≤ τ ≤ t ≤ T, is called a fundamental solution of the time-dependent problem if:
the partial derivative ∂U(t, τ)/∂t exists in the strong topology of X, belongs to B(X) for 0 ≤ τ ≤ t ≤ T, and is strongly continuous in t for 0 ≤ τ ≤ t ≤ T;
the range of U(t, τ) is in D;
{\displaystyle {\frac {\partial U(t,\tau )}{\partial t}}+A(t)U(t,\tau )=0,\quad 0\leq \tau \leq t\leq T,}
and U(τ, τ) = I.
U(t, τ) is also called the evolution operator, propagator, solution operator or Green's function.
A function u : [0, T] → X is called a mild solution of the time-dependent problem if it admits the integral representation
{\displaystyle u(t)=U(t,0)u_{0}+\int _{0}^{t}U(t,s)f(s)\,ds,\quad t\geq 0.}
There are various known sufficient conditions for the existence of the evolution operator U(t, τ). In practically all cases considered in the literature −A(t) is assumed to be the infinitesimal generator of a C0-semigroup on X. Roughly speaking, if −A(t) is the infinitesimal generator of a contraction semigroup the equation is said to be of hyperbolic type; if −A(t) is the infinitesimal generator of an analytic semigroup the equation is said to be of parabolic type.
== Nonlinear problem ==
The problem of finding a solution to either
{\displaystyle {\frac {\mathrm {d} u}{\mathrm {d} t}}=f(t,u),\quad u(0)=u_{0}\in X}
where f : [0, T] × X → X is given, or
{\displaystyle {\frac {\mathrm {d} u}{\mathrm {d} t}}=A(t)u,\quad u(0)=u_{0}\in D(A)}
where A is a nonlinear operator with domain D(A) ⊆ X, is called a nonlinear Cauchy problem.
== See also ==
Cauchy problem
C0-semigroup
== References == | Wikipedia/Abstract_differential_equation |
The infinite element method is a numerical method for solving problems of engineering and mathematical physics. It is a modification of the finite element method. The method divides the domain concerned into sections of infinite length. In contrast with a finite element, which is approximated by polynomial expressions on a finite support, the unbounded length of the infinite element is fitted with functions that allow the evaluation of the field at the asymptote. The number of functions and points of interpolation defines the accuracy of the element in the infinite direction. The method is commonly used to solve acoustic problems, as it makes it possible to respect the Sommerfeld condition of non-return of the acoustic waves and the diffusion of the pressure waves in the far field.
== References == | Wikipedia/Infinite_element_method |
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.
Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:
Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
== Definition ==
In mathematics, a linear map (or linear function) f(x) is one which satisfies both of the following properties:
Additivity or superposition principle: {\displaystyle \textstyle f(x+y)=f(x)+f(y);}
Homogeneity: {\displaystyle \textstyle f(\alpha x)=\alpha f(x).}
Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle
{\displaystyle f(\alpha x+\beta y)=\alpha f(x)+\beta f(y)}
An equation written as f(x) = C is called linear if f(x) is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0 and f(x) is a homogeneous function.
The definition f(x) = C is very general in that x can be any sensible mathematical object (number, vector, function, etc.), and the function f(x) can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f(x) contains differentiation with respect to x, the result will be a differential equation.
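The two defining properties are easy to test numerically. The short sketch below (the choice of maps is mine) checks additivity for the linear map f(x) = 3x and for the nonlinear map f(x) = x², for which superposition fails:

```python
def additive(f, x, y, tol=1e-12):
    """Check f(x + y) == f(x) + f(y) for one pair of inputs."""
    return abs(f(x + y) - (f(x) + f(y))) < tol

linear = lambda x: 3.0 * x
square = lambda x: x ** 2   # nonlinear: (x + y)**2 != x**2 + y**2 in general

print(additive(linear, 2.0, 5.0))   # True
print(additive(square, 2.0, 5.0))   # False: 49 != 4 + 25
```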
== Nonlinear systems of equations ==
A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation.
For a single equation of the form f(x) = 0, many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a polynomial equation such as
{\displaystyle x^{2}+x-1=0.}
The general root-finding algorithms apply to polynomial roots, but, in general, they do not find all the roots; moreover, when they fail to find a root, this does not imply that no root exists. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation.
Solving systems of polynomial equations, that is finding the common zeros of a set of several polynomials in several variables is a difficult problem for which elaborate algorithms have been designed, such as Gröbner base algorithms.
For the general case of a system of equations formed by equating several differentiable functions to zero, the main method is Newton's method and its variants. Generally these methods may provide a solution, but do not provide any information on the number of solutions.
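As a concrete sketch of this kind of solving (the system and starting point are my own example; scipy.optimize.fsolve wraps a modified Powell hybrid method rather than plain Newton), here is one common zero of two equations in two variables:

```python
import numpy as np
from scipy.optimize import fsolve

def system(v):
    x, y = v
    return [x**2 + y**2 - 1.0,   # unit circle
            x - y]               # diagonal line

root = fsolve(system, x0=[1.0, 0.0])
print(root)                              # one solution: (1/sqrt(2), 1/sqrt(2))
print(np.allclose(system(root), 0.0))    # True; says nothing about other roots
```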
== Nonlinear recurrence relations ==
A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures. These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
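The logistic map is the standard small example. The sketch below (parameter values are my own choices) iterates x_{n+1} = r·x_n·(1 − x_n) and shows the qualitative change from a stable fixed point to chaotic-looking behavior as r grows:

```python
def logistic_orbit(r, x0=0.2, n=50):
    """Iterate the nonlinear recurrence x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

print(logistic_orbit(2.8)[-3:])   # settles near the fixed point 1 - 1/r
print(logistic_orbit(4.0)[-3:])   # aperiodic and sensitive to x0
```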
== Nonlinear differential equations ==
A system of differential equations is said to be nonlinear if it is not a system of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.
One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations, however the lack of a superposition principle prevents the construction of new solutions.
=== Ordinary differential equations ===
First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation
{\displaystyle {\frac {du}{dx}}=-u^{2}}
has
{\displaystyle u={\frac {1}{x+C}}}
as a general solution (and also the special solution u = 0, corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as
{\displaystyle {\frac {du}{dx}}+u^{2}=0}
and the left-hand side of the equation is not a linear function of u and its derivatives. Note that if the u² term were replaced with u, the problem would be linear (the exponential decay problem).
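This is one of the few nonlinear ODEs a computer algebra system solves in closed form. A hedged SymPy check (symbol names are mine):

```python
import sympy as sp

x = sp.symbols("x")
u = sp.Function("u")

sol = sp.dsolve(sp.Eq(u(x).diff(x), -u(x)**2), u(x))
print(sol)   # Eq(u(x), 1/(C1 + x)), matching the general solution above
```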
Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.
Common methods for the qualitative analysis of nonlinear ordinary differential equations include:
Examination of any conserved quantities, especially in Hamiltonian systems
Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
Linearization via Taylor expansion
Change of variables into something easier to study
Bifurcation theory
Perturbation methods (can be applied to algebraic equations too)
Existence of finite-duration solutions, which can happen under specific conditions for some non-linear ordinary differential equations.
=== Partial differential equations ===
The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.
Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier-Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.
Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.
=== Pendula ===
A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\sin(\theta )=0}
where gravity points "downwards" and θ is the angle the pendulum forms with its rest position, as shown in the figure at right. One approach to "solving" this equation is to use dθ/dt as an integrating factor, which would eventually yield
{\displaystyle \int {\frac {d\theta }{\sqrt {C_{0}+2\cos(\theta )}}}=t+C_{1}}
which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless C₀ = 2).
Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at θ = 0, called the small angle approximation, is
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\theta =0}
since sin(θ) ≈ θ for θ ≈ 0. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at θ = π, corresponding to the pendulum being straight up:
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\pi -\theta =0}
since sin(θ) ≈ π − θ for θ ≈ π. The solution to this problem involves hyperbolic sinusoids, and note that unlike the small angle approximation, this approximation is unstable, meaning that |θ| will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.
One more interesting linearization is possible around θ = π/2, around which sin(θ) ≈ 1:
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+1=0.}
This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure at right. Other techniques may be used to find (exact) phase portraits and approximate periods.
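The quality of the small angle approximation is easy to probe numerically. This sketch (initial angles and time horizon are my choices) integrates the full nonlinear pendulum with solve_ivp and compares it with the harmonic linearization θ(t) = θ₀ cos t; the two stay close for small θ₀ and drift apart for large swings:

```python
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, s):
    theta, omega = s
    return [omega, -np.sin(theta)]   # theta'' + sin(theta) = 0 as a system

t_eval = np.linspace(0.0, 10.0, 200)
for theta0 in (0.1, 2.0):
    sol = solve_ivp(pendulum, (0.0, 10.0), [theta0, 0.0],
                    t_eval=t_eval, rtol=1e-9)
    linear = theta0 * np.cos(t_eval)   # small angle solution
    err = np.max(np.abs(sol.y[0] - linear))
    print(f"theta0 = {theta0}: max deviation from linearization = {err:.3f}")
```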
== Types of nonlinear dynamic behaviors ==
Amplitude death – any oscillations present in the system cease due to some kind of interaction with other system or feedback by the same system
Chaos – values of a system cannot be predicted indefinitely far into the future, and fluctuations are aperiodic
Multistability – the presence of two or more stable states
Solitons – self-reinforcing solitary waves
Limit cycles – asymptotic periodic orbits to which destabilized fixed points are attracted.
Self-oscillations – feedback oscillations taking place in open dissipative physical systems.
== Examples of nonlinear equations ==
== See also ==
== References ==
== Further reading ==
== External links ==
Command and Control Research Program (CCRP)
New England Complex Systems Institute: Concepts in Complex Systems
Nonlinear Dynamics I: Chaos at MIT's OpenCourseWare
Nonlinear Model Library – (in MATLAB) a Database of Physical Systems
The Center for Nonlinear Studies at Los Alamos National Laboratory | Wikipedia/Non-linear_differential_equations |
In mathematical analysis, integral equations are equations in which an unknown function appears under an integral sign. In mathematical notation, integral equations may thus be expressed as being of the form:
{\displaystyle f(x_{1},x_{2},x_{3},\ldots ,x_{n};u(x_{1},x_{2},x_{3},\ldots ,x_{n});I^{1}(u),I^{2}(u),I^{3}(u),\ldots ,I^{m}(u))=0}
where I^i(u) is an integral operator acting on u. Hence, integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals. A direct comparison can be seen with the mathematical form of the general integral equation above with the general form of a differential equation which may be expressed as follows:
{\displaystyle f(x_{1},x_{2},x_{3},\ldots ,x_{n};u(x_{1},x_{2},x_{3},\ldots ,x_{n});D^{1}(u),D^{2}(u),D^{3}(u),\ldots ,D^{m}(u))=0}
where D^i(u) may be viewed as a differential operator of order i. Due to this close connection between differential and integral equations, one can often convert between the two. For example, one method of solving a boundary value problem is by converting the differential equation with its boundary conditions into an integral equation and solving the integral equation. In addition, because one can convert between the two, differential equations in physics such as Maxwell's equations often have an analog integral and differential form. See also, for example, Green's function and Fredholm theory.
== Classification and overview ==
Various classification methods for integral equations exist. A few standard classifications include distinctions between linear and nonlinear; homogeneous and inhomogeneous; Fredholm and Volterra; first kind, second kind, and third kind; and singular and regular integral equations. These distinctions usually rest on some fundamental property such as the consideration of the linearity of the equation or the homogeneity of the equation. These comments are made concrete through the following definitions and examples:
=== Linearity ===
Linear: An integral equation is linear if the unknown function u(x) and its integrals appear linearly in the equation. Hence, an example of a linear equation would be:
{\displaystyle u(x)=f(x)+\lambda \int _{\alpha (x)}^{\beta (x)}K(x,t)\cdot u(t)\,dt}
As a note on naming convention: i) u(x) is called the unknown function, ii) f(x) is called a known function, iii) K(x,t) is a function of two variables and often called the Kernel function, and iv) λ is an unknown factor or parameter, which plays the same role as the eigenvalue in linear algebra.
Nonlinear: An integral equation is nonlinear if the unknown function u(x) or any of its integrals appear nonlinearly in the equation. Hence, examples of nonlinear equations would be the equation above if we replaced u(t) with u²(x), cos(u(x)), or e^{u(x)}, such as:
{\displaystyle u(x)=f(x)+\int _{\alpha (x)}^{\beta (x)}K(x,t)\cdot u^{2}(t)\,dt}
Certain kinds of nonlinear integral equations have specific names. A selection of such equations are:
Nonlinear Volterra integral equations of the second kind, which have the general form:
{\displaystyle u(x)=f(x)+\lambda \int _{a}^{x}K(x,t)\,F(x,t,u(t))\,dt,}
where F is a known function.
Nonlinear Fredholm integral equations of the second kind, which have the general form:
{\displaystyle f(x)=F\left(x,\int _{a}^{b}K(x,y,f(x),f(y))\,dy\right).}
A special type of nonlinear Fredholm integral equations of the second kind is given by the form:
{\displaystyle f(x)=g(x)+\int _{a}^{b}K(x,y,f(x),f(y))\,dy,}
which has the two special subclasses:
Urysohn equation: {\displaystyle f(x)=g(x)+\int _{a}^{b}k(x,y,f(y))\,dy.}
Hammerstein equation: {\displaystyle f(x)=g(x)+\int _{a}^{b}k(x,y)\,G(y,f(y))\,dy.}
More information on the Hammerstein equation and different versions of the Hammerstein equation can be found in the Hammerstein section below.
=== Location of the unknown equation ===
First kind: An integral equation is called an integral equation of the first kind if the unknown function appears only under the integral sign. An example would be:
{\displaystyle f(x)=\int _{a}^{b}K(x,t)\,u(t)\,dt.}
Second kind: An integral equation is called an integral equation of the second kind if the unknown function also appears outside the integral.
Third kind: An integral equation is called an integral equation of the third kind if it is a linear integral equation of the following form:
{\displaystyle g(t)u(t)+\lambda \int _{a}^{b}K(t,x)u(x)\,dx=f(t)}
where g(t) vanishes at least once in the interval [a, b] or where g(t) vanishes at a finite number of points in (a, b).
=== Limits of integration ===
Fredholm: An integral equation is called a Fredholm integral equation if both of the limits of integration in all integrals are fixed and constant. An example would be that the integral is taken over a fixed subset of {\displaystyle \mathbb {R} ^{n}}. Hence, the following two examples are Fredholm equations:
Fredholm equation of the first kind:
{\displaystyle f(x)=\int _{a}^{b}K(x,t)\,u(t)\,dt}.
Fredholm equation of the second kind:
{\displaystyle u(x)=f(x)+\lambda \int _{a}^{b}K(x,t)\,u(t)\,dt.}
Note that we can express integral equations such as those above also using integral operator notation. For example, we can define the Fredholm integral operator as:
{\displaystyle ({\mathcal {F}}y)(t):=\int _{t_{0}}^{T}K(t,s)\,y(s)\,ds.}
Hence, the above Fredholm equation of the second kind may be written compactly as:
{\displaystyle y(t)=g(t)+\lambda ({\mathcal {F}}y)(t).}
Volterra: An integral equation is called a Volterra integral equation if at least one of the limits of integration is a variable. Hence, the integral is taken over a domain varying with the variable of integration. Examples of Volterra equations would be:
Volterra integral equation of the first kind:
{\displaystyle f(x)=\int _{a}^{x}K(x,t)\,u(t)\,dt}
Volterra integral equation of the second kind:
{\displaystyle u(x)=f(x)+\lambda \int _{a}^{x}K(x,t)\,u(t)\,dt.}
As with Fredholm equations, we can again adopt operator notation. Thus, we can define the linear Volterra integral operator {\displaystyle {\mathcal {V}}:C(I)\to C(I)} as follows:
{\displaystyle ({\mathcal {V}}\varphi )(t):=\int _{t_{0}}^{t}K(t,s)\,\varphi (s)\,ds}
where {\displaystyle t\in I=[t_{0},T]} and K(t,s) is called the kernel and must be continuous on the domain {\displaystyle D:=\{(t,s):0\leq s\leq t\leq T\leq \infty \}}. Hence, the Volterra integral equation of the first kind may be written as:
{\displaystyle ({\mathcal {V}}y)(t)=g(t)}
with {\displaystyle g(0)=0}. In addition, a linear Volterra integral equation of the second kind for an unknown function {\displaystyle y(t)} and a given continuous function {\displaystyle g(t)} on the interval {\displaystyle I}, where {\displaystyle t\in I}, reads:
{\displaystyle y(t)=g(t)+({\mathcal {V}}y)(t).}
Volterra–Fredholm: In higher dimensions, integral equations such as Fredholm–Volterra integral equations (VFIE) exist. A VFIE has the form:
{\displaystyle u(t,x)=g(t,x)+({\mathcal {T}}u)(t,x)}
with {\displaystyle x\in \Omega } and {\displaystyle \Omega } being a closed bounded region in {\displaystyle \mathbb {R} ^{d}} with piecewise smooth boundary. The Fredholm–Volterra integral operator {\displaystyle {\mathcal {T}}:C(I\times \Omega )\to C(I\times \Omega )} is defined as:
{\displaystyle ({\mathcal {T}}u)(t,x):=\int _{0}^{t}\int _{\Omega }K(t,s,x,\xi )\,G(u(s,\xi ))\,d\xi \,ds.}
Note that while throughout this article, the bounds of the integral are usually written as intervals, this need not be the case. In general, integral equations don't always need to be defined over an interval {\displaystyle [a,b]=I}, but could also be defined over a curve or surface.
=== Homogeneity ===
Homogeneous: An integral equation is called homogeneous if the known function {\displaystyle f} is identically zero.
Inhomogeneous: An integral equation is called inhomogeneous if the known function {\displaystyle f} is nonzero.
=== Regularity ===
Regular: An integral equation is called regular if the integrals used are all proper integrals.
Singular or weakly singular: An integral equation is called singular or weakly singular if the integral is an improper integral. This could be either because at least one of the limits of integration is infinite or because the kernel becomes unbounded at at least one point in the interval or domain over which the integration is performed.
Examples include:
{\displaystyle F(\lambda )=\int _{-\infty }^{\infty }e^{-i\lambda x}u(x)\,dx}
{\displaystyle L[u(x)]=\int _{0}^{\infty }e^{-\lambda x}u(x)\,dx}
These two integral equations are the Fourier transform and the Laplace transform of u(x), respectively, with both being Fredholm equations of the first kind with kernel {\displaystyle K(x,t)=e^{-i\lambda x}} and {\displaystyle K(x,t)=e^{-\lambda x}}, respectively. Another example of a singular integral equation in which the kernel becomes unbounded is:
{\displaystyle x^{2}=\int _{0}^{x}{\frac {1}{\sqrt {x-t}}}\,u(t)\,dt.}
This equation is a special form of the more general weakly singular Volterra integral equation of the first kind, called Abel's integral equation:
{\displaystyle g(x)=\int _{a}^{x}{\frac {f(y)}{\sqrt {x-y}}}\,dy}
Strongly singular: An integral equation is called strongly singular if the integral is defined by a special regularisation, for example, by the Cauchy principal value.
=== Integro-differential equations ===
An integro-differential equation, as the name suggests, combines differential and integral operators into one equation. There are many versions, including the Volterra integro-differential equation and delay type equations as defined below. For example, using the Volterra operator as defined above, the Volterra integro-differential equation may be written as:
{\displaystyle y'(t)=f(t,y(t))+(V_{\alpha }y)(t)}
For delay problems, we can define the delay integral operator {\displaystyle ({\mathcal {W}}_{\theta ,\alpha }y)} as:
{\displaystyle ({\mathcal {W}}_{\theta ,\alpha }y)(t):=\int _{\theta (t)}^{t}(t-s)^{-\alpha }\cdot k_{2}(t,s,y(s),y'(s))\,ds}
where the delay integro-differential equation may be expressed as:
{\displaystyle y'(t)=f(t,y(t),y(\theta (t)))+({\mathcal {W}}_{\theta ,\alpha }y)(t).}
== Volterra integral equations ==
=== Uniqueness and existence theorems in 1D ===
The solution to a linear Volterra integral equation of the first kind, given by the equation:
{\displaystyle ({\mathcal {V}}y)(t)=g(t)}
can be described by the following uniqueness and existence theorem. Recall that the Volterra integral operator {\displaystyle {\mathcal {V}}:C(I)\to C(I)} can be defined as follows:
{\displaystyle ({\mathcal {V}}\varphi )(t):=\int _{t_{0}}^{t}K(t,s)\,\varphi (s)\,ds}
where {\displaystyle t\in I=[t_{0},T]} and K(t,s) is called the kernel and must be continuous on the domain {\displaystyle D:=\{(t,s):0\leq s\leq t\leq T\leq \infty \}}.
The solution to a linear Volterra integral equation of the second kind, given by the equation:
{\displaystyle y(t)=g(t)+({\mathcal {V}}y)(t)}
can be described by the following uniqueness and existence theorem.
=== Volterra integral equations in R2 ===
A Volterra integral equation of the second kind can be expressed as follows:
{\displaystyle u(t,x)=g(t,x)+\int _{0}^{x}\int _{0}^{y}K(x,\xi ,y,\eta )\,u(\xi ,\eta )\,d\eta \,d\xi }
where {\displaystyle (x,y)\in \Omega :=[0,X]\times [0,Y]}, {\displaystyle g\in C(\Omega )}, {\displaystyle K\in C(D_{2})} and {\displaystyle D_{2}:=\{(x,\xi ,y,\eta ):0\leq \xi \leq x\leq X,0\leq \eta \leq y\leq Y\}}. This integral equation has a unique solution {\displaystyle u\in C(\Omega )} given by:
{\displaystyle u(t,x)=g(t,x)+\int _{0}^{x}\int _{0}^{y}R(x,\xi ,y,\eta )\,g(\xi ,\eta )\,d\eta \,d\xi }
where {\displaystyle R} is the resolvent kernel of K.
=== Uniqueness and existence theorems of Fredholm–Volterra equations ===
As defined above, a VFIE has the form:
{\displaystyle u(t,x)=g(t,x)+({\mathcal {T}}u)(t,x)}
with {\displaystyle x\in \Omega } and {\displaystyle \Omega } being a closed bounded region in {\displaystyle \mathbb {R} ^{d}} with piecewise smooth boundary. The Fredholm–Volterra integral operator {\displaystyle {\mathcal {T}}:C(I\times \Omega )\to C(I\times \Omega )} is defined as:
{\displaystyle ({\mathcal {T}}u)(t,x):=\int _{0}^{t}\int _{\Omega }K(t,s,x,\xi )\,G(u(s,\xi ))\,d\xi \,ds.}
In the case where the kernel K may be written as {\displaystyle K(t,s,x,\xi )=k(t-s)H(x,\xi )}, K is called the positive memory kernel. With this in mind, we can now introduce the following theorem:
=== Special Volterra equations ===
A special type of Volterra equation which is used in various applications is defined as follows:
{\displaystyle y(t)=g(t)+(V_{\alpha }y)(t)}
where {\displaystyle t\in I=[t_{0},T]}, the function g(t) is continuous on the interval {\displaystyle I}, and the Volterra integral operator {\displaystyle (V_{\alpha }y)} is given by:
{\displaystyle (V_{\alpha }y)(t):=\int _{t_{0}}^{t}(t-s)^{-\alpha }\cdot k(t,s,y(s))\,ds}
with {\displaystyle 0\leq \alpha <1}.
== Converting IVP to integral equations ==
In the following section, we give an example of how to convert an initial value problem (IVP) into an integral equation. There are multiple motivations for doing so, among them that integral equations are often more readily solvable and are more suitable for proving existence and uniqueness theorems.
The following example is given by Wazwaz on pages 1 and 2 of his book. We examine the IVP given by the equation:
{\displaystyle u'(t)=2tu(t),\qquad x\geq 0}
and the initial condition:
{\displaystyle u(0)=1}
If we integrate both sides of the equation, we get:
{\displaystyle \int _{0}^{x}u'(t)\,dt=\int _{0}^{x}2tu(t)\,dt}
and by the fundamental theorem of calculus, we obtain:
{\displaystyle u(x)-u(0)=\int _{0}^{x}2tu(t)\,dt}
Rearranging the equation above, we get the integral equation:
{\displaystyle u(x)=1+\int _{0}^{x}2tu(t)\,dt}
which is a Volterra integral equation of the form:
{\displaystyle u(x)=f(x)+\int _{\alpha (x)}^{\beta (x)}K(x,t)\cdot u(t)\,dt}
where K(x,t) is called the kernel and equal to 2t, and f(x) = 1.
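The conversion, and the equivalence of the two formulations, can be checked mechanically with a computer algebra system. The following sketch uses SymPy together with the exact solution u(t) = e^{t²} of this IVP (a standard fact, easily confirmed by differentiation):

<syntaxhighlight lang="python">
import sympy as sp

x, t = sp.symbols('x t')
u = sp.exp(t**2)                     # exact solution u(t) = e^{t^2}

# IVP: u'(t) = 2 t u(t) with u(0) = 1
assert sp.simplify(sp.diff(u, t) - 2*t*u) == 0
assert u.subs(t, 0) == 1

# Equivalent Volterra equation: u(x) = 1 + integral_0^x 2 t u(t) dt
rhs = 1 + sp.integrate(2*t*sp.exp(t**2), (t, 0, x))
assert sp.simplify(sp.exp(x**2) - rhs) == 0
print(rhs)  # exp(x**2)
</syntaxhighlight>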
== Numerical solution ==
It is worth noting that integral equations often do not have an analytical solution, and must be solved numerically. An example of this is evaluating the electric-field integral equation (EFIE) or magnetic-field integral equation (MFIE) over an arbitrarily shaped object in an electromagnetic scattering problem.
One method of solving numerically is to discretize the variables and replace the integral with a quadrature rule:
{\displaystyle \sum _{j=0}^{n}w_{j}K(s_{i},t_{j})u(t_{j})=f(s_{i}),\qquad i=0,1,\dots ,n.}
Then we have a system of n + 1 equations in n + 1 unknowns. By solving it we get the values
{\displaystyle u(t_{0}),u(t_{1}),\dots ,u(t_{n}).}
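To make the discretization concrete, below is a minimal Python (NumPy) sketch of this approach, often called the Nyström method, applied to a Fredholm equation of the second kind. The kernel, λ, and f are toy choices (not taken from the text above) picked so that the exact solution u(s) = s is known:

<syntaxhighlight lang="python">
import numpy as np

# Nystrom discretization of  u(s) = f(s) + lam * int_0^1 K(s,t) u(t) dt.
# Toy data: K(s,t) = s*t, lam = 1, f(s) = 2s/3, exact solution u(s) = s.
n = 100
t, w = np.polynomial.legendre.leggauss(n)  # Gauss nodes/weights on [-1, 1]
t, w = 0.5 * (t + 1.0), 0.5 * w            # map nodes and weights to [0, 1]

lam = 1.0
K = np.outer(t, t)                         # kernel matrix K(s_i, t_j) = s_i * t_j
A = np.eye(n) - lam * K * w                # rows: u_i - lam * sum_j w_j K_ij u_j
u = np.linalg.solve(A, 2.0 * t / 3.0)

print(np.max(np.abs(u - t)))               # ~1e-15: recovers u(s) = s
</syntaxhighlight>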
== Integral equations as a generalization of eigenvalue equations ==
Certain homogeneous linear integral equations can be viewed as the continuum limit of eigenvalue equations. Using index notation, an eigenvalue equation can be written as
{\displaystyle \sum _{j}M_{i,j}v_{j}=\lambda v_{i}}
where M = [Mi,j] is a matrix, v is one of its eigenvectors, and λ is the associated eigenvalue.
Taking the continuum limit, i.e., replacing the discrete indices i and j with continuous variables x and y, yields
{\displaystyle \int K(x,y)\,\varphi (y)\,dy=\lambda \,\varphi (x),}
where the sum over j has been replaced by an integral over y and the matrix M and the vector v have been replaced by the kernel K(x, y) and the eigenfunction φ(y). (The limits on the integral are fixed, analogously to the limits on the sum over j.) This gives a linear homogeneous Fredholm equation of the second type.
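The correspondence also runs the other way numerically: discretizing the kernel on a grid turns the integral eigenvalue problem back into a matrix eigenvalue problem. A small Python sketch, using the illustrative kernel K(x, y) = min(x, y) on [0, 1] (an assumed choice whose exact eigenvalues 1/((k − 1/2)π)² are classical):

<syntaxhighlight lang="python">
import numpy as np

n = 400
h = 1.0 / n
x = (np.arange(n) + 0.5) * h                     # midpoint grid on [0, 1]
K = np.minimum.outer(x, x)                       # kernel matrix K(x_i, y_j)
lams = np.sort(np.linalg.eigvalsh(K * h))[::-1]  # weight h plays the role of dy

exact = [1.0 / ((k - 0.5) * np.pi) ** 2 for k in (1, 2, 3)]
print(lams[:3])  # ~ [0.4053, 0.0450, 0.0162]
print(exact)     #   [0.4053, 0.0450, 0.0162]
</syntaxhighlight>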
In general, K(x, y) can be a distribution, rather than a function in the strict sense. If the distribution K has support only at the point x = y, then the integral equation reduces to a differential eigenfunction equation.
In general, Volterra and Fredholm integral equations can arise from a single differential equation, depending on which sort of conditions are applied at the boundary of the domain of its solution.
== Wiener–Hopf integral equations ==
Wiener–Hopf integral equations have the form:
{\displaystyle y(t)=\lambda x(t)+\int _{0}^{\infty }k(t-s)\,x(s)\,ds,\qquad 0\leq t<\infty .}
Originally, such equations were studied in connection with problems in radiative transfer, and more recently, they have been related to the solution of boundary integral equations for planar problems in which the boundary is only piecewise smooth.
== Hammerstein equations ==
A Hammerstein equation is a nonlinear first-kind Volterra integral equation of the form:
{\displaystyle g(t)=\int _{0}^{t}K(t,s)\,G(s,y(s))\,ds.}
Under certain regularity conditions (differentiability of g and K, and K(t,t) ≠ 0, so that the Leibniz integral rule can be applied to both sides), the equation is equivalent to the implicit Volterra integral equation of the second kind:
{\displaystyle G(t,y(t))=g_{1}(t)-\int _{0}^{t}K_{1}(t,s)\,G(s,y(s))\,ds}
where:
{\displaystyle g_{1}(t):={\frac {g'(t)}{K(t,t)}}\,\,\,\,\,\,\,{\text{and}}\,\,\,\,\,\,\,K_{1}(t,s):=-{\frac {1}{K(t,t)}}{\frac {\partial K(t,s)}{\partial t}}.}
The equation may, however, also be expressed in operator form, which motivates the definition of the following operator, called the nonlinear Volterra–Hammerstein operator:
{\displaystyle ({\mathcal {H}}y)(t):=\int _{0}^{t}K(t,s)\,G(s,y(s))\,ds}
Here {\displaystyle G:I\times \mathbb {R} \to \mathbb {R} } is a smooth function while the kernel K may be continuous, i.e. bounded, or weakly singular. The corresponding second-kind Volterra integral equation, called the Volterra–Hammerstein integral equation of the second kind, or simply the Hammerstein equation for short, can be expressed as:
{\displaystyle y(t)=g(t)+({\mathcal {H}}y)(t)}
In certain applications, the nonlinearity of the function G may be treated as being only semi-linear in the form of:
{\displaystyle G(s,y)=y+H(s,y)}
In this case, we have the following semi-linear Volterra integral equation:
{\displaystyle y(t)=g(t)+({\mathcal {H}}y)(t)=g(t)+\int _{0}^{t}K(t,s)[y(s)+H(s,y(s))]\,ds}
In this form, we can state an existence and uniqueness theorem for the semi-linear Hammerstein integral equation.
We can also write the Hammerstein equation using a different operator called the Niemytzki operator, or substitution operator,
{\displaystyle {\mathcal {N}}}, defined as follows:
{\displaystyle ({\mathcal {N}}\varphi )(t):=G(t,\varphi (t))}
More about this can be found on page 75 of this book.
== Applications ==
Integral equations are important in many applications. Problems in which integral equations are encountered include radiative transfer, and the oscillation of a string, membrane, or axle. Oscillation problems may also be solved as differential equations.
Actuarial science (ruin theory)
Computational electromagnetics
Boundary element method
Inverse problems
Marchenko equation (inverse scattering transform)
Options pricing under jump-diffusion
Radiative transfer
Renewal theory
Viscoelasticity
Fluid mechanics
== See also ==
Differential equation
Integro-differential equation
Ruin theory
Volterra integral equation
== Bibliography ==
Agarwal, Ravi P., and Donal O'Regan. Integral and Integrodifferential Equations: Theory, Method and Applications. Gordon and Breach Science Publishers, 2000.
Brunner, Hermann. Collocation Methods for Volterra Integral and Related Functional Differential Equations. Cambridge University Press, 2004.
Burton, T. A. Volterra Integral and Differential Equations. Elsevier, 2005.
Chapter 7 It Mod 02-14-05 – Ira A. Fulton College of Engineering. https://www.et.byu.edu/~vps/ET502WWW/NOTES/CH7m.pdf.
Corduneanu, C. Integral Equations and Applications. Cambridge University Press, 2008.
Hackbusch, Wolfgang. Integral Equations Theory and Numerical Treatment. Birkhäuser, 1995.
Hochstadt, Harry. Integral Equations. Wiley-Interscience/John Wiley & Sons, 1989.
"Integral Equation." From Wolfram MathWorld, https://mathworld.wolfram.com/IntegralEquation.html.
"Integral Equation." Integral Equation – Encyclopedia of Mathematics, https://encyclopediaofmath.org/wiki/Integral_equation.
Jerri, Abdul J. Introduction to Integral Equations with Applications. Sampling Publishing, 2007.
Pipkin, A. C. A Course on Integral Equations. Springer-Verlag, 1991.
Polyanin, A. D., and Alexander V. Manzhirov. Handbook of Integral Equations. Chapman & Hall/CRC, 2008.
Wazwaz, Abdul-Majid. A First Course in Integral Equations. World Scientific, 2015.
== References ==
== Further reading ==
Kendall E. Atkinson. The Numerical Solution of Integral Equations of the Second Kind. Cambridge Monographs on Applied and Computational Mathematics, 1997.
George Arfken and Hans Weber. Mathematical Methods for Physicists. Harcourt/Academic Press, 2000.
Harry Bateman (1910) History and Present State of the Theory of Integral Equations, Report of the British Association.
Andrei D. Polyanin and Alexander V. Manzhirov Handbook of Integral Equations. CRC Press, Boca Raton, 1998. ISBN 0-8493-2876-4.
E. T. Whittaker and G. N. Watson. A Course of Modern Analysis Cambridge Mathematical Library.
M. Krasnov, A. Kiselev, G. Makarenko, Problems and Exercises in Integral Equations, Mir Publishers, Moscow, 1971
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Chapter 19. Integral Equations and Inverse Theory". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-17.
== External links ==
Integral Equations: Exact Solutions at EqWorld: The World of Mathematical Equations.
Integral Equations: Index at EqWorld: The World of Mathematical Equations.
"Integral equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Integral Equations (MIT OpenCourseWare)
In mathematics, a differential-algebraic system of equations (DAE) is a system of equations that either contains differential equations and algebraic equations, or is equivalent to such a system.
The set of the solutions of such a system is a differential algebraic variety, and corresponds to an ideal in a differential algebra of differential polynomials.
In the univariate case, a DAE in the variable t can be written as a single equation of the form
{\displaystyle F({\dot {x}},x,t)=0,}
where {\displaystyle x(t)} is a vector of unknown functions and the overdot denotes the time derivative, i.e., {\displaystyle {\dot {x}}={\frac {dx}{dt}}}.
They are distinct from ordinary differential equations (ODEs) in that a DAE is not completely solvable for the derivatives of all components of the function x because these may not all appear (i.e. some equations are algebraic); technically the distinction between an implicit ODE system [that may be rendered explicit] and a DAE system is that the Jacobian matrix {\displaystyle {\frac {\partial F({\dot {x}},x,t)}{\partial {\dot {x}}}}} is a singular matrix for a DAE system. This distinction between ODEs and DAEs is made because DAEs have different characteristics and are generally more difficult to solve.
In practical terms, the distinction between DAEs and ODEs is often that the solution of a DAE system depends on the derivatives of the input signal and not just the signal itself as in the case of ODEs; this issue is commonly encountered in nonlinear systems with hysteresis, such as the Schmitt trigger.
This difference is more clearly visible if the system may be rewritten so that instead of x we consider a pair {\displaystyle (x,y)} of vectors of dependent variables and the DAE has the form
{\displaystyle {\begin{aligned}{\dot {x}}(t)&=f(x(t),y(t),t),\\0&=g(x(t),y(t),t).\end{aligned}}}
where {\displaystyle x(t)\in \mathbb {R} ^{n}}, {\displaystyle y(t)\in \mathbb {R} ^{m}}, {\displaystyle f:\mathbb {R} ^{n+m+1}\to \mathbb {R} ^{n}} and {\displaystyle g:\mathbb {R} ^{n+m+1}\to \mathbb {R} ^{m}.}
A DAE system of this form is called semi-explicit. Every solution of the second half g of the equation defines a unique direction for x via the first half f of the equations, while the direction for y is arbitrary. But not every point (x,y,t) is a solution of g. The variables in x and the first half f of the equations get the attribute differential. The components of y and the second half g of the equations are called the algebraic variables or equations of the system. [The term algebraic in the context of DAEs only means free of derivatives and is not related to (abstract) algebra.]
The solution of a DAE consists of two parts, first the search for consistent initial values and second the computation of a trajectory. To find consistent initial values it is often necessary to consider the derivatives of some of the component functions of the DAE. The highest order of a derivative that is necessary for this process is called the differentiation index. The equations derived in computing the index and consistent initial values may also be of use in the computation of the trajectory. A semi-explicit DAE system can be converted to an implicit one by decreasing the differentiation index by one, and vice versa.
== Other forms of DAEs ==
The distinction of DAEs from ODEs becomes apparent if some of the dependent variables occur without their derivatives. The vector of dependent variables may then be written as the pair {\displaystyle (x,y)} and the system of differential equations of the DAE appears in the form
{\displaystyle F\left({\dot {x}},x,y,t\right)=0}
where
{\displaystyle x}, a vector in {\displaystyle \mathbb {R} ^{n}}, are dependent variables for which derivatives are present (differential variables),
{\displaystyle y}, a vector in {\displaystyle \mathbb {R} ^{m}}, are dependent variables for which no derivatives are present (algebraic variables),
{\displaystyle t}, a scalar (usually time), is an independent variable.
{\displaystyle F} is a vector of {\displaystyle n+m} functions that involve subsets of these {\displaystyle n+m+1} variables and {\displaystyle n} derivatives.
As a whole, the set of DAEs is a function
{\displaystyle F:\mathbb {R} ^{(2n+m+1)}\to \mathbb {R} ^{(n+m)}.}
Initial conditions must be a solution of the system of equations of the form
{\displaystyle F\left({\dot {x}}(t_{0}),\,x(t_{0}),y(t_{0}),t_{0}\right)=0.}
== Examples ==
The behaviour of a pendulum of length L with its pivot at (0,0), in Cartesian coordinates (x,y), is described by the Euler–Lagrange equations
{\displaystyle {\begin{aligned}{\dot {x}}&=u,&{\dot {y}}&=v,\\{\dot {u}}&=\lambda x,&{\dot {v}}&=\lambda y-g,\\x^{2}+y^{2}&=L^{2},\end{aligned}}}
where {\displaystyle \lambda } is a Lagrange multiplier. The momentum variables u and v should be constrained by the law of conservation of energy and their direction should point along the circle. Neither condition is explicit in those equations. Differentiation of the last equation leads to
{\displaystyle {\begin{aligned}&&{\dot {x}}\,x+{\dot {y}}\,y&=0\\\Rightarrow &&u\,x+v\,y&=0,\end{aligned}}}
restricting the direction of motion to the tangent of the circle. The next derivative of this equation implies
{\displaystyle {\begin{aligned}&&{\dot {u}}\,x+{\dot {v}}\,y+u\,{\dot {x}}+v\,{\dot {y}}&=0,\\\Rightarrow &&\lambda (x^{2}+y^{2})-gy+u^{2}+v^{2}&=0,\\\Rightarrow &&L^{2}\,\lambda -gy+u^{2}+v^{2}&=0,\end{aligned}}}
and the derivative of that last identity simplifies to
{\displaystyle L^{2}{\dot {\lambda }}-3gv=0}
which implies the conservation of energy since after integration the constant
{\displaystyle E={\tfrac {3}{2}}gy-{\tfrac {1}{2}}L^{2}\lambda ={\frac {1}{2}}(u^{2}+v^{2})+gy}
is the sum of kinetic and potential energy.
To obtain unique derivative values for all dependent variables, the last equation was differentiated three times. This gives a differentiation index of 3, which is typical for constrained mechanical systems.
If initial values {\displaystyle (x_{0},u_{0})} and a sign for y are given, the other variables are determined via {\displaystyle y=\pm {\sqrt {L^{2}-x^{2}}}}, and if {\displaystyle y\neq 0} then {\displaystyle v=-ux/y} and {\displaystyle \lambda =(gy-u^{2}-v^{2})/L^{2}}. To proceed to the next point it is sufficient to get the derivatives of x and u, that is, the system to solve is now
{\displaystyle {\begin{aligned}{\dot {x}}&=u,\\{\dot {u}}&=\lambda x,\\[0.3em]0&=x^{2}+y^{2}-L^{2},\\0&=ux+vy,\\0&=u^{2}-gy+v^{2}+L^{2}\,\lambda .\end{aligned}}}
This is a semi-explicit DAE of index 1. Another set of similar equations may be obtained starting from {\displaystyle (y_{0},v_{0})} and a sign for x.
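As a sketch of how this formulation can be used in practice, the following Python code integrates the differential variables (x, u) with SciPy while resolving the algebraic variables exactly as described above, taking the negative sign for y (pendulum below the pivot); the length, gravity, and initial data are assumed illustrative values:

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

L, grav = 1.0, 9.81

def rhs(t, z):
    # Differential variables (x, u); algebraic variables resolved as in the text:
    # y = -sqrt(L^2 - x^2), v = -u x / y, lambda = (g y - u^2 - v^2) / L^2.
    x, u = z
    y = -np.sqrt(L**2 - x**2)
    v = -u * x / y
    lam = (grav * y - u**2 - v**2) / L**2
    return [u, lam * x]

sol = solve_ivp(rhs, (0.0, 2.0), [0.1, 0.0], rtol=1e-10, atol=1e-10)
# For this small amplitude the trajectory closely matches the linearized
# pendulum x(t) = 0.1 * cos(sqrt(g/L) * t):
print(sol.y[0, -1], 0.1 * np.cos(np.sqrt(grav / L) * 2.0))
</syntaxhighlight>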
DAEs also naturally occur in the modelling of circuits with non-linear devices. Modified nodal analysis employing DAEs is used, for example, in the ubiquitous SPICE family of numeric circuit simulators. Similarly, Fraunhofer's Analog Insydes Mathematica package can be used to derive DAEs from a netlist and then simplify or even solve the equations symbolically in some cases. It is worth noting that the index of a DAE (of a circuit) can be made arbitrarily high by cascading or coupling operational amplifiers with positive feedback via capacitors.
== Semi-explicit DAE of index 1 ==
DAEs of the form
{\displaystyle {\begin{aligned}{\dot {x}}&=f(x,y,t),\\0&=g(x,y,t).\end{aligned}}}
are called semi-explicit. The index-1 property requires that g is solvable for y. In other words, the differentiation index is 1 if, by differentiating the algebraic equations with respect to t, an implicit ODE system results,
{\displaystyle {\begin{aligned}{\dot {x}}&=f(x,y,t)\\0&=\partial _{x}g(x,y,t){\dot {x}}+\partial _{y}g(x,y,t){\dot {y}}+\partial _{t}g(x,y,t),\end{aligned}}}
which is solvable for {\displaystyle ({\dot {x}},\,{\dot {y}})} if {\displaystyle \det \left(\partial _{y}g(x,y,t)\right)\neq 0.}
Every sufficiently smooth DAE is almost everywhere reducible to this semi-explicit index-1 form.
== Numerical treatment of DAE and applications ==
Two major problems in solving DAEs are index reduction and consistent initial conditions. Most numerical solvers require ordinary differential equations and algebraic equations of the form
{\displaystyle {\begin{aligned}{\frac {dx}{dt}}&=f\left(x,y,t\right),\\0&=g\left(x,y,t\right).\end{aligned}}}
It is a non-trivial task to convert arbitrary DAE systems into ODEs for solution by pure ODE solvers. Techniques which can be employed include the Pantelides algorithm and the dummy derivative index reduction method. Alternatively, a direct solution of high-index DAEs with inconsistent initial conditions is also possible. This solution approach involves a transformation of the derivative elements through orthogonal collocation on finite elements or direct transcription into algebraic expressions. This allows DAEs of any index to be solved without rearrangement in the open equation form
{\displaystyle {\begin{aligned}0&=f\left({\frac {dx}{dt}},x,y,t\right),\\0&=g\left(x,y,t\right).\end{aligned}}}
Once the model has been converted to algebraic equation form, it is solvable by large-scale nonlinear programming solvers (see APMonitor).
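For orientation, here is a minimal Python sketch of one classical alternative, the implicit Euler method applied directly to a semi-explicit index-1 DAE; the toy system (with exact solution x(t) = e^{−t}) and the step count are our assumptions, and each step solves the coupled nonlinear system with SciPy's fsolve:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import fsolve

# Semi-explicit DAE:  x' = f(x, y, t),  0 = g(x, y, t).
# Toy index-1 example:  x' = -y,  0 = y - x,  so  x(t) = x(0) * exp(-t).
def f(x, y, t): return -y
def g(x, y, t): return y - x

def implicit_euler(x0, y0, t0, t1, n_steps):
    h = (t1 - t0) / n_steps
    x, y, t = x0, y0, t0
    for _ in range(n_steps):
        t_next = t + h
        # Solve the coupled nonlinear system for (x_next, y_next):
        def residual(z):
            xn, yn = z
            return [xn - x - h * f(xn, yn, t_next), g(xn, yn, t_next)]
        x, y = fsolve(residual, [x, y])
        t = t_next
    return x, y

x_end, _ = implicit_euler(1.0, 1.0, 0.0, 1.0, 1000)
print(x_end, np.exp(-1.0))  # ~0.3679 for both
</syntaxhighlight>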
=== Tractability ===
Several measures of the tractability of DAEs by numerical methods have been developed, such as the differentiation index, perturbation index, tractability index, geometric index, and Kronecker index.
== Structural analysis for DAEs ==
We use the {\displaystyle \Sigma }-method to analyze a DAE. We construct for the DAE a signature matrix {\displaystyle \Sigma =(\sigma _{i,j})}, where each row corresponds to an equation {\displaystyle f_{i}} and each column corresponds to a variable {\displaystyle x_{j}}. The entry in position {\displaystyle (i,j)} is {\displaystyle \sigma _{i,j}}, which denotes the highest order of derivative to which {\displaystyle x_{j}} occurs in {\displaystyle f_{i}}, or {\displaystyle -\infty } if {\displaystyle x_{j}} does not occur in {\displaystyle f_{i}}.
For the pendulum DAE above, the variables are {\displaystyle (x_{1},x_{2},x_{3},x_{4},x_{5})=(x,y,u,v,\lambda )}. The corresponding signature matrix is
{\displaystyle \Sigma ={\begin{bmatrix}1&-&0^{\bullet }&-&-\\-&1^{\bullet }&-&0&-\\0&-&1&-&0^{\bullet }\\-&0&-&1^{\bullet }&0\\0^{\bullet }&0&-&-&-\end{bmatrix}}}
== See also ==
Algebraic differential equation, a different concept despite the similar name
Delay differential equation
Partial differential algebraic equation
Modelica Language
== References ==
== Further reading ==
== External links ==
http://www.scholarpedia.org/article/Differential-algebraic_equations
In mathematics, the method of undetermined coefficients is an approach to finding a particular solution to certain nonhomogeneous ordinary differential equations and recurrence relations. It is closely related to the annihilator method, but instead of using a particular kind of differential operator (the annihilator) in order to find the best possible form of the particular solution, an ansatz or 'guess' is made as to the appropriate form, which is then tested by differentiating the resulting equation. For complex equations, the annihilator method or variation of parameters is less time-consuming to perform.
Undetermined coefficients is not as general a method as variation of parameters, since it only works for differential equations that follow certain forms.
== Description of the method ==
Consider a linear non-homogeneous ordinary differential equation of the form
{\displaystyle \sum _{i=0}^{n}c_{i}y^{(i)}+y^{(n+1)}=g(x)}
where {\displaystyle y^{(i)}} denotes the i-th derivative of {\displaystyle y}, and {\displaystyle c_{i}} denotes a function of {\displaystyle x}.
The method of undetermined coefficients provides a straightforward method of obtaining the solution to this ODE when two criteria are met:
{\displaystyle c_{i}} are constants.
g(x) is a constant, a polynomial function, an exponential function {\displaystyle e^{\alpha x}}, sine or cosine functions {\displaystyle \sin {\beta x}} or {\displaystyle \cos {\beta x}}, or finite sums and products of these functions ({\displaystyle {\alpha }}, {\displaystyle {\beta }} constants).
The method consists of finding the general homogeneous solution {\displaystyle y_{c}} for the complementary linear homogeneous differential equation
{\displaystyle \sum _{i=0}^{n}c_{i}y^{(i)}+y^{(n+1)}=0,}
and a particular integral {\displaystyle y_{p}} of the linear non-homogeneous ordinary differential equation based on {\displaystyle g(x)}. Then the general solution {\displaystyle y} to the linear non-homogeneous ordinary differential equation would be
{\displaystyle y=y_{c}+y_{p}.}
If {\displaystyle g(x)} consists of the sum of two functions {\displaystyle h(x)+w(x)} and we say that {\displaystyle y_{p_{1}}} is the solution based on {\displaystyle h(x)} and {\displaystyle y_{p_{2}}} the solution based on {\displaystyle w(x)}, then, using a superposition principle, we can say that the particular integral {\displaystyle y_{p}} is
{\displaystyle y_{p}=y_{p_{1}}+y_{p_{2}}.}
== Typical forms of the particular integral ==
In order to find the particular integral, we need to 'guess' its form, with some coefficients left as variables to be solved for. The guess mirrors the form of the nonhomogeneous term g(x); typical correspondences are:
{\displaystyle g(x)=ke^{\alpha x}\ \Rightarrow \ y_{p}=Ce^{\alpha x}}
{\displaystyle g(x)=kx^{n}\ \Rightarrow \ y_{p}=C_{n}x^{n}+C_{n-1}x^{n-1}+\cdots +C_{0}}
{\displaystyle g(x)=k\cos \beta x{\text{ or }}k\sin \beta x\ \Rightarrow \ y_{p}=K\cos \beta x+M\sin \beta x}
If a term in the above particular integral for y appears in the homogeneous solution, it is necessary to multiply by a sufficiently large power of x in order to make the solution independent. If the function of x is a sum of terms of the above forms, the particular integral can be guessed using a sum of the corresponding terms for y.
== Examples ==
=== Example 1 ===
Find a particular integral of the equation
{\displaystyle y''+y=t\cos t.}
The right side t cos t has the form {\displaystyle P_{n}e^{\alpha t}\cos {\beta t}} with n = 2, α = 0, and β = 1.
Since α + iβ = i is a simple root of the characteristic equation {\displaystyle \lambda ^{2}+1=0}, we should try a particular integral of the form
{\displaystyle {\begin{aligned}y_{p}&=t\left[F_{1}(t)e^{\alpha t}\cos {\beta t}+G_{1}(t)e^{\alpha t}\sin {\beta t}\right]\\&=t\left[F_{1}(t)\cos t+G_{1}(t)\sin t\right]\\&=t\left[\left(A_{0}t+A_{1}\right)\cos t+\left(B_{0}t+B_{1}\right)\sin t\right]\\&=\left(A_{0}t^{2}+A_{1}t\right)\cos t+\left(B_{0}t^{2}+B_{1}t\right)\sin t.\end{aligned}}}
Substituting yp into the differential equation, we have the identity
{\displaystyle {\begin{aligned}t\cos t&=y_{p}''+y_{p}\\&=\left[\left(A_{0}t^{2}+A_{1}t\right)\cos t+\left(B_{0}t^{2}+B_{1}t\right)\sin t\right]''+\left[\left(A_{0}t^{2}+A_{1}t\right)\cos t+\left(B_{0}t^{2}+B_{1}t\right)\sin t\right]\\&=\left[2A_{0}\cos t+2\left(2A_{0}t+A_{1}\right)(-\sin t)+\left(A_{0}t^{2}+A_{1}t\right)(-\cos t)+2B_{0}\sin t+2\left(2B_{0}t+B_{1}\right)\cos t+\left(B_{0}t^{2}+B_{1}t\right)(-\sin t)\right]\\&\qquad +\left[\left(A_{0}t^{2}+A_{1}t\right)\cos t+\left(B_{0}t^{2}+B_{1}t\right)\sin t\right]\\&=[4B_{0}t+(2A_{0}+2B_{1})]\cos t+[-4A_{0}t+(-2A_{1}+2B_{0})]\sin t.\end{aligned}}}
Comparing both sides, we have
{\displaystyle {\begin{cases}1=4B_{0}\\0=2A_{0}+2B_{1}\\0=-4A_{0}\\0=-2A_{1}+2B_{0}\end{cases}}}
which has the solution
{\displaystyle A_{0}=0,\quad A_{1}=B_{0}={\frac {1}{4}},\quad B_{1}=0.}
We then have a particular integral
{\displaystyle y_{p}={\frac {1}{4}}t\cos t+{\frac {1}{4}}t^{2}\sin t.}
=== Example 2 ===
Consider the following linear nonhomogeneous differential equation:
{\displaystyle {\frac {dy}{dx}}=y+e^{x}.}
This is like the first example above, except that the nonhomogeneous part ({\displaystyle e^{x}}) is not linearly independent of the general solution of the homogeneous part ({\displaystyle c_{1}e^{x}}); as a result, we have to multiply our guess by a sufficiently large power of x to make it linearly independent.
Here our guess becomes:
{\displaystyle y_{p}=Axe^{x}.}
By substituting this function and its derivative into the differential equation, one can solve for A:
{\displaystyle {\frac {d}{dx}}\left(Axe^{x}\right)=Axe^{x}+e^{x}}
{\displaystyle Axe^{x}+Ae^{x}=Axe^{x}+e^{x}}
{\displaystyle A=1.}
So, the general solution to this differential equation is:
{\displaystyle y=c_{1}e^{x}+xe^{x}.}
=== Example 3 ===
Find the general solution of the equation:
{\displaystyle {\frac {dy}{dt}}=t^{2}-y}
{\displaystyle t^{2}} is a polynomial of degree 2, so we look for a solution using the same form,
{\displaystyle y_{p}=At^{2}+Bt+C,}
Plugging this particular function into the original equation yields,
{\displaystyle 2At+B=t^{2}-(At^{2}+Bt+C),}
{\displaystyle 2At+B=(1-A)t^{2}-Bt-C,}
{\displaystyle (A-1)t^{2}+(2A+B)t+(B+C)=0.}
which gives:
{\displaystyle A-1=0,\quad 2A+B=0,\quad B+C=0.}
Solving for constants we get:
{\displaystyle y_{p}=t^{2}-2t+2}
To solve for the general solution,
{\displaystyle y=y_{p}+y_{c}}
where {\displaystyle y_{c}} is the homogeneous solution {\displaystyle y_{c}=c_{1}e^{-t}}; therefore, the general solution is:
{\displaystyle y=t^{2}-2t+2+c_{1}e^{-t}}
== References ==
In mathematics, and more specifically in analysis, a holonomic function is a smooth function of several variables that is a solution of a system of linear homogeneous differential equations with polynomial coefficients and satisfies a suitable dimension condition in terms of D-modules theory. More precisely, a holonomic function is an element of a holonomic module of smooth functions. Holonomic functions can also be described as differentiably finite functions, also known as D-finite functions. When a power series in the variables is the Taylor expansion of a holonomic function, the sequence of its coefficients, in one or several indices, is also called holonomic. Holonomic sequences are also called P-recursive sequences: they are defined recursively by multivariate recurrences satisfied by the whole sequence and by suitable specializations of it. The situation simplifies in the univariate case: any univariate sequence that satisfies a linear homogeneous recurrence relation with polynomial coefficients, or equivalently a linear homogeneous difference equation with polynomial coefficients, is holonomic.
== Holonomic functions and sequences in one variable ==
=== Definitions ===
Let
{\displaystyle \mathbb {K} } be a field of characteristic 0 (for example, {\displaystyle \mathbb {K} =\mathbb {Q} } or {\displaystyle \mathbb {K} =\mathbb {C} }).
A function
{\displaystyle f=f(x)} is called D-finite (or holonomic) if there exist polynomials {\displaystyle 0\neq a_{r}(x),a_{r-1}(x),\ldots ,a_{0}(x)\in \mathbb {K} [x]} such that
{\displaystyle a_{r}(x)f^{(r)}(x)+a_{r-1}(x)f^{(r-1)}(x)+\cdots +a_{1}(x)f'(x)+a_{0}(x)f(x)=0}
holds for all x. This can also be written as {\displaystyle Af=0} where {\displaystyle A=\sum _{k=0}^{r}a_{k}D_{x}^{k}} and {\displaystyle D_{x}} is the differential operator that maps {\displaystyle f(x)} to {\displaystyle f'(x)}. {\displaystyle A} is called an annihilating operator of f (the annihilating operators of {\displaystyle f} form an ideal in the ring {\displaystyle \mathbb {K} [x][D_{x}]}, called the annihilator of {\displaystyle f}). The quantity r is called the order of the annihilating operator. By extension, the holonomic function f is said to be of order r when an annihilating operator of such order exists.
A sequence
{\displaystyle c=c_{0},c_{1},\ldots } is called P-recursive (or holonomic) if there exist polynomials {\displaystyle a_{r}(n),a_{r-1}(n),\ldots ,a_{0}(n)\in \mathbb {K} [n]} such that
{\displaystyle a_{r}(n)c_{n+r}+a_{r-1}(n)c_{n+r-1}+\cdots +a_{0}(n)c_{n}=0}
holds for all n. This can also be written as {\displaystyle Ac=0} where {\displaystyle A=\sum _{k=0}^{r}a_{k}S_{n}^{k}} and {\displaystyle S_{n}} is the shift operator that maps {\displaystyle c_{0},c_{1},\ldots } to {\displaystyle c_{1},c_{2},\ldots }. {\displaystyle A} is called an annihilating operator of c (the annihilating operators of {\displaystyle c} form an ideal in the ring {\displaystyle \mathbb {K} [n][S_{n}]}, called the annihilator of {\displaystyle c}). The quantity r is called the order of the annihilating operator. By extension, the holonomic sequence c is said to be of order r when an annihilating operator of such order exists.
Holonomic functions are precisely the generating functions of holonomic sequences: if
{\displaystyle f(x)} is holonomic, then the coefficients {\displaystyle c_{n}} in the power series expansion
{\displaystyle f(x)=\sum _{n=0}^{\infty }c_{n}x^{n}}
form a holonomic sequence. Conversely, for a given holonomic sequence {\displaystyle c_{n}}, the function defined by the above sum is holonomic (this is true in the sense of formal power series, even if the sum has a zero radius of convergence).
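This finite description (a recurrence with polynomial coefficients plus initial values) is directly computable. As a sketch, the following Python code unrolls the order-1 P-recurrence (n + 2)c_{n+1} − (4n + 2)c_n = 0 of the Catalan numbers, a standard holonomic sequence, and checks the terms against the closed form (the function name is ours):

<syntaxhighlight lang="python">
from fractions import Fraction
from math import comb

def catalan_via_recurrence(n_terms):
    # (n + 2) * C[n+1] - (4n + 2) * C[n] = 0,  with C[0] = 1
    c = [Fraction(1)]
    for n in range(n_terms - 1):
        c.append(Fraction(4 * n + 2, n + 2) * c[-1])
    return [int(v) for v in c]

cs = catalan_via_recurrence(10)
assert cs == [comb(2 * n, n) // (n + 1) for n in range(10)]
print(cs)  # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
</syntaxhighlight>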
=== Closure properties ===
Holonomic functions (or sequences) satisfy several closure properties. In particular, holonomic functions (or sequences) form a ring. They are not closed under division, however, and therefore do not form a field.
If
{\displaystyle f(x)=\sum _{n=0}^{\infty }f_{n}x^{n}} and {\displaystyle g(x)=\sum _{n=0}^{\infty }g_{n}x^{n}} are holonomic functions, then the following functions are also holonomic:
{\displaystyle h(x)=\alpha f(x)+\beta g(x)}, where {\displaystyle \alpha } and {\displaystyle \beta } are constants
{\displaystyle h(x)=f(x)g(x)} (the Cauchy product of the sequences)
{\displaystyle h(x)=\sum _{n=0}^{\infty }f_{n}g_{n}x^{n}} (the Hadamard product of the sequences)
{\displaystyle h(x)=\int _{0}^{x}f(t)\,dt}
{\displaystyle h(x)=\sum _{n=0}^{\infty }(\sum _{k=0}^{n}f_{k})x^{n}}
{\displaystyle h(x)=f(a(x))}, where {\displaystyle a(x)} is any algebraic function. However, {\displaystyle a(f(x))} is generally not holonomic.
A crucial property of holonomic functions is that the closure properties are effective: given annihilating operators for
{\displaystyle f} and {\displaystyle g}, an annihilating operator for {\displaystyle h} as defined using any of the above operations can be computed explicitly.
=== Examples of holonomic functions and sequences ===
Examples of holonomic functions include:
all algebraic functions, including polynomials and rational functions
the sine and cosine functions (but not tangent, cotangent, secant, or cosecant)
the hyperbolic sine and cosine functions (but not hyperbolic tangent, cotangent, secant, or cosecant)
exponential functions and logarithms (to any base)
the generalized hypergeometric function
{\displaystyle {}_{p}F_{q}(a_{1},\ldots ,a_{p},b_{1},\ldots ,b_{q},x)}, considered as a function of {\displaystyle x} with all the parameters {\displaystyle a_{i}}, {\displaystyle b_{i}} held fixed
the error function
{\displaystyle \operatorname {erf} (x)}
the Bessel functions
{\displaystyle J_{n}(x)}, {\displaystyle Y_{n}(x)}, {\displaystyle I_{n}(x)}, {\displaystyle K_{n}(x)}
the Airy functions
{\displaystyle \operatorname {Ai} (x)}, {\displaystyle \operatorname {Bi} (x)}
The class of holonomic functions is a strict superset of the class of hypergeometric functions. Examples of special functions that are holonomic but not hypergeometric include the Heun functions.
Examples of holonomic sequences include:
the sequence of Fibonacci numbers
{\displaystyle F_{n}}, and more generally, all constant-recursive sequences
the sequence of factorials
{\displaystyle n!}
the sequence of binomial coefficients
{\displaystyle {n \choose k}} (as functions of either n or k)
the sequence of harmonic numbers
{\displaystyle H_{n}=\sum _{k=1}^{n}{\frac {1}{k}}}, and more generally {\displaystyle H_{n,m}=\sum _{k=1}^{n}{\frac {1}{k^{m}}}} for any integer m
the sequence of Catalan numbers
the sequence of Motzkin numbers
the enumeration of derangements.
Hypergeometric functions, Bessel functions, and classical orthogonal polynomials, in addition to being holonomic functions of their variable, are also holonomic sequences with respect to their parameters. For example, the Bessel functions
{\displaystyle J_{n}} and {\displaystyle Y_{n}} satisfy the second-order linear recurrence {\displaystyle x(f_{n+1}+f_{n-1})=2nf_{n}}.
=== Examples of nonholonomic functions and sequences ===
Examples of nonholonomic functions include:
the function
{\displaystyle {\frac {x}{e^{x}-1}}}
the function tan(x) + sec(x)
the quotient of two holonomic functions is generally not holonomic.
Examples of nonholonomic sequences include:
the Bernoulli numbers
the numbers of alternating permutations
the numbers of integer partitions
the numbers
{\displaystyle \log(n)}
the numbers
{\displaystyle n^{\alpha }} where {\displaystyle \alpha \not \in \mathbb {Z} }
the prime numbers
the enumerations of irreducible and connected permutations.
== Algorithms and software ==
Holonomic functions are a powerful tool in computer algebra. A holonomic function or sequence can be represented by a finite amount of data, namely an annihilating operator and a finite set of initial values, and the closure properties allow carrying out operations such as equality testing, summation and integration in an algorithmic fashion. In recent years, these techniques have allowed giving automated proofs of a large number of special function and combinatorial identities.
Moreover, there exist fast algorithms for evaluating holonomic functions to arbitrary precision at any point in the complex plane, and for numerically computing any entry in a holonomic sequence.
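As a small illustration of such evaluation, an annihilating operator yields a recurrence for the Taylor coefficients, which can then be summed at a point. The sketch below does this for the error function, whose annihilating operator f''(x) + 2x f'(x) = 0 forces c_{n+2} = −2n c_n/((n + 2)(n + 1)) with c_0 = 0 and c_1 = 2/√π; the truncation order is an arbitrary choice:

<syntaxhighlight lang="python">
from math import erf, pi, sqrt

def erf_via_recurrence(x, n_terms=40):
    c = 2.0 / sqrt(pi)          # current odd coefficient, starting at c_1
    total, power = 0.0, x
    for k in range(n_terms):
        total += c * power
        n = 2 * k + 1           # index of the coefficient just used
        c *= -2.0 * n / ((n + 2) * (n + 1))   # recurrence: c_{n+2} from c_n
        power *= x * x
    return total

print(erf_via_recurrence(0.5), erf(0.5))  # both ~0.5204998778
</syntaxhighlight>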
Software for working with holonomic functions includes:
The HolonomicFunctions package for Mathematica, developed by Christoph Koutschan, which supports computing closure properties and proving identities for univariate and multivariate holonomic functions
The algolib library for Maple, which includes the following packages:
gfun, developed by Bruno Salvy, Paul Zimmermann and Eithne Murray, for univariate closure properties and proving
mgfun, developed by Frédéric Chyzak, for multivariate closure properties and proving
numgfun, developed by Marc Mezzarobba, for numerical evaluation
== See also ==
Dynamic Dictionary of Mathematical Functions, online software based on holonomic functions for automatically studying many classical and special functions (evaluation at a point, Taylor series and asymptotic expansion to any user-given precision, differential equation, recurrence for the coefficients of the Taylor series, derivative, indefinite integral, plotting, ...)
== Notes ==
== References ==
In mathematical analysis, the Dirac delta function (or δ distribution), also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as
{\displaystyle \delta (x)={\begin{cases}0,&x\neq 0\\{\infty },&x=0\end{cases}}}
such that
{\displaystyle \int _{-\infty }^{\infty }\delta (x)\,dx=1.}
Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions.
The delta function was introduced by physicist Paul Dirac, and has since been applied routinely in physics and engineering to model point masses and instantaneous impulses. It is called the delta function because it is a continuous analogue of the Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1. The mathematical rigor of the delta function was disputed until Laurent Schwartz developed the theory of distributions, where it is defined as a linear form acting on functions.
== Motivation and overview ==
The graph of the Dirac delta is usually thought of as following the whole x-axis and the positive y-axis. The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point. For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a Dirac delta. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the ball, by only considering the total impulse of the collision, without a detailed model of all of the elastic energy transfer at subatomic levels (for instance).
To be specific, suppose that a billiard ball is at rest. At time
{\displaystyle t=0} it is struck by another ball, imparting it with a momentum P, with units kg⋅m⋅s−1. The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is P δ(t); the units of δ(t) are s−1.
To model this situation more rigorously, suppose that the force instead is uniformly distributed over a small time interval
{\displaystyle \Delta t=[0,T]}. That is,
{\displaystyle F_{\Delta t}(t)={\begin{cases}P/\Delta t&0<t\leq T,\\0&{\text{otherwise}}.\end{cases}}}
Then the momentum at any time t is found by integration:
{\displaystyle p(t)=\int _{0}^{t}F_{\Delta t}(\tau )\,d\tau ={\begin{cases}P&t\geq T\\P\,t/\Delta t&0\leq t\leq T\\0&{\text{otherwise.}}\end{cases}}}
Now, the model situation of an instantaneous transfer of momentum requires taking the limit as Δt → 0, giving a result everywhere except at 0:
{\displaystyle p(t)={\begin{cases}P&t>0\\0&t<0.\end{cases}}}
Here the functions
{\displaystyle F_{\Delta t}} are thought of as useful approximations to the idea of instantaneous transfer of momentum.
The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of pointwise convergence)
{\textstyle \lim _{\Delta t\to 0^{+}}F_{\Delta t}} is zero everywhere but a single point, where it is infinite. To make proper sense of the Dirac delta, we should instead insist that the property
{\displaystyle \int _{-\infty }^{\infty }F_{\Delta t}(t)\,dt=P,}
which holds for all {\displaystyle \Delta t>0}, should continue to hold in the limit. So, in the equation
{\textstyle F(t)=P\,\delta (t)=\lim _{\Delta t\to 0}F_{\Delta t}(t)}, it is understood that the limit is always taken outside the integral.
In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.
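This weak limit can be checked numerically. The sketch below is a minimal illustration, assuming NumPy is available; the cosine test function and the grid are arbitrary choices. It pairs Gaussians of shrinking width with the test function and watches the integrals approach f(0):

```python
import numpy as np

def gaussian(x, eps):
    """Nascent delta: a normalized Gaussian of standard deviation eps."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = np.cos  # an arbitrary smooth test function; f(0) = 1

x = np.linspace(-10, 10, 2_000_001)
for eps in (1.0, 0.1, 0.01):
    # The pairing of eta_eps with f, approximated by the trapezoidal rule.
    val = np.trapz(gaussian(x, eps) * f(x), x)
    print(f"eps={eps:5.2f}  integral={val:.6f}")  # tends to f(0) = 1
```

For f = cos the integral equals e^(−ε²/2) exactly, so the printed values tend to f(0) = 1 as ε → 0.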
The Dirac delta is not truly a function, at least not a usual one with domain and range in real numbers. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right requires measure theory or the theory of distributions.
== History ==
In physics, the Dirac delta function was popularized by Paul Dirac in his book The Principles of Quantum Mechanics published in 1930. However, Oliver Heaviside, 35 years before Dirac, described an impulsive function called the Heaviside step function for purposes and with properties analogous to Dirac's work. Even earlier, several mathematicians and physicists used limits of sharply peaked functions in derivations.
An infinitesimal formula for an infinitely tall, unit impulse delta function (infinitesimal version of Cauchy distribution) explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation as did Gustav Kirchhoff somewhat later. Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source. The Dirac delta function as such was introduced by Paul Dirac in his 1927 paper The Physical Interpretation of the Quantum Dynamics. He called it the "delta function" since he used it as a continuum analogue of the discrete Kronecker delta.
Mathematicians refer to the same concept as a distribution rather than a function.
Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form:
{\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }d\alpha \,f(\alpha )\int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ ,}
which is tantamount to the introduction of the δ-function in the form:
{\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ .}
Later, Augustin Cauchy expressed the theorem using exponentials:
{\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp.}
Cauchy pointed out that in some circumstances the order of integration is significant in this result (contrast Fubini's theorem).
As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the δ-function as
{\displaystyle {\begin{aligned}f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp\\[4pt]&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\left(\int _{-\infty }^{\infty }e^{ipx}e^{-ip\alpha }\,dp\right)f(\alpha )\,d\alpha =\int _{-\infty }^{\infty }\delta (x-\alpha )f(\alpha )\,d\alpha ,\end{aligned}}}
where the δ-function is expressed as
{\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ip(x-\alpha )}\,dp\ .}
A rigorous interpretation of the exponential form and the various limitations upon the function f necessary for its application extended over several centuries. The problems with a classical interpretation are explained as follows:
The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) to ensure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed and this removed many obstacles.
Further developments included generalization of the Fourier integral, "beginning with Plancherel's pathbreaking L2-theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. Schwartz's theory of distributions (1945) ...", and leading to the formal development of the Dirac delta function.
== Definitions ==
The Dirac delta function {\displaystyle \delta (x)} can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,
{\displaystyle \delta (x)\simeq {\begin{cases}+\infty ,&x=0\\0,&x\neq 0\end{cases}}}
and which is also constrained to satisfy the identity
{\displaystyle \int _{-\infty }^{\infty }\delta (x)\,dx=1.}
This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no extended real number valued function defined on the real numbers has these properties.
=== As a measure ===
One way to rigorously capture the notion of the Dirac delta function is to define a measure, called Dirac measure, which accepts a subset A of the real line R as an argument, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise. If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (dx)=f(0)}
for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure—in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative (with respect to Lebesgue measure)—no true function for which the property
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (x)\,dx=f(0)}
holds. As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral.
As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function.
{\displaystyle H(x)={\begin{cases}1&{\text{if }}x\geq 0\\0&{\text{if }}x<0.\end{cases}}}
This means that H(x) is the integral of the cumulative indicator function 1(−∞, x] with respect to the measure δ; to wit,
{\displaystyle H(x)=\int _{\mathbf {R} }\mathbf {1} _{(-\infty ,x]}(t)\,\delta (dt)=\delta \!\left((-\infty ,x]\right),}
the latter being the measure of this interval. Thus in particular the integration of the delta function against a continuous function can be properly understood as a Riemann–Stieltjes integral:
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (dx)=\int _{-\infty }^{\infty }f(x)\,dH(x).}
All higher moments of δ are zero. In particular, its characteristic function and moment generating function are both equal to one.
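The following toy sketch mirrors the definitions above; it is not a full measure-theoretic implementation, and the encoding of sets as membership predicates is an assumption made purely for illustration. It exhibits the Dirac measure of a set, integration against the point mass, and the unit-step cumulative distribution function:

```python
def dirac_measure(A):
    """Dirac measure at 0: delta(A) = 1 if 0 is in A, else 0.
    Here a 'set' A is represented by a membership predicate."""
    return 1 if A(0.0) else 0

def integrate_against_delta(f):
    """Integration of a continuous f against the point mass at 0
    reduces to evaluation at the point carrying the mass."""
    return f(0.0)

print(dirac_measure(lambda t: -1 < t < 1))          # 1: (-1, 1) contains 0
print(dirac_measure(lambda t: t > 2))               # 0: (2, inf) misses the mass
print(integrate_against_delta(lambda t: t**2 + 3))  # 3.0 = f(0)

# The cumulative distribution function is the unit step H(x) = delta((-inf, x]).
H = lambda x: dirac_measure(lambda t: t <= x)
print([H(x) for x in (-2.0, 0.0, 2.0)])             # [0, 1, 1]
```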
=== As a distribution ===
In the theory of distributions, a generalized function is considered not a function in itself but only through how it affects other functions when "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function is against a sufficiently "good" test function φ. Test functions are also known as bump functions. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral.
A typical space of test functions consists of all smooth functions on R with compact support that have as many derivatives as required. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by
{\displaystyle \delta [\varphi ]=\varphi (0)}
for every test function φ.
For δ to be properly a distribution, it must be continuous in a suitable topology on the space of test functions. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N there is an integer MN and a constant CN such that for every test function φ, one has the inequality
{\displaystyle \left|S[\varphi ]\right|\leq C_{N}\sum _{k=0}^{M_{N}}\sup _{x\in [-N,N]}\left|\varphi ^{(k)}(x)\right|}
where sup represents the supremum. With the δ distribution, one has such an inequality (with CN = 1) with MN = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}).
The delta distribution can also be defined in several equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that for every test function φ, one has
{\displaystyle \delta [\varphi ]=-\int _{-\infty }^{\infty }\varphi '(x)\,H(x)\,dx.}
Intuitively, if integration by parts were permitted, then the latter integral should simplify to
{\displaystyle \int _{-\infty }^{\infty }\varphi (x)\,H'(x)\,dx=\int _{-\infty }^{\infty }\varphi (x)\,\delta (x)\,dx,}
and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case, one does have
{\displaystyle -\int _{-\infty }^{\infty }\varphi '(x)\,H(x)\,dx=\int _{-\infty }^{\infty }\varphi (x)\,dH(x).}
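This integration-by-parts identity is easy to test numerically. A quick check, assuming NumPy and using a rapidly decaying (rather than compactly supported) test function for convenience:

```python
import numpy as np

phi = lambda x: np.exp(-x**2)             # rapidly decaying test function
dphi = lambda x: -2 * x * np.exp(-x**2)   # its derivative
H = lambda x: (x >= 0).astype(float)      # Heaviside step function

x = np.linspace(-20, 20, 400_001)
lhs = -np.trapz(dphi(x) * H(x), x)        # -∫ φ'(x) H(x) dx
print(lhs, "≈", phi(0.0))                 # both equal 1: delta is H'
```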
In the context of measure theory, the Dirac measure gives rise to a distribution by integration. Conversely, equation (1) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.
Generally, when the term Dirac delta function is used, it is in the sense of distributions rather than measures, the Dirac measure being among several terms for the corresponding notion in measure theory. Some sources may also use the term Dirac delta distribution.
=== Generalizations ===
The delta function can be defined in n-dimensional Euclidean space Rn as the measure such that
{\displaystyle \int _{\mathbf {R} ^{n}}f(\mathbf {x} )\,\delta (d\mathbf {x} )=f(\mathbf {0} )}
for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x1, x2, ..., xn), one has
{\displaystyle \delta (\mathbf {x} )=\delta (x_{1})\,\delta (x_{2})\cdots \delta (x_{n}).}
The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case. However, despite widespread use in engineering contexts, (2) should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances.
The notion of a Dirac measure makes sense on any set. Thus if X is a set, x0 ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by
{\displaystyle \delta _{x_{0}}(A)={\begin{cases}1&{\text{if }}x_{0}\in A\\0&{\text{if }}x_{0}\notin A\end{cases}}}
is the delta measure or unit mass concentrated at x0.
Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold M centered at the point x0 ∈ M is defined as the following distribution:
{\displaystyle \delta _{x_{0}}[\varphi ]=\varphi (x_{0})}
for all compactly supported smooth real-valued functions φ on M. A common special case of this construction is the case in which M is an open set in the Euclidean space Rn.
On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral (3) on compactly supported continuous functions φ. At this level of generality, calculus as such is no longer possible, however a variety of techniques from abstract analysis are available. For instance, the mapping
{\displaystyle x_{0}\mapsto \delta _{x_{0}}}
is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X.
== Properties ==
=== Scaling and symmetry ===
The delta function satisfies the following scaling property for a non-zero scalar α:
{\displaystyle \int _{-\infty }^{\infty }\delta (\alpha x)\,dx=\int _{-\infty }^{\infty }\delta (u)\,{\frac {du}{|\alpha |}}={\frac {1}{|\alpha |}}}
and so
{\displaystyle \delta (\alpha x)={\frac {\delta (x)}{|\alpha |}}.}
Scaling property proof:
{\displaystyle \int \limits _{-\infty }^{\infty }dx\ g(x)\delta (ax)={\frac {1}{a}}\int \limits _{-\infty }^{\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{a}}g(0),}
where a change of variable x′ = ax is used. If a is negative, i.e., a = −|a|, then
{\displaystyle \int \limits _{-\infty }^{\infty }dx\ g(x)\delta (ax)={\frac {1}{-\left\vert a\right\vert }}\int \limits _{\infty }^{-\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{\left\vert a\right\vert }}\int \limits _{-\infty }^{\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{\left\vert a\right\vert }}g(0).}
Thus, {\displaystyle \delta (ax)={\frac {1}{\left\vert a\right\vert }}\delta (x)}.
In particular, the delta function is an even distribution (symmetry), in the sense that
{\displaystyle \delta (-x)=\delta (x),}
which is homogeneous of degree −1.
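A numerical sanity check of the scaling property, assuming NumPy and using a narrow Gaussian as a stand-in for δ (the test function g and the width are arbitrary choices):

```python
import numpy as np

def eta(x, eps=1e-3):
    """Nascent delta: normalized Gaussian of width eps."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

g = lambda x: np.cos(x) + 2.0          # test function, g(0) = 3
x = np.linspace(-5, 5, 4_000_001)

for a in (2.0, -3.0):
    val = np.trapz(g(x) * eta(a * x), x)   # ∫ g(x) δ(ax) dx, approximately
    print(f"a={a:+.0f}: {val:.5f}  vs  g(0)/|a| = {3/abs(a):.5f}")
```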
=== Algebraic properties ===
The distributional product of δ with x is equal to zero:
{\displaystyle x\,\delta (x)=0.}
More generally,
{\displaystyle (x-a)^{n}\delta (x-a)=0}
for all positive integers {\displaystyle n}.
Conversely, if xf(x) = xg(x), where f and g are distributions, then
{\displaystyle f(x)=g(x)+c\delta (x)}
for some constant c.
=== Translation ===
The integral of any function multiplied by the time-delayed Dirac delta
{\displaystyle \delta _{T}(t)=\delta (t-T)}
is
{\displaystyle \int _{-\infty }^{\infty }f(t)\,\delta (t-T)\,dt=f(T).}
This is sometimes referred to as the sifting property or the sampling property. The delta function is said to "sift out" the value of f(t) at t = T.
It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount:
{\displaystyle {\begin{aligned}(f*\delta _{T})(t)\ &{\stackrel {\mathrm {def} }{=}}\ \int _{-\infty }^{\infty }f(\tau )\,\delta (t-T-\tau )\,d\tau \\&=\int _{-\infty }^{\infty }f(\tau )\,\delta (\tau -(t-T))\,d\tau \qquad {\text{since}}~\delta (-x)=\delta (x)~~{\text{by (4)}}\\&=f(t-T).\end{aligned}}}
The sifting property holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense)
{\displaystyle \int _{-\infty }^{\infty }\delta (\xi -x)\delta (x-\eta )\,dx=\delta (\eta -\xi ).}
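The delaying effect of convolution with δ_T can be seen with a discrete stand-in for the delta, namely a single bin of unit area. A minimal sketch assuming NumPy (the signal, delay, and step size are arbitrary choices):

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
f = np.sin(t)                      # a sample signal

# Discrete stand-in for delta(t - T): one spike of unit area at t = T.
T = 2.0
spike = np.zeros_like(t)
spike[int(T / dt)] = 1.0 / dt      # height 1/dt over one bin of width dt

shifted = np.convolve(f, spike)[:len(t)] * dt   # (f * delta_T)(t)
# Compare against the exact delay f(t - T) at a point past the delay:
k = int(3.0 / dt)
print(shifted[k], "≈", np.sin(t[k] - T))
```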
=== Composition with a function ===
More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds (where
{\displaystyle u=g(x)}), that
{\displaystyle \int _{\mathbb {R} }\delta {\bigl (}g(x){\bigr )}f{\bigl (}g(x){\bigr )}\left|g'(x)\right|dx=\int _{g(\mathbb {R} )}\delta (u)\,f(u)\,du}
provided that g is a continuously differentiable function with g′ nowhere zero. That is, there is a unique way to assign meaning to the distribution
{\displaystyle \delta \circ g}
so that this identity holds for all compactly supported test functions f. Therefore, the domain must be broken up to exclude the g′ = 0 point. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise if g has a real root at x0, then
{\displaystyle \delta (g(x))={\frac {\delta (x-x_{0})}{|g'(x_{0})|}}.}
It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by
{\displaystyle \delta (g(x))=\sum _{i}{\frac {\delta (x-x_{i})}{|g'(x_{i})|}}}
where the sum extends over all roots of g(x), which are assumed to be simple. Thus, for example
{\displaystyle \delta \left(x^{2}-\alpha ^{2}\right)={\frac {1}{2|\alpha |}}{\Big [}\delta \left(x+\alpha \right)+\delta \left(x-\alpha \right){\Big ]}.}
In the integral form, the generalized scaling property may be written as
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (g(x))\,dx=\sum _{i}{\frac {f(x_{i})}{|g'(x_{i})|}}.}
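The root-sum formula can be verified numerically by replacing δ with a narrow Gaussian. A sketch assuming NumPy, with g(x) = x² − α² as in the example above (the test function f is an arbitrary choice):

```python
import numpy as np

def eta(x, eps=1e-3):
    """Nascent delta: normalized Gaussian of width eps."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

alpha = 1.5
g = lambda x: x**2 - alpha**2      # simple roots at ±alpha, |g'(±alpha)| = 2*alpha
f = lambda x: np.exp(np.sin(x))    # arbitrary smooth test function

x = np.linspace(-4, 4, 4_000_001)
lhs = np.trapz(f(x) * eta(g(x)), x)            # ∫ f(x) δ(g(x)) dx, approximately
rhs = (f(alpha) + f(-alpha)) / (2 * alpha)     # Σ f(x_i)/|g'(x_i)|
print(lhs, "≈", rhs)
```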
=== Indefinite integral ===
For a constant
{\displaystyle a\in \mathbb {R} }
and a "well-behaved" arbitrary real-valued function y(x),
{\displaystyle \int y(x)\,\delta (x-a)\,dx=y(a)\,H(x-a)+c,}
where H(x) is the Heaviside step function and c is an integration constant.
=== Properties in n dimensions ===
The delta distribution in an n-dimensional space satisfies the following scaling property instead,
{\displaystyle \delta (\alpha {\boldsymbol {x}})=|\alpha |^{-n}\delta ({\boldsymbol {x}})~,}
so that δ is a homogeneous distribution of degree −n.
Under any reflection or rotation ρ, the delta function is invariant,
{\displaystyle \delta (\rho {\boldsymbol {x}})=\delta ({\boldsymbol {x}})~.}
As in the one-variable case, it is possible to define the composition of δ with a bi-Lipschitz function g: Rn → Rn uniquely so that the following holds
{\displaystyle \int _{\mathbb {R} ^{n}}\delta (g({\boldsymbol {x}}))\,f(g({\boldsymbol {x}}))\left|\det g'({\boldsymbol {x}})\right|d{\boldsymbol {x}}=\int _{g(\mathbb {R} ^{n})}\delta ({\boldsymbol {u}})f({\boldsymbol {u}})\,d{\boldsymbol {u}}}
for all compactly supported functions f.
Using the coarea formula from geometric measure theory, one can also define the composition of the delta function with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In the special case of a continuously differentiable function g : Rn → R such that the gradient of g is nowhere zero, the following identity holds
{\displaystyle \int _{\mathbb {R} ^{n}}f({\boldsymbol {x}})\,\delta (g({\boldsymbol {x}}))\,d{\boldsymbol {x}}=\int _{g^{-1}(0)}{\frac {f({\boldsymbol {x}})}{|{\boldsymbol {\nabla }}g|}}\,d\sigma ({\boldsymbol {x}})}
where the integral on the right is over g−1(0), the (n − 1)-dimensional surface defined by g(x) = 0 with respect to the Minkowski content measure. This is known as a simple layer integral.
More generally, if S is a smooth hypersurface of Rn, then we can associate to S the distribution that integrates any compactly supported smooth function g over S:
{\displaystyle \delta _{S}[g]=\int _{S}g({\boldsymbol {s}})\,d\sigma ({\boldsymbol {s}})}
where σ is the hypersurface measure associated to S. This generalization is associated with the potential theory of simple layer potentials on S. If D is a domain in Rn with smooth boundary S, then δS is equal to the normal derivative of the indicator function of D in the distribution sense,
{\displaystyle -\int _{\mathbb {R} ^{n}}g({\boldsymbol {x}})\,{\frac {\partial 1_{D}({\boldsymbol {x}})}{\partial n}}\,d{\boldsymbol {x}}=\int _{S}\,g({\boldsymbol {s}})\,d\sigma ({\boldsymbol {s}}),}
where n is the outward normal. For a proof, see e.g. the article on the surface delta function.
In three dimensions, the delta function is represented in spherical coordinates by:
{\displaystyle \delta ({\boldsymbol {r}}-{\boldsymbol {r}}_{0})={\begin{cases}\displaystyle {\frac {1}{r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})\delta (\phi -\phi _{0})&x_{0},y_{0},z_{0}\neq 0\\\displaystyle {\frac {1}{2\pi r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})&x_{0}=y_{0}=0,\ z_{0}\neq 0\\\displaystyle {\frac {1}{4\pi r^{2}}}\delta (r-r_{0})&x_{0}=y_{0}=z_{0}=0\end{cases}}}
== Derivatives ==
The derivative of the Dirac delta distribution, denoted δ′ and also called the Dirac delta prime or Dirac delta derivative as described in Laplacian of the indicator, is defined on compactly supported smooth test functions φ by
{\displaystyle \delta '[\varphi ]=-\delta [\varphi ']=-\varphi '(0).}
The first equality here is a kind of integration by parts, for if δ were a true function then
{\displaystyle \int _{-\infty }^{\infty }\delta '(x)\varphi (x)\,dx=\delta (x)\varphi (x)|_{-\infty }^{\infty }-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\varphi '(0).}
By mathematical induction, the k-th derivative of δ is defined similarly as the distribution given on test functions by
{\displaystyle \delta ^{(k)}[\varphi ]=(-1)^{k}\varphi ^{(k)}(0).}
In particular, δ is an infinitely differentiable distribution.
The first derivative of the delta function is the distributional limit of the difference quotients:
{\displaystyle \delta '(x)=\lim _{h\to 0}{\frac {\delta (x+h)-\delta (x)}{h}}.}
More properly, one has
{\displaystyle \delta '=\lim _{h\to 0}{\frac {1}{h}}(\tau _{h}\delta -\delta )}
where τh is the translation operator, defined on functions by τhφ(x) = φ(x + h), and on a distribution S by
{\displaystyle (\tau _{h}S)[\varphi ]=S[\tau _{-h}\varphi ].}
In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole situated at the origin. Accordingly, it is referred to as a dipole or the doublet function.
The derivative of the delta function satisfies a number of basic properties, including:
{\displaystyle {\begin{aligned}\delta '(-x)&=-\delta '(x)\\x\delta '(x)&=-\delta (x)\end{aligned}}}
which can be shown by applying a test function and integrating by parts.
The latter of these properties can also be demonstrated by applying the definition of the distributional derivative, Leibniz's theorem, and the linearity of the inner product:
{\displaystyle {\begin{aligned}\langle x\delta ',\varphi \rangle \,&=\,\langle \delta ',x\varphi \rangle \,=\,-\langle \delta ,(x\varphi )'\rangle \,=\,-\langle \delta ,x'\varphi +x\varphi '\rangle \,=\,-\langle \delta ,x'\varphi \rangle -\langle \delta ,x\varphi '\rangle \,=\,-x'(0)\varphi (0)-x(0)\varphi '(0)\\&=\,-x'(0)\langle \delta ,\varphi \rangle -x(0)\langle \delta ,\varphi '\rangle \,=\,-x'(0)\langle \delta ,\varphi \rangle +x(0)\langle \delta ',\varphi \rangle \,=\,\langle x(0)\delta '-x'(0)\delta ,\varphi \rangle \\\Longrightarrow x(t)\delta '(t)&=x(0)\delta '(t)-x'(0)\delta (t)=-x'(0)\delta (t)=-\delta (t)\end{aligned}}}
Furthermore, the convolution of δ′ with a compactly-supported, smooth function f is
{\displaystyle \delta '*f=\delta *f'=f',}
which follows from the properties of the distributional derivative of a convolution.
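The action δ′[φ] = −φ′(0) can be approximated by pairing a test function with the derivative of a nascent delta. A small numerical check assuming NumPy (the test function is an arbitrary choice with φ′(0) = 3):

```python
import numpy as np

eps = 1e-2
def eta_prime(x):
    """Derivative of a normalized Gaussian of width eps: a nascent delta'."""
    return -x / eps**2 * np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

phi = lambda x: np.exp(-x**2) * np.sin(3 * x)   # test function; phi'(0) = 3

x = np.linspace(-5, 5, 2_000_001)
pairing = np.trapz(eta_prime(x) * phi(x), x)    # approximates delta'[phi]
print(pairing, "≈", -3.0)                       # delta'[phi] = -phi'(0)
```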
=== Higher dimensions ===
More generally, on an open set U in the n-dimensional Euclidean space {\displaystyle \mathbb {R} ^{n}}, the Dirac delta distribution centered at a point a ∈ U is defined by
{\displaystyle \delta _{a}[\varphi ]=\varphi (a)}
for all {\displaystyle \varphi \in C_{c}^{\infty }(U)}, the space of all smooth functions with compact support on U. If {\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})} is any multi-index with {\displaystyle |\alpha |=\alpha _{1}+\cdots +\alpha _{n}} and {\displaystyle \partial ^{\alpha }} denotes the associated mixed partial derivative operator, then the α-th derivative ∂αδa of δa is given by
{\displaystyle \left\langle \partial ^{\alpha }\delta _{a},\,\varphi \right\rangle =(-1)^{|\alpha |}\left\langle \delta _{a},\partial ^{\alpha }\varphi \right\rangle =(-1)^{|\alpha |}\partial ^{\alpha }\varphi (x){\Big |}_{x=a}\quad {\text{ for all }}\varphi \in C_{c}^{\infty }(U).}
That is, the α-th derivative of δa is the distribution whose value on any test function φ is the α-th derivative of φ at a (with the appropriate positive or negative sign).
The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as multipoles.
Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions with point support. If S is any distribution on U supported on the set {a} consisting of a single point, then there is an integer m and coefficients cα such that
{\displaystyle S=\sum _{|\alpha |\leq m}c_{\alpha }\partial ^{\alpha }\delta _{a}.}
== Representations ==
=== Nascent delta function ===
The delta function can be viewed as the limit of a sequence of functions
{\displaystyle \delta (x)=\lim _{\varepsilon \to 0^{+}}\eta _{\varepsilon }(x),}
where ηε(x) is sometimes called a nascent delta function. This limit is meant in a weak sense: either that
{\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{-\infty }^{\infty }\eta _{\varepsilon }(x)f(x)\,dx=f(0)}
for all continuous functions f having compact support, or that this limit holds for all smooth functions f with compact support. The difference between these two slightly different modes of weak convergence is often subtle: the former is convergence in the vague topology of measures, and the latter is convergence in the sense of distributions.
==== Approximations to the identity ====
Typically a nascent delta function ηε can be constructed in the following manner. Let η be an absolutely integrable function on R of total integral 1, and define
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\eta \left({\frac {x}{\varepsilon }}\right).}
In n dimensions, one uses instead the scaling
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-n}\eta \left({\frac {x}{\varepsilon }}\right).}
Then a simple change of variables shows that ηε also has integral 1. One may show that (5) holds for all continuous compactly supported functions f, and so ηε converges weakly to δ in the sense of measures.
The ηε constructed in this way are known as an approximation to the identity. This terminology is because the space L1(R) of absolutely integrable functions is closed under the operation of convolution of functions: f ∗ g ∈ L1(R) whenever f and g are in L1(R). However, there is no identity in L1(R) for the convolution product: no element h such that f ∗ h = f for all f. Nevertheless, the sequence ηε does approximate such an identity in the sense that
{\displaystyle f*\eta _{\varepsilon }\to f\quad {\text{as }}\varepsilon \to 0.}
This limit holds in the sense of mean convergence (convergence in L1). Further conditions on the ηε, for instance that it be a mollifier associated to a compactly supported function, are needed to ensure pointwise convergence almost everywhere.
If the initial η = η1 is itself smooth and compactly supported then the sequence is called a mollifier. The standard mollifier is obtained by choosing η to be a suitably normalized bump function, for instance
{\displaystyle \eta (x)={\begin{cases}{\frac {1}{I_{n}}}\exp {\Big (}-{\frac {1}{1-|x|^{2}}}{\Big )}&{\text{if }}|x|<1\\0&{\text{if }}|x|\geq 1.\end{cases}}}
({\displaystyle I_{n}} ensuring that the total integral is 1).
In some situations such as numerical analysis, a piecewise linear approximation to the identity is desirable. This can be obtained by taking η1 to be a hat function. With this choice of η1, one has
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\max \left(1-\left|{\frac {x}{\varepsilon }}\right|,0\right)}
which are all continuous and compactly supported, although not smooth and so not a mollifier.
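The hat-function approximation is easy to experiment with. The sketch below (assuming NumPy; the discontinuous signal and the widths are arbitrary choices) convolves sign(x) with η_ε and shows the output approaching the signal away from the jump:

```python
import numpy as np

def hat(x, eps):
    """Piecewise-linear approximation to the identity (the 'hat' function)."""
    return np.maximum(1.0 - np.abs(x / eps), 0.0) / eps

f = lambda x: np.sign(x)              # a discontinuous signal to smooth

dx = 1e-4
x = np.arange(-1, 1, dx)
for eps in (0.5, 0.1, 0.02):
    kernel = hat(x, eps)
    smooth = np.convolve(f(x), kernel, mode="same") * dx   # f * eta_eps
    # Away from the jump, the smoothed signal matches f; near 0 it is mollified.
    i = np.searchsorted(x, 0.3)
    print(f"eps={eps}: (f*eta)(0.3) = {smooth[i]:.4f}")    # -> 1 as eps -> 0
```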
==== Probabilistic considerations ====
In the context of probability theory, it is natural to impose the additional condition that the initial η1 in an approximation to the identity should be positive, as such a function then represents a probability distribution. Convolution with a probability distribution is sometimes favorable because it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function. Taking η1 to be any probability distribution at all, and letting ηε(x) = η1(x/ε)/ε as above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition, η has mean 0 and has small higher moments. For instance, if η1 is the uniform distribution on
{\textstyle \left[-{\frac {1}{2}},{\frac {1}{2}}\right]}, also known as the rectangular function, then:
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x}{\varepsilon }}\right)={\begin{cases}{\frac {1}{\varepsilon }},&-{\frac {\varepsilon }{2}}<x<{\frac {\varepsilon }{2}},\\0,&{\text{otherwise}}.\end{cases}}}
Another example is with the Wigner semicircle distribution
{\displaystyle \eta _{\varepsilon }(x)={\begin{cases}{\frac {2}{\pi \varepsilon ^{2}}}{\sqrt {\varepsilon ^{2}-x^{2}}},&-\varepsilon <x<\varepsilon ,\\0,&{\text{otherwise}}.\end{cases}}}
This is continuous and compactly supported, but not a mollifier because it is not smooth.
==== Semigroups ====
Nascent delta functions often arise as convolution semigroups. This amounts to the further constraint that the convolution of ηε with ηδ must satisfy
{\displaystyle \eta _{\varepsilon }*\eta _{\delta }=\eta _{\varepsilon +\delta }}
for all ε, δ > 0. Convolution semigroups in L1 that form a nascent delta function are always an approximation to the identity in the above sense, however the semigroup condition is quite a strong restriction.
In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics, semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on functions of x, then a convolution semigroup arises by solving the initial value problem
{\displaystyle {\begin{cases}{\dfrac {\partial }{\partial t}}\eta (t,x)=A\eta (t,x),\quad t>0\\[5pt]\displaystyle \lim _{t\to 0^{+}}\eta (t,x)=\delta (x)\end{cases}}}
in which the limit is as usual understood in the weak sense. Setting ηε(x) = η(ε, x) gives the associated nascent delta function.
Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.
===== The heat kernel =====
The heat kernel, defined by
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\sqrt {2\pi \varepsilon }}}\mathrm {e} ^{-{\frac {x^{2}}{2\varepsilon }}}}
represents the temperature in an infinite wire at time t > 0, if a unit of heat energy is stored at the origin of the wire at time t = 0. This semigroup evolves according to the one-dimensional heat equation:
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {1}{2}}{\frac {\partial ^{2}u}{\partial x^{2}}}.}
In probability theory, ηε(x) is a normal distribution of variance ε and mean 0. It represents the probability density at time t = ε of the position of a particle starting at the origin following a standard Brownian motion. In this context, the semigroup condition is then an expression of the Markov property of Brownian motion.
In higher-dimensional Euclidean space Rn, the heat kernel is
{\displaystyle \eta _{\varepsilon }={\frac {1}{(2\pi \varepsilon )^{n/2}}}\mathrm {e} ^{-{\frac {x\cdot x}{2\varepsilon }}},}
and has the same physical interpretation, mutatis mutandis. It also represents a nascent delta function in the sense that ηε → δ in the distribution sense as ε → 0.
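The semigroup property of the heat kernel reduces to the fact that convolving Gaussians adds their variances, which can be confirmed directly. A check assuming NumPy (the two times a and b are arbitrary choices):

```python
import numpy as np

def heat_kernel(x, eps):
    """Fundamental solution of the heat equation at 'time' eps."""
    return np.exp(-x**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

x = np.linspace(-20, 20, 40001)   # odd point count keeps 'same' convolution centered
dx = x[1] - x[0]
a, b = 0.3, 0.5
conv = np.convolve(heat_kernel(x, a), heat_kernel(x, b), mode="same") * dx
print(np.max(np.abs(conv - heat_kernel(x, a + b))))   # small: eta_a * eta_b = eta_{a+b}
```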
===== The Poisson kernel =====
The Poisson kernel
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi }}\mathrm {Im} \left\{{\frac {1}{x-\mathrm {i} \varepsilon }}\right\}={\frac {1}{\pi }}{\frac {\varepsilon }{\varepsilon ^{2}+x^{2}}}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\mathrm {e} ^{\mathrm {i} \xi x-|\varepsilon \xi |}\,d\xi }
is the fundamental solution of the Laplace equation in the upper half-plane. It represents the electrostatic potential in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is also closely related to the Cauchy distribution and Epanechnikov and Gaussian kernel functions. This semigroup evolves according to the equation
{\displaystyle {\frac {\partial u}{\partial t}}=-\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}u(t,x)}
where the operator is rigorously defined as the Fourier multiplier
{\displaystyle {\mathcal {F}}\left[\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}f\right](\xi )=|2\pi \xi |{\mathcal {F}}f(\xi ).}
==== Oscillatory integrals ====
In areas of physics such as wave propagation and wave mechanics, the equations involved are hyperbolic and so may have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the associated Cauchy problems are generally oscillatory integrals. An example, which comes from a solution of the Euler–Tricomi equation of transonic gas dynamics, is the rescaled Airy function
{\displaystyle \varepsilon ^{-1/3}\operatorname {Ai} \left(x\varepsilon ^{-1/3}\right).}
Although, using the Fourier transform, it is easy to see that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions constructed as oscillatory integrals only converge in the sense of distributions (an example is the Dirichlet kernel below), rather than in the sense of measures.
Another example is the Cauchy problem for the wave equation in R1+1:
{\displaystyle {\begin{aligned}c^{-2}{\frac {\partial ^{2}u}{\partial t^{2}}}-\Delta u&=0\\u=0,\quad {\frac {\partial u}{\partial t}}=\delta &\qquad {\text{for }}t=0.\end{aligned}}}
The solution u represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin.
Other approximations to the identity of this kind include the sinc function (used widely in electronics and telecommunications)
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi x}}\sin \left({\frac {x}{\varepsilon }}\right)={\frac {1}{2\pi }}\int _{-{\frac {1}{\varepsilon }}}^{\frac {1}{\varepsilon }}\cos(kx)\,dk}
and the Bessel function
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}J_{\frac {1}{\varepsilon }}\left({\frac {x+1}{\varepsilon }}\right).}
=== Plane wave decomposition ===
One approach to the study of a linear partial differential equation
{\displaystyle L[u]=f,}
where L is a differential operator on Rn, is to seek first a fundamental solution, which is a solution of the equation
{\displaystyle L[u]=\delta .}
When L is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier first to consider an equation of the form
{\displaystyle L[u]=h}
where h is a plane wave function, meaning that it has the form
{\displaystyle h=h(x\cdot \xi )}
for some vector ξ. Such an equation can be resolved (if the coefficients of L are analytic functions) by the Cauchy–Kovalevskaya theorem or (if the coefficients of L are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations.
Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially by Johann Radon, and then developed in this form by Fritz John (1955). Choose k so that n + k is an even integer, and for a real number s, put
{\displaystyle g(s)=\operatorname {Re} \left[{\frac {-s^{k}\log(-is)}{k!(2\pi i)^{n}}}\right]={\begin{cases}{\frac {|s|^{k}}{4k!(2\pi i)^{n-1}}}&n{\text{ odd}}\\[5pt]-{\frac {|s|^{k}\log |s|}{k!(2\pi i)^{n}}}&n{\text{ even.}}\end{cases}}}
Then δ is obtained by applying a power of the Laplacian to the integral with respect to the unit sphere measure dω of g(x · ξ) for ξ in the unit sphere Sn−1:
{\displaystyle \delta (x)=\Delta _{x}^{(n+k)/2}\int _{S^{n-1}}g(x\cdot \xi )\,d\omega _{\xi }.}
The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test function φ,
{\displaystyle \varphi (x)=\int _{\mathbf {R} ^{n}}\varphi (y)\,dy\,\Delta _{x}^{\frac {n+k}{2}}\int _{S^{n-1}}g((x-y)\cdot \xi )\,d\omega _{\xi }.}
The result follows from the formula for the Newtonian potential (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for the Radon transform because it recovers the value of φ(x) from its integrals over hyperplanes. For instance, if n is odd and k = 1, then the integral on the right hand side is
{\displaystyle {\begin{aligned}&c_{n}\Delta _{x}^{\frac {n+1}{2}}\iint _{S^{n-1}}\varphi (y)|(y-x)\cdot \xi |\,d\omega _{\xi }\,dy\\[5pt]&\qquad =c_{n}\Delta _{x}^{(n+1)/2}\int _{S^{n-1}}\,d\omega _{\xi }\int _{-\infty }^{\infty }|p|R\varphi (\xi ,p+x\cdot \xi )\,dp\end{aligned}}}
where Rφ(ξ, p) is the Radon transform of φ:
{\displaystyle R\varphi (\xi ,p)=\int _{x\cdot \xi =p}f(x)\,d^{n-1}x.}
An alternative equivalent expression of the plane wave decomposition is:
{\displaystyle \delta (x)={\begin{cases}{\frac {(n-1)!}{(2\pi i)^{n}}}\displaystyle \int _{S^{n-1}}(x\cdot \xi )^{-n}\,d\omega _{\xi }&n{\text{ even}}\\{\frac {1}{2(2\pi i)^{n-1}}}\displaystyle \int _{S^{n-1}}\delta ^{(n-1)}(x\cdot \xi )\,d\omega _{\xi }&n{\text{ odd}}.\end{cases}}}
=== Fourier transform ===
The delta function is a tempered distribution, and therefore it has a well-defined Fourier transform. Formally, one finds
{\displaystyle {\widehat {\delta }}(\xi )=\int _{-\infty }^{\infty }e^{-2\pi ix\xi }\,\delta (x)dx=1.}
Properly speaking, the Fourier transform of a distribution is defined by imposing self-adjointness of the Fourier transform under the duality pairing
{\displaystyle \langle \cdot ,\cdot \rangle }
of tempered distributions with Schwartz functions. Thus {\displaystyle {\widehat {\delta }}} is defined as the unique tempered distribution satisfying
{\displaystyle \langle {\widehat {\delta }},\varphi \rangle =\langle \delta ,{\widehat {\varphi }}\rangle }
for all Schwartz functions φ. And indeed it follows from this that
{\displaystyle {\widehat {\delta }}=1.}
As a result of this identity, the convolution of the delta function with any other tempered distribution S is simply S:
{\displaystyle S*\delta =S.}
That is to say that δ is an identity element for the convolution on tempered distributions, and in fact, the space of compactly supported distributions under convolution is an associative algebra with identity the delta function. This property is fundamental in signal processing, as convolution with a tempered distribution is a linear time-invariant system, and applying the linear time-invariant system measures its impulse response. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation for δ, and once it is known, it characterizes the system completely. See LTI system theory § Impulse response and convolution.
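This identity is exactly what makes impulse-response measurements work. The sketch below feeds a unit-area discrete impulse into a hypothetical LTI system (a first-order low-pass filter, chosen purely for illustration, assuming NumPy) and then reproduces the system's response to an arbitrary input by convolving with the measured response:

```python
import numpy as np

def lti_system(u, dt=0.01, tau=0.1):
    """A hypothetical LTI system for illustration: a discretized
    first-order low-pass filter, tau * y' = u - y."""
    y, out = 0.0, []
    for v in u:
        y += dt * (v - y) / tau
        out.append(y)
    return np.array(out)

dt, n = 0.01, 500
impulse = np.zeros(n)
impulse[0] = 1.0 / dt                    # unit-area discrete stand-in for delta(t)
h = lti_system(impulse, dt)              # measured impulse response

u = np.sin(2 * np.pi * np.arange(n) * dt)   # an arbitrary input signal
direct = lti_system(u, dt)
via_h = np.convolve(u, h)[:n] * dt          # response reconstructed as u * h
print(np.max(np.abs(direct - via_h)))       # ~0: h characterizes the system
```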
The inverse Fourier transform of the tempered distribution f(ξ) = 1 is the delta function. Formally, this is expressed as
{\displaystyle \int _{-\infty }^{\infty }1\cdot e^{2\pi ix\xi }\,d\xi =\delta (x)}
and more rigorously, it follows since
{\displaystyle \langle 1,{\widehat {f}}\rangle =f(0)=\langle \delta ,f\rangle }
for all Schwartz functions f.
In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel on R. Formally, one has
{\displaystyle \int _{-\infty }^{\infty }e^{i2\pi \xi _{1}t}\left[e^{i2\pi \xi _{2}t}\right]^{*}\,dt=\int _{-\infty }^{\infty }e^{-i2\pi (\xi _{2}-\xi _{1})t}\,dt=\delta (\xi _{2}-\xi _{1}).}
This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution
{\displaystyle f(t)=e^{i2\pi \xi _{1}t}}
is
{\displaystyle {\widehat {f}}(\xi _{2})=\delta (\xi _{1}-\xi _{2})}
which again follows by imposing self-adjointness of the Fourier transform.
By analytic continuation of the Fourier transform, the Laplace transform of the delta function is found to be
{\displaystyle \int _{0}^{\infty }\delta (t-a)\,e^{-st}\,dt=e^{-sa}.}
==== Fourier kernels ====
In the study of Fourier series, a major question consists of determining whether and in what sense the Fourier series associated with a periodic function converges to the function. The n-th partial sum of the Fourier series of a function f of period 2π is defined by convolution (on the interval [−π,π]) with the Dirichlet kernel:
{\displaystyle D_{N}(x)=\sum _{n=-N}^{N}e^{inx}={\frac {\sin \left(\left(N+{\frac {1}{2}}\right)x\right)}{\sin(x/2)}}.}
Thus,
{\displaystyle s_{N}(f)(x)=D_{N}*f(x)=\sum _{n=-N}^{N}a_{n}e^{inx}}
where
{\displaystyle a_{n}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }f(y)e^{-iny}\,dy.}
A fundamental result of elementary Fourier series states that the Dirichlet kernel restricted to the interval [−π,π] tends to a multiple of the delta function as N → ∞. This is interpreted in the distribution sense, that
{\displaystyle s_{N}(f)(0)=\int _{-\pi }^{\pi }D_{N}(x)f(x)\,dx\to 2\pi f(0)}
for every compactly supported smooth function f. Thus, formally one has
{\displaystyle \delta (x)={\frac {1}{2\pi }}\sum _{n=-\infty }^{\infty }e^{inx}}
on the interval [−π,π].
Despite this, the result does not hold for all compactly supported continuous functions: that is, DN does not converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction of a variety of summability methods to produce convergence. The method of Cesàro summation leads to the Fejér kernel
{\displaystyle F_{N}(x)={\frac {1}{N}}\sum _{n=0}^{N-1}D_{n}(x)={\frac {1}{N}}\left({\frac {\sin {\frac {Nx}{2}}}{\sin {\frac {x}{2}}}}\right)^{2}.}
The Fejér kernels tend to the delta function in the stronger sense that
{\displaystyle \int _{-\pi }^{\pi }F_{N}(x)f(x)\,dx\to 2\pi f(0)}
for every compactly supported continuous function f. The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point.
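The behavior of the two kernels can be observed numerically. A sketch assuming NumPy, pairing D_N and F_N with a smooth test function (an arbitrary choice), where both normalized partial sums tend to f(0):

```python
import numpy as np

def dirichlet(x, N):
    return np.sin((N + 0.5) * x) / np.sin(x / 2)

def fejer(x, N):
    return (np.sin(N * x / 2) / np.sin(x / 2))**2 / N

f = lambda x: np.exp(np.cos(x))           # smooth 2π-periodic test function, f(0) = e
x = np.linspace(-np.pi, np.pi, 200_000)   # even count: the grid avoids x = 0 exactly
for N in (10, 100, 1000):
    sN = np.trapz(dirichlet(x, N) * f(x), x) / (2 * np.pi)
    fN = np.trapz(fejer(x, N) * f(x), x) / (2 * np.pi)
    print(f"N={N}: Dirichlet -> {sN:.5f}, Fejer -> {fN:.5f}")  # both -> f(0) ≈ 2.71828
```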
=== Hilbert space theory ===
The Dirac delta distribution is a densely defined unbounded linear functional on the Hilbert space L2 of square-integrable functions. Indeed, smooth compactly supported functions are dense in L2, and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of L2 and to give a stronger topology on which the delta function defines a bounded linear functional.
==== Sobolev spaces ====
The Sobolev embedding theorem for Sobolev spaces on the real line R implies that any square-integrable function f such that
{\displaystyle \|f\|_{H^{1}}^{2}=\int _{-\infty }^{\infty }|{\widehat {f}}(\xi )|^{2}(1+|\xi |^{2})\,d\xi <\infty }
is automatically continuous, and satisfies in particular
{\displaystyle \delta [f]=|f(0)|<C\|f\|_{H^{1}}.}
Thus δ is a bounded linear functional on the Sobolev space H1. Equivalently δ is an element of the continuous dual space H−1 of H1. More generally, in n dimensions, one has δ ∈ H−s(Rn) provided s > n/2.
==== Spaces of holomorphic functions ====
In complex analysis, the delta function enters via Cauchy's integral formula, which asserts that if D is a domain in the complex plane with smooth boundary, then
{\displaystyle f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}},\quad z\in D}
for all holomorphic functions f in D that are continuous on the closure of D. As a result, the delta function δz is represented in this class of holomorphic functions by the Cauchy integral:
{\displaystyle \delta _{z}[f]=f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}}.}
Moreover, let H2(∂D) be the Hardy space consisting of the closure in L2(∂D) of all holomorphic functions in D continuous up to the boundary of D. Then functions in H2(∂D) uniquely extend to holomorphic functions in D, and the Cauchy integral formula continues to hold. In particular for z ∈ D, the delta function δz is a continuous linear functional on H2(∂D). This is a special case of the situation in several complex variables in which, for smooth domains D, the Szegő kernel plays the role of the Cauchy integral.
Another representation of the delta function in a space of holomorphic functions is on the space {\displaystyle H(D)\cap L^{2}(D)} of square-integrable holomorphic functions in an open set {\displaystyle D\subset \mathbb {C} ^{n}}. This is a closed subspace of {\displaystyle L^{2}(D)}, and therefore is a Hilbert space. On the other hand, the functional that evaluates a holomorphic function in {\displaystyle H(D)\cap L^{2}(D)} at a point {\displaystyle z} of {\displaystyle D} is a continuous functional, and so by the Riesz representation theorem, is represented by integration against a kernel {\displaystyle K_{z}(\zeta )}, the Bergman kernel. This kernel is the analog of the delta function in this Hilbert space. A Hilbert space having such a kernel is called a reproducing kernel Hilbert space. In the special case of the unit disc, one has
{\displaystyle \delta _{w}[f]=f(w)={\frac {1}{\pi }}\iint _{|z|<1}{\frac {f(z)\,dx\,dy}{(1-{\bar {z}}w)^{2}}}.}
==== Resolutions of the identity ====
Given a complete orthonormal basis set of functions {φn} in a separable Hilbert space, for example, the normalized eigenvectors of a compact self-adjoint operator, any vector f can be expressed as
{\displaystyle f=\sum _{n=1}^{\infty }\alpha _{n}\varphi _{n}.}
The coefficients {αn} are found as
{\displaystyle \alpha _{n}=\langle \varphi _{n},f\rangle ,}
which may be represented by the notation:
{\displaystyle \alpha _{n}=\varphi _{n}^{\dagger }f,}
a form of the bra–ket notation of Dirac. Adopting this notation, the expansion of f takes the dyadic form:
{\displaystyle f=\sum _{n=1}^{\infty }\varphi _{n}\left(\varphi _{n}^{\dagger }f\right).}
Letting I denote the identity operator on the Hilbert space, the expression
{\displaystyle I=\sum _{n=1}^{\infty }\varphi _{n}\varphi _{n}^{\dagger },}
is called a resolution of the identity. When the Hilbert space is the space L2(D) of square-integrable functions on a domain D, the quantity:
{\displaystyle \varphi _{n}\varphi _{n}^{\dagger },}
is an integral operator, and the expression for f can be rewritten
{\displaystyle f(x)=\sum _{n=1}^{\infty }\int _{D}\,\left(\varphi _{n}(x)\varphi _{n}^{*}(\xi )\right)f(\xi )\,d\xi .}
The right-hand side converges to f in the L2 sense. It need not hold in a pointwise sense, even when f is a continuous function. Nevertheless, it is common to abuse notation and write
{\displaystyle f(x)=\int \,\delta (x-\xi )f(\xi )\,d\xi ,}
resulting in the representation of the delta function:
{\displaystyle \delta (x-\xi )=\sum _{n=1}^{\infty }\varphi _{n}(x)\varphi _{n}^{*}(\xi ).}
With a suitable rigged Hilbert space (Φ, L2(D), Φ*) where Φ ⊂ L2(D) contains all compactly supported smooth functions, this summation may converge in Φ*, depending on the properties of the basis φn. In most cases of practical interest, the orthonormal basis comes from an integral or differential operator (e.g. the heat kernel), in which case the series converges in the distribution sense.
=== Infinitesimal delta functions ===
Cauchy used an infinitesimal α to write down a unit impulse, infinitely tall and narrow Dirac-type delta function δα satisfying
{\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)}
in a number of articles in 1827. Cauchy defined an infinitesimal in Cours d'Analyse (1821) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.
Non-standard analysis allows one to rigorously treat infinitesimals. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals. Here the Dirac delta can be given by an actual function, having the property that for every real function F one has
{\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)}
as anticipated by Fourier and Cauchy.
== Dirac comb ==
A so-called uniform "pulse train" of Dirac delta measures, which is known as a Dirac comb, or as the Sha distribution, creates a sampling function, often used in digital signal processing (DSP) and discrete time signal analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the distribution sense,
{\displaystyle \operatorname {\text{Ш}} (x)=\sum _{n=-\infty }^{\infty }\delta (x-n),}
which is a sequence of point masses at each of the integers.
Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant because if f is any Schwartz function, then the periodization of f is given by the convolution
{\displaystyle (f*\operatorname {\text{Ш}} )(x)=\sum _{n=-\infty }^{\infty }f(x-n).}
In particular,
{\displaystyle (f*\operatorname {\text{Ш}} )^{\wedge }={\widehat {f}}{\widehat {\operatorname {\text{Ш}} }}={\widehat {f}}\operatorname {\text{Ш}} }
is precisely the Poisson summation formula.
More generally, this formula remains true if f is a tempered distribution of rapid descent or, equivalently, if {\displaystyle {\widehat {f}}} is a slowly growing, ordinary function within the space of tempered distributions.
== Sokhotski–Plemelj theorem ==
The Sokhotski–Plemelj theorem, important in quantum mechanics, relates the delta function to the distribution p.v. 1/x, the Cauchy principal value of the function 1/x, defined by
{\displaystyle \left\langle \operatorname {p.v.} {\frac {1}{x}},\varphi \right\rangle =\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {\varphi (x)}{x}}\,dx.}
Sokhotsky's formula states that
{\displaystyle \lim _{\varepsilon \to 0^{+}}{\frac {1}{x\pm i\varepsilon }}=\operatorname {p.v.} {\frac {1}{x}}\mp i\pi \delta (x),}
Here the limit is understood in the distribution sense: for all compactly supported smooth functions f,
{\displaystyle \int _{-\infty }^{\infty }\lim _{\varepsilon \to 0^{+}}{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi f(0)+\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {f(x)}{x}}\,dx.}
== Relationship to the Kronecker delta ==
The Kronecker delta δij is the quantity defined by
{\displaystyle \delta _{ij}={\begin{cases}1&i=j\\0&i\not =j\end{cases}}}
for all integers i, j. This function then satisfies the following analog of the sifting property: if ai (for i in the set of all integers) is any doubly infinite sequence, then
{\displaystyle \sum _{i=-\infty }^{\infty }a_{i}\delta _{ik}=a_{k}.}
Similarly, for any real or complex valued continuous function f on R, the Dirac delta satisfies the sifting property
{\displaystyle \int _{-\infty }^{\infty }f(x)\delta (x-x_{0})\,dx=f(x_{0}).}
This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.
== Applications ==
=== Probability theory ===
In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent absolutely continuous distributions). For example, the probability density function f(x) of a discrete distribution consisting of points x = {x1, ..., xn}, with corresponding probabilities p1, ..., pn, can be written as
{\displaystyle f(x)=\sum _{i=1}^{n}p_{i}\delta (x-x_{i}).}
As another example, consider a distribution that 6/10 of the time returns a standard normal value and 4/10 of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discrete mixture distribution). The density function of this distribution can be written as
{\displaystyle f(x)=0.6\,{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {x^{2}}{2}}}+0.4\,\delta (x-3.5).}
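The term 0.4 δ(x − 3.5) corresponds to an atom of probability 0.4 at x = 3.5, which is easy to see by sampling. A small Monte Carlo sketch (the seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
is_atom = rng.random(n) < 0.4                 # 4/10 of the time: the atom
samples = np.where(is_atom, 3.5, rng.standard_normal(n))

print(samples.mean())            # ≈ 0.6*0 + 0.4*3.5 = 1.4
print((samples == 3.5).mean())   # ≈ 0.4, the weight of the delta term
```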
The delta function is also used to represent the resulting probability density function of a random variable that is transformed by a continuously differentiable function. If Y = g(X) with g continuously differentiable, then the density of Y can be written as
{\displaystyle f_{Y}(y)=\int _{-\infty }^{+\infty }f_{X}(x)\delta (y-g(x))\,dx.}
The delta function is also used in a completely different way to represent the local time of a diffusion process (like Brownian motion). The local time of a stochastic process B(t) is given by
{\displaystyle \ell (x,t)=\int _{0}^{t}\delta (x-B(s))\,ds}
and represents the amount of time that the process spends at the point x in the range of the process. More precisely, in one dimension this integral can be written
{\displaystyle \ell (x,t)=\lim _{\varepsilon \to 0^{+}}{\frac {1}{2\varepsilon }}\int _{0}^{t}\mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}(B(s))\,ds}
where {\displaystyle \mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}} is the indicator function of the interval {\displaystyle [x-\varepsilon ,x+\varepsilon ]}.
=== Quantum mechanics ===
The delta function is expedient in quantum mechanics. The wave function of a particle gives the probability amplitude of finding a particle within a given region of space. Wave functions are assumed to be elements of the Hilbert space L2 of square-integrable functions, and the total probability of finding a particle within a given interval is the integral of the magnitude of the wave function squared over the interval. A set {|φn⟩} of wave functions is orthonormal if
{\displaystyle \langle \varphi _{n}\mid \varphi _{m}\rangle =\delta _{nm},}
where δnm is the Kronecker delta. A set of orthonormal wave functions is complete in the space of square-integrable functions if any wave function |ψ⟩ can be expressed as a linear combination of the {|φn⟩} with complex coefficients:
{\displaystyle \psi =\sum c_{n}\varphi _{n},}
where cn = ⟨φn|ψ⟩. Complete orthonormal systems of wave functions appear naturally as the eigenfunctions of the Hamiltonian (of a bound system) in quantum mechanics, whose eigenvalues are the energy levels. The set of eigenvalues, in this case, is known as the spectrum of the Hamiltonian. In bra–ket notation this equality implies the resolution of the identity:
{\displaystyle I=\sum |\varphi _{n}\rangle \langle \varphi _{n}|.}
Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of an observable can also be continuous. An example is the position operator, Qψ(x) = xψ(x). The spectrum of the position (in one dimension) is the entire real line and is called a continuous spectrum. However, unlike the Hamiltonian, the position operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of available functions by allowing distributions as well, i.e., to replace the Hilbert space with a rigged Hilbert space. In this context, the position operator has a complete set of generalized eigenfunctions, labeled by the points y of the real line, given by
{\displaystyle \varphi _{y}(x)=\delta (x-y).}
The generalized eigenfunctions of the position operator are called the eigenkets and are denoted by φy = |y⟩.
Similar considerations apply to any other (unbounded) self-adjoint operator with continuous spectrum and no degenerate eigenvalues, such as the momentum operator P. In that case, there is a set Ω of real numbers (the spectrum) and a collection of distributions φy with y ∈ Ω such that
{\displaystyle P\varphi _{y}=y\varphi _{y}.}
That is, φy are the generalized eigenvectors of P. If they form an "orthonormal basis" in the distribution sense, that is:
{\displaystyle \langle \varphi _{y},\varphi _{y'}\rangle =\delta (y-y'),}
then for any test function ψ,
{\displaystyle \psi (x)=\int _{\Omega }c(y)\varphi _{y}(x)\,dy}
where c(y) = ⟨ψ, φy⟩. That is, there is a resolution of the identity
{\displaystyle I=\int _{\Omega }|\varphi _{y}\rangle \,\langle \varphi _{y}|\,dy}
where the operator-valued integral is again understood in the weak sense. If the spectrum of P has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous spectrum.
The delta function also has many more specialized applications in quantum mechanics, such as the delta potential models for a single and double potential well.
=== Structural mechanics ===
The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. The governing equation of a simple mass–spring system excited by a sudden force impulse I at time t = 0 can be written
{\displaystyle m{\frac {d^{2}\xi }{dt^{2}}}+k\xi =I\delta (t),}
where m is the mass, ξ is the deflection, and k is the spring constant.
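Since integrating the impulse over a vanishingly short interval only changes the momentum, the forcing I δ(t) is equivalent to starting from rest with initial velocity I/m, so the exact response is ξ(t) = (I/(mω)) sin ωt with ω = √(k/m). A sketch that approximates the delta by a narrow rectangular pulse (the values of m, k, I, the pulse width, and the step size are arbitrary):

```python
import numpy as np

m, k, I = 1.0, 4.0, 2.0
w = np.sqrt(k / m)
tau, dt = 1e-3, 1e-4              # pulse width and time step
xi, v, t = 0.0, 0.0, 0.0
ts, xis = [], []
while t < 5.0:
    F = I / tau if t < tau else 0.0   # narrow pulse ≈ I*delta(t)
    v += dt * (F - k * xi) / m        # semi-implicit Euler step
    xi += dt * v
    t += dt
    ts.append(t); xis.append(xi)

ts, xis = np.array(ts), np.array(xis)
exact = (I / (m * w)) * np.sin(w * ts)
print(np.abs(xis - exact).max())      # small compared to I/(m*w) = 1.0
```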
As another example, the equation governing the static deflection of a slender beam is, according to Euler–Bernoulli theory,
{\displaystyle EI{\frac {d^{4}w}{dx^{4}}}=q(x),}
where EI is the bending stiffness of the beam, w is the deflection, x is the spatial coordinate, and q(x) is the load distribution. If a beam is loaded by a point force F at x = x0, the load distribution is written
{\displaystyle q(x)=F\delta (x-x_{0}).}
As the integration of the delta function results in the Heaviside step function, it follows that the static deflection of a slender beam subject to multiple point loads is described by a set of piecewise polynomials.
Also, a point moment acting on a beam can be described by delta functions. Consider two opposing point forces F at a distance d apart. They then produce a moment M = Fd acting on the beam. Now, let the distance d approach the limit zero, while M is kept constant. The load distribution, assuming a clockwise moment acting at x = 0, is written
{\displaystyle {\begin{aligned}q(x)&=\lim _{d\to 0}{\Big (}F\delta (x)-F\delta (x-d){\Big )}\\[4pt]&=\lim _{d\to 0}\left({\frac {M}{d}}\delta (x)-{\frac {M}{d}}\delta (x-d)\right)\\[4pt]&=M\lim _{d\to 0}{\frac {\delta (x)-\delta (x-d)}{d}}\\[4pt]&=M\delta '(x).\end{aligned}}}
Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again results in piecewise polynomial deflection.
== See also ==
Atom (measure theory)
Degenerate distribution
Laplacian of the indicator
Uncertainty principle
== Notes ==
== References ==
Aratyn, Henrik; Rasinariu, Constantin (2006), A short course in mathematical methods with Maple, World Scientific, ISBN 978-981-256-461-0.
Arfken, G. B.; Weber, H. J. (2000), Mathematical Methods for Physicists (5th ed.), Boston, Massachusetts: Academic Press, ISBN 978-0-12-059825-0.
ATIS (2013), ATIS Telecom Glossary, archived from the original on 2013-03-13
Bracewell, R. N. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw-Hill, Bibcode:1986ftia.book.....B.
Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), McGraw-Hill.
Córdoba, A. (1988), "La formule sommatoire de Poisson", Comptes Rendus de l'Académie des Sciences, Série I, 306: 373–376.
Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume II, Wiley-Interscience.
Davis, Howard Ted; Thomson, Kendall T (2000), Linear algebra and linear operators in engineering with applications in Mathematica, Academic Press, ISBN 978-0-12-206349-7
Dieudonné, Jean (1976), Treatise on analysis. Vol. II, New York: Academic Press [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-215502-4, MR 0530406.
Dieudonné, Jean (1972), Treatise on analysis. Vol. III, Boston, Massachusetts: Academic Press, MR 0350769
Dirac, Paul (1930), The Principles of Quantum Mechanics (1st ed.), Oxford University Press.
Driggers, Ronald G. (2003), Encyclopedia of Optical Engineering, CRC Press, Bibcode:2003eoe..book.....D, ISBN 978-0-8247-0940-2.
Duistermaat, Hans; Kolk (2010), Distributions: Theory and applications, Springer.
Federer, Herbert (1969), Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, vol. 153, New York: Springer-Verlag, pp. xiv+676, ISBN 978-3-540-60656-7, MR 0257325.
Gannon, Terry (2008), "Vertex operator algebras", Princeton Companion to Mathematics, Princeton University Press, ISBN 978-1400830398.
Gelfand, I. M.; Shilov, G. E. (1966–1968), Generalized functions, vol. 1–5, Academic Press, ISBN 9781483262246.
Hartmann, William M. (1997), Signals, sound, and sensation, Springer, ISBN 978-1-56396-283-7.
Hazewinkel, Michiel (1995). Encyclopaedia of Mathematics (set). Springer Science & Business Media. ISBN 978-1-55608-010-4.
Hazewinkel, Michiel (2011). Encyclopaedia of mathematics. Vol. 10. Springer. ISBN 978-90-481-4896-7. OCLC 751862625.
Hewitt, E; Stromberg, K (1963), Real and abstract analysis, Springer-Verlag.
Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 978-3-540-12104-6, MR 0717035.
Isham, C. J. (1995), Lectures on quantum theory: mathematical and structural foundations, Imperial College Press, Bibcode:1995lqtm.book.....I, ISBN 978-81-7764-190-5.
John, Fritz (1955), Plane waves and spherical means applied to partial differential equations, Interscience Publishers, New York-London, MR 0075429. Reprinted, Dover Publications, 2004, ISBN 9780486438047.
Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4757-2698-5, ISBN 978-0-387-94841-6, MR 1476913.
Lange, Rutger-Jan (2012), "Potential theory, path integrals and the Laplacian of the indicator", Journal of High Energy Physics, 2012 (11): 29–30, arXiv:1302.0864, Bibcode:2012JHEP...11..032L, doi:10.1007/JHEP11(2012)032, S2CID 56188533.
Laugwitz, D. (1989), "Definite values of infinite sums: aspects of the foundations of infinitesimal analysis around 1820", Arch. Hist. Exact Sci., 39 (3): 195–245, doi:10.1007/BF00329867, S2CID 120890300.
Levin, Frank S. (2002), "Coordinate-space wave functions and completeness", An introduction to quantum theory, Cambridge University Press, pp. 109ff, ISBN 978-0-521-59841-5
Li, Y. T.; Wong, R. (2008), "Integral and series representations of the Dirac delta function", Commun. Pure Appl. Anal., 7 (2): 229–247, arXiv:1303.1943, doi:10.3934/cpaa.2008.7.229, MR 2373214, S2CID 119319140.
de la Madrid Modino, R. (2001). Quantum mechanics in rigged Hilbert space language (PhD thesis). Universidad de Valladolid.
de la Madrid, R.; Bohm, A.; Gadella, M. (2002), "Rigged Hilbert Space Treatment of Continuous Spectrum", Fortschr. Phys., 50 (2): 185–216, arXiv:quant-ph/0109154, Bibcode:2002ForPh..50..185D, doi:10.1002/1521-3978(200203)50:2<185::AID-PROP185>3.0.CO;2-S, S2CID 9407651.
McMahon, D. (2005-11-22), "An Introduction to State Space" (PDF), Quantum Mechanics Demystified, A Self-Teaching Guide, Demystified Series, New York: McGraw-Hill, p. 108, ISBN 978-0-07-145546-6, retrieved 2008-03-17.
van der Pol, Balth.; Bremmer, H. (1987), Operational calculus (3rd ed.), New York: Chelsea Publishing Co., ISBN 978-0-8284-0327-6, MR 0904873.
Rudin, Walter (1966). Devine, Peter R. (ed.). Real and complex analysis (3rd ed.). New York: McGraw-Hill (published 1987). ISBN 0-07-100276-6.
Rudin, Walter (1991), Functional Analysis (2nd ed.), McGraw-Hill, ISBN 978-0-07-054236-5.
Vallée, Olivier; Soares, Manuel (2004), Airy functions and applications to physics, London: Imperial College Press, ISBN 9781911299486.
Saichev, A I; Woyczyński, Wojbor Andrzej (1997), "Chapter 1: Basic definitions and operations", Distributions in the Physical and Engineering Sciences: Distributional and fractal calculus, integral transforms, and wavelets, Birkhäuser, ISBN 978-0-8176-3924-2
Schwartz, L. (1950), Théorie des distributions, vol. 1, Hermann.
Schwartz, L. (1951), Théorie des distributions, vol. 2, Hermann.
Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 978-0-691-08078-9.
Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 978-0-8493-8273-4.
Vladimirov, V. S. (1971), Equations of mathematical physics, Marcel Dekker, ISBN 978-0-8247-1713-1.
Weisstein, Eric W. "Delta Function". MathWorld.
Yamashita, H. (2006), "Pointwise analysis of scalar fields: A nonstandard approach", Journal of Mathematical Physics, 47 (9): 092301, Bibcode:2006JMP....47i2301Y, doi:10.1063/1.2339017
Yamashita, H. (2007), "Comment on "Pointwise analysis of scalar fields: A nonstandard approach" [J. Math. Phys. 47, 092301 (2006)]", Journal of Mathematical Physics, 48 (8): 084101, Bibcode:2007JMP....48h4101Y, doi:10.1063/1.2771422
== External links ==
Media related to Dirac distribution at Wikimedia Commons
"Delta-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
KhanAcademy.org video lesson
The Dirac Delta function, a tutorial on the Dirac delta function.
Video Lectures – Lecture 23, a lecture by Arthur Mattuck.
The Dirac delta measure is a hyperfunction
We show the existence of a unique solution and analyze a finite element approximation when the source term is a Dirac delta measure
Non-Lebesgue measures on R. Lebesgue-Stieltjes measure, Dirac delta measure. Archived 2008-03-07 at the Wayback Machine
In mathematics and physics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties in 1786. This is often written as
{\displaystyle \nabla ^{2}\!f=0}
or
{\displaystyle \Delta f=0,}
where {\displaystyle \Delta =\nabla \cdot \nabla =\nabla ^{2}} is the Laplace operator, {\displaystyle \nabla \cdot } is the divergence operator (also symbolized "div"), {\displaystyle \nabla } is the gradient operator (also symbolized "grad"), and {\displaystyle f(x,y,z)} is a twice-differentiable real-valued function. The Laplace operator therefore maps a scalar function to another scalar function.
If the right-hand side is specified as a given function, {\displaystyle h(x,y,z)}, we have
{\displaystyle \Delta f=h.}
This is called Poisson's equation, a generalization of Laplace's equation. Laplace's equation and Poisson's equation are the simplest examples of elliptic partial differential equations. Laplace's equation is also a special case of the Helmholtz equation.
The general theory of solutions to Laplace's equation is known as potential theory. The twice continuously differentiable solutions of Laplace's equation are the harmonic functions, which are important in multiple branches of physics, notably electrostatics, gravitation, and fluid dynamics. In the study of heat conduction, the Laplace equation is the steady-state heat equation. In general, Laplace's equation describes situations of equilibrium, or those that do not depend explicitly on time.
== Forms in different coordinate systems ==
In rectangular coordinates,
{\displaystyle \nabla ^{2}f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}=0.}
In cylindrical coordinates,
{\displaystyle \nabla ^{2}f={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \phi ^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}=0.}
In spherical coordinates, using the {\displaystyle (r,\theta ,\varphi )} convention,
{\displaystyle \nabla ^{2}f={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}=0.}
More generally, in arbitrary curvilinear coordinates (ξi),
{\displaystyle \nabla ^{2}f={\frac {\partial }{\partial \xi ^{j}}}\left({\frac {\partial f}{\partial \xi ^{k}}}g^{kj}\right)+{\frac {\partial f}{\partial \xi ^{j}}}g^{jm}\Gamma _{mn}^{n}=0,}
or
{\displaystyle \nabla ^{2}f={\frac {1}{\sqrt {|g|}}}{\frac {\partial }{\partial \xi ^{i}}}\!\left({\sqrt {|g|}}g^{ij}{\frac {\partial f}{\partial \xi ^{j}}}\right)=0,\qquad (g=\det\{g_{ij}\})}
where gij is the Euclidean metric tensor relative to the new coordinates and Γ denotes its Christoffel symbols.
== Boundary conditions ==
The Dirichlet problem for Laplace's equation consists of finding a solution φ on some domain D such that φ on the boundary of D is equal to some given function. Since the Laplace operator appears in the heat equation, one physical interpretation of this problem is as follows: fix the temperature on the boundary of the domain according to the given specification of the boundary condition. Allow heat to flow until a stationary state is reached in which the temperature at each point on the domain does not change anymore. The temperature distribution in the interior will then be given by the solution to the corresponding Dirichlet problem.
The Neumann boundary conditions for Laplace's equation specify not the function φ itself on the boundary of D but its normal derivative. Physically, this corresponds to the construction of a potential for a vector field whose effect is known at the boundary of D alone. For the example of the heat equation it amounts to prescribing the heat flux through the boundary. In particular, at an adiabatic boundary, the normal derivative of φ is zero.
Solutions of Laplace's equation are called harmonic functions; they are all analytic within the domain where the equation is satisfied. If any two functions are solutions to Laplace's equation (or any linear homogeneous differential equation), their sum (or any linear combination) is also a solution. This property, called the principle of superposition, is very useful. For example, solutions to complex problems can be constructed by summing simple solutions.
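A quick symbolic illustration of harmonicity and superposition in two dimensions (the particular functions x² − y² and 2xy are just convenient examples):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)

u = x**2 - y**2      # harmonic
v = 2*x*y            # harmonic
print(lap(u), lap(v))                 # 0, 0
print(sp.simplify(lap(3*u - 5*v)))    # 0: linear combinations stay harmonic
```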
== In two dimensions ==
Laplace's equation in two independent variables in rectangular coordinates has the form
{\displaystyle {\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}\equiv \psi _{xx}+\psi _{yy}=0.}
=== Analytic functions ===
The real and imaginary parts of a complex analytic function both satisfy the Laplace equation. That is, if z = x + iy, and if
{\displaystyle f(z)=u(x,y)+iv(x,y),}
then the necessary condition that f(z) be analytic is that u and v be differentiable and that the Cauchy–Riemann equations be satisfied:
{\displaystyle u_{x}=v_{y},\quad v_{x}=-u_{y}.}
where ux is the first partial derivative of u with respect to x.
It follows that
{\displaystyle u_{yy}=(-v_{x})_{y}=-(v_{y})_{x}=-(u_{x})_{x}.}
Therefore u satisfies the Laplace equation. A similar calculation shows that v also satisfies the Laplace equation.
Conversely, given a harmonic function, it is the real part of an analytic function, f(z) (at least locally). If a trial form is
{\displaystyle f(z)=\varphi (x,y)+i\psi (x,y),}
then the Cauchy–Riemann equations will be satisfied if we set
{\displaystyle \psi _{x}=-\varphi _{y},\quad \psi _{y}=\varphi _{x}.}
This relation does not determine ψ, but only its increments:
{\displaystyle d\psi =-\varphi _{y}\,dx+\varphi _{x}\,dy.}
The Laplace equation for φ implies that the integrability condition for ψ is satisfied:
{\displaystyle \psi _{xy}=\psi _{yx},}
and thus ψ may be defined by a line integral. The integrability condition and Stokes' theorem imply that the value of the line integral connecting two points is independent of the path. The resulting pair of solutions of the Laplace equation are called conjugate harmonic functions. This construction is only valid locally, or provided that the path does not loop around a singularity. For example, if r and θ are polar coordinates and
{\displaystyle \varphi =\log r,}
then a corresponding analytic function is
{\displaystyle f(z)=\log z=\log r+i\theta .}
However, the angle θ is single-valued only in a region that does not enclose the origin.
The close connection between the Laplace equation and analytic functions implies that any solution of the Laplace equation has derivatives of all orders, and can be expanded in a power series, at least inside a circle that does not enclose a singularity. This is in sharp contrast to solutions of the wave equation, which generally have less regularity.
There is an intimate connection between power series and Fourier series. If we expand a function f in a power series inside a circle of radius R, this means that
{\displaystyle f(z)=\sum _{n=0}^{\infty }c_{n}z^{n},}
with suitably defined coefficients whose real and imaginary parts are given by
{\displaystyle c_{n}=a_{n}+ib_{n}.}
Therefore
{\displaystyle f(z)=\sum _{n=0}^{\infty }\left[a_{n}r^{n}\cos n\theta -b_{n}r^{n}\sin n\theta \right]+i\sum _{n=1}^{\infty }\left[a_{n}r^{n}\sin n\theta +b_{n}r^{n}\cos n\theta \right],}
which is a Fourier series for f. These trigonometric functions can themselves be expanded, using multiple angle formulae.
=== Fluid flow ===
Let the quantities u and v be the horizontal and vertical components of the velocity field of a steady incompressible, irrotational flow in two dimensions. The continuity condition for an incompressible flow is that
{\displaystyle u_{x}+v_{y}=0,}
and the condition that the flow be irrotational is that
{\displaystyle \nabla \times \mathbf {V} =v_{x}-u_{y}=0.}
If we define the differential of a function ψ by
{\displaystyle d\psi =u\,dy-v\,dx,}
then the continuity condition is the integrability condition for this differential: the resulting function is called the stream function because it is constant along flow lines. The first derivatives of ψ are given by
{\displaystyle \psi _{x}=-v,\quad \psi _{y}=u,}
and the irrotationality condition implies that ψ satisfies the Laplace equation. The harmonic function φ that is conjugate to ψ is called the velocity potential. The Cauchy–Riemann equations imply that
{\displaystyle \varphi _{x}=\psi _{y}=u,\quad \varphi _{y}=-\psi _{x}=v.}
Thus every analytic function corresponds to a steady incompressible, irrotational, inviscid fluid flow in the plane. The real part is the velocity potential, and the imaginary part is the stream function.
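As a concrete illustration, the complex potential f(z) = z² describes flow in a right-angle corner. A symbolic check of the continuity, irrotationality, and stream-function relations (the choice of f is an arbitrary example):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
f = sp.expand(z**2)          # complex potential, an arbitrary analytic example
phi = sp.re(f)               # velocity potential
psi = sp.im(f)               # stream function

u = sp.diff(phi, x)          # horizontal velocity
v = sp.diff(phi, y)          # vertical velocity
print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))   # continuity: 0
print(sp.simplify(sp.diff(v, x) - sp.diff(u, y)))   # irrotationality: 0
print(sp.simplify(sp.diff(psi, y) - u),             # psi_y = u
      sp.simplify(sp.diff(psi, x) + v))             # psi_x = -v
```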
=== Electrostatics ===
According to Maxwell's equations, an electric field (u, v) in two space dimensions that is independent of time satisfies
{\displaystyle \nabla \times (u,v,0)=(v_{x}-u_{y}){\hat {\mathbf {k} }}=\mathbf {0} ,}
and
{\displaystyle \nabla \cdot (u,v)=\rho ,}
where ρ is the charge density. The first Maxwell equation is the integrability condition for the differential
{\displaystyle d\varphi =-u\,dx-v\,dy,}
so the electric potential φ may be constructed to satisfy
{\displaystyle \varphi _{x}=-u,\quad \varphi _{y}=-v.}
The second of Maxwell's equations then implies that
{\displaystyle \varphi _{xx}+\varphi _{yy}=-\rho ,}
which is the Poisson equation. The Laplace equation can be used in three-dimensional problems in electrostatics and fluid flow just as in two dimensions.
== In three dimensions ==
=== Fundamental solution ===
A fundamental solution of Laplace's equation satisfies
{\displaystyle \Delta u=u_{xx}+u_{yy}+u_{zz}=-\delta (x-x',y-y',z-z'),}
where the Dirac delta function δ denotes a unit source concentrated at the point (x′, y′, z′). No function has this property: in fact it is a distribution rather than a function; but it can be thought of as a limit of functions whose integrals over space are unity, and whose support (the region where the function is non-zero) shrinks to a point (see weak solution). It is common to take a different sign convention for this equation than one typically does when defining fundamental solutions. This choice of sign is often convenient to work with because −Δ is a positive operator. The definition of the fundamental solution thus implies that, if the Laplacian of u is integrated over any volume that encloses the source point, then
{\displaystyle \iiint _{V}\nabla \cdot \nabla u\,dV=-1.}
The Laplace equation is unchanged under a rotation of coordinates, and hence we can expect that a fundamental solution may be obtained among solutions that only depend upon the distance r from the source point. If we choose the volume to be a ball of radius a around the source point, then Gauss's divergence theorem implies that
{\displaystyle -1=\iiint _{V}\nabla \cdot \nabla u\,dV=\iint _{S}{\frac {du}{dr}}\,dS=\left.4\pi a^{2}{\frac {du}{dr}}\right|_{r=a}.}
It follows that
{\displaystyle {\frac {du}{dr}}=-{\frac {1}{4\pi r^{2}}},}
on a sphere of radius r that is centered on the source point, and hence
{\displaystyle u={\frac {1}{4\pi r}}.}
Note that, with the opposite sign convention (used in physics), this is the potential generated by a point particle, for an inverse-square law force, arising in the solution of Poisson equation. A similar argument shows that in two dimensions
{\displaystyle u=-{\frac {\log(r)}{2\pi }}.}
where log(r) denotes the natural logarithm. Note that, with the opposite sign convention, this is the potential generated by a pointlike sink (see point particle), which is the solution of the Euler equations in two-dimensional incompressible flow.
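Both fundamental solutions can be checked to be harmonic away from the source point by direct differentiation, for instance symbolically:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True, positive=True)

r3 = sp.sqrt(x**2 + y**2 + z**2)
u3 = 1 / (4 * sp.pi * r3)                       # 3D fundamental solution
lap3 = sum(sp.diff(u3, s, 2) for s in (x, y, z))
print(sp.simplify(lap3))                        # 0 away from the origin

r2 = sp.sqrt(x**2 + y**2)
u2 = -sp.log(r2) / (2 * sp.pi)                  # 2D fundamental solution
print(sp.simplify(sp.diff(u2, x, 2) + sp.diff(u2, y, 2)))   # 0
```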
=== Green's function ===
A Green's function is a fundamental solution that also satisfies a suitable condition on the boundary S of a volume V. For instance,
{\displaystyle G(x,y,z;x',y',z')}
may satisfy
{\displaystyle \nabla \cdot \nabla G=-\delta (x-x',y-y',z-z')\qquad {\text{in }}V,}
{\displaystyle G=0\quad {\text{if}}\quad (x,y,z)\qquad {\text{on }}S.}
Now if u is any solution of the Poisson equation in V:
{\displaystyle \nabla \cdot \nabla u=-f,}
and u assumes the boundary values g on S, then we may apply Green's identity (a consequence of the divergence theorem), which states that
{\displaystyle \iiint _{V}\left[G\,\nabla \cdot \nabla u-u\,\nabla \cdot \nabla G\right]\,dV=\iiint _{V}\nabla \cdot \left[G\nabla u-u\nabla G\right]\,dV=\iint _{S}\left[Gu_{n}-uG_{n}\right]\,dS.}
The notations un and Gn denote normal derivatives on S. In view of the conditions satisfied by u and G, this result simplifies to
{\displaystyle u(x',y',z')=\iiint _{V}Gf\,dV-\iint _{S}G_{n}g\,dS.}
Thus the Green's function describes the influence at (x′, y′, z′) of the data f and g. For the case of the interior of a sphere of radius a, the Green's function may be obtained by means of a reflection (Sommerfeld 1949): the source point P at distance ρ from the center of the sphere is reflected along its radial line to a point P' that is at a distance
{\displaystyle \rho '={\frac {a^{2}}{\rho }}.}
Note that if P is inside the sphere, then P′ will be outside the sphere. The Green's function is then given by
{\displaystyle {\frac {1}{4\pi R}}-{\frac {a}{4\pi \rho R'}},}
where R denotes the distance to the source point P and R′ denotes the distance to the reflected point P′. A consequence of this expression for the Green's function is the Poisson integral formula. Let ρ, θ, and φ be spherical coordinates for the source point P. Here θ denotes the angle with the vertical axis, which is contrary to the usual American mathematical notation, but agrees with standard European and physical practice. Then the solution of the Laplace equation with Dirichlet boundary values g inside the sphere is given by (Zachmanoglou & Thoe 1986, p. 228)
{\displaystyle u(P)={\frac {1}{4\pi }}a^{3}\left(1-{\frac {\rho ^{2}}{a^{2}}}\right)\int _{0}^{2\pi }\int _{0}^{\pi }{\frac {g(\theta ',\varphi ')\sin \theta '}{(a^{2}+\rho ^{2}-2a\rho \cos \Theta )^{\frac {3}{2}}}}d\theta '\,d\varphi '}
where
{\displaystyle \cos \Theta =\cos \theta \cos \theta '+\sin \theta \sin \theta '\cos(\varphi -\varphi ')}
is the cosine of the angle between (θ, φ) and (θ′, φ′). A simple consequence of this formula is that if u is a harmonic function, then the value of u at the center of the sphere is the mean value of its values on the sphere. This mean value property immediately implies that a non-constant harmonic function cannot assume its maximum value at an interior point.
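The mean value property is easy to test numerically: average a harmonic function over a sphere and compare with its value at the center. A Monte Carlo sketch (the function x² − y², the center, and the radius are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
u = lambda p: p[..., 0]**2 - p[..., 1]**2    # harmonic in R^3

c = np.array([1.0, 2.0, 0.5])                # sphere center
a = 0.7                                      # sphere radius
v = rng.standard_normal((200_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform directions

print(u(c + a * v).mean(), u(c))             # both ≈ -3.0
```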
=== Laplace's spherical harmonics ===
Laplace's equation in spherical coordinates is:
{\displaystyle \nabla ^{2}f={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}=0.}
Consider the problem of finding solutions of the form f(r, θ, φ) = R(r) Y(θ, φ). By separation of variables, two differential equations result by imposing Laplace's equation:
{\displaystyle {\frac {1}{R}}{\frac {d}{dr}}\left(r^{2}{\frac {dR}{dr}}\right)=\lambda ,\qquad {\frac {1}{Y}}{\frac {1}{\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial Y}{\partial \theta }}\right)+{\frac {1}{Y}}{\frac {1}{\sin ^{2}\theta }}{\frac {\partial ^{2}Y}{\partial \varphi ^{2}}}=-\lambda .}
The second equation can be simplified under the assumption that Y has the form Y(θ, φ) = Θ(θ) Φ(φ). Applying separation of variables again to the second equation yields the pair of differential equations
{\displaystyle {\frac {1}{\Phi }}{\frac {d^{2}\Phi }{d\varphi ^{2}}}=-m^{2}}
{\displaystyle \lambda \sin ^{2}\theta +{\frac {\sin \theta }{\Theta }}{\frac {d}{d\theta }}\left(\sin \theta {\frac {d\Theta }{d\theta }}\right)=m^{2}}
for some number m. A priori, m is a complex constant, but because Φ must be a periodic function whose period evenly divides 2π, m is necessarily an integer and Φ is a linear combination of the complex exponentials e±imφ. The solution function Y(θ, φ) is regular at the poles of the sphere, where θ = 0, π. Imposing this regularity in the solution Θ of the second equation at the boundary points of the domain is a Sturm–Liouville problem that forces the parameter λ to be of the form λ = ℓ (ℓ + 1) for some non-negative integer with ℓ ≥ |m|; this is also explained below in terms of the orbital angular momentum. Furthermore, a change of variables t = cos θ transforms this equation into the Legendre equation, whose solution is a multiple of the associated Legendre polynomial Pℓm(cos θ) . Finally, the equation for R has solutions of the form R(r) = A rℓ + B r−ℓ − 1; requiring the solution to be regular throughout R3 forces B = 0.
Here the solution was assumed to have the special form Y(θ, φ) = Θ(θ) Φ(φ). For a given value of ℓ, there are 2ℓ + 1 independent solutions of this form, one for each integer m with −ℓ ≤ m ≤ ℓ. These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials:
{\displaystyle Y_{\ell }^{m}(\theta ,\varphi )=Ne^{im\varphi }P_{\ell }^{m}(\cos {\theta })}
which fulfill
{\displaystyle r^{2}\nabla ^{2}Y_{\ell }^{m}(\theta ,\varphi )=-\ell (\ell +1)Y_{\ell }^{m}(\theta ,\varphi ).}
Here Yℓm is called a spherical harmonic function of degree ℓ and order m, Pℓm is an associated Legendre polynomial, N is a normalization constant, and θ and φ represent colatitude and longitude, respectively. In particular, the colatitude θ, or polar angle, ranges from 0 at the North Pole, to π/2 at the Equator, to π at the South Pole, and the longitude φ, or azimuth, may assume all values with 0 ≤ φ < 2π. For a fixed integer ℓ, every solution Y(θ, φ) of the eigenvalue problem
{\displaystyle r^{2}\nabla ^{2}Y=-\ell (\ell +1)Y}
is a linear combination of Yℓm. In fact, for any such solution, rℓ Y(θ, φ) is the expression in spherical coordinates of a homogeneous polynomial that is harmonic (see below), and so counting dimensions shows that there are 2ℓ + 1 linearly independent such polynomials.
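For small ℓ these harmonic homogeneous polynomials can be written down explicitly and checked symbolically; up to normalization they are the solid harmonics rℓ Yℓm (the listed polynomials are representatives for ℓ = 1, 2):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
lap = lambda f: sum(sp.diff(f, s, 2) for s in (x, y, z))

# Homogeneous harmonic polynomials proportional to r^l * Y_l^m:
solid = [z, x + sp.I*y,                 # l = 1
         2*z**2 - x**2 - y**2,          # l = 2, m = 0
         z*(x + sp.I*y), (x + sp.I*y)**2]
print([sp.simplify(lap(p)) for p in solid])   # all 0
```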
The general solution to Laplace's equation in a ball centered at the origin is a linear combination of the spherical harmonic functions multiplied by the appropriate scale factor rℓ,
{\displaystyle f(r,\theta ,\varphi )=\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }f_{\ell }^{m}r^{\ell }Y_{\ell }^{m}(\theta ,\varphi ),}
where the fℓm are constants and the factors rℓ Yℓm are known as solid harmonics. Such an expansion is valid in the ball
{\displaystyle r<R={\frac {1}{\limsup _{\ell \to \infty }|f_{\ell }^{m}|^{{1}/{\ell }}}}.}
For {\displaystyle r>R}, the solid harmonics with negative powers of {\displaystyle r} are chosen instead. In that case, one needs to expand the solution of known regions in Laurent series (about {\displaystyle r=\infty }), instead of Taylor series (about {\displaystyle r=0}), to match the terms and find {\displaystyle f_{\ell }^{m}}.
=== Electrostatics and magnetostatics ===
Let {\displaystyle \mathbf {E} } be the electric field, {\displaystyle \rho } be the electric charge density, and {\displaystyle \varepsilon _{0}} be the permittivity of free space. Then Gauss's law for electricity (Maxwell's first equation) in differential form states
{\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho }{\varepsilon _{0}}}.}
Now, the electric field can be expressed as the negative gradient of the electric potential {\displaystyle V},
{\displaystyle \mathbf {E} =-\nabla V,}
if the field is irrotational, {\displaystyle \nabla \times \mathbf {E} =\mathbf {0} }. The irrotationality of {\displaystyle \mathbf {E} } is also known as the electrostatic condition.
{\displaystyle \nabla \cdot \mathbf {E} =\nabla \cdot (-\nabla V)=-\nabla ^{2}V}
{\displaystyle \nabla ^{2}V=-\nabla \cdot \mathbf {E} }
Plugging this relation into Gauss's law, we obtain Poisson's equation for electricity,
{\displaystyle \nabla ^{2}V=-{\frac {\rho }{\varepsilon _{0}}}.}
In the particular case of a source-free region, {\displaystyle \rho =0} and Poisson's equation reduces to Laplace's equation for the electric potential.
If the electrostatic potential {\displaystyle V} is specified on the boundary of a region {\displaystyle {\mathcal {R}}}, then it is uniquely determined. If {\displaystyle {\mathcal {R}}} is surrounded by a conducting material with a specified charge density {\displaystyle \rho }, and if the total charge {\displaystyle Q} is known, then {\displaystyle V} is also unique.
For the magnetic field, when there is no free current,
{\displaystyle \nabla \times \mathbf {H} =\mathbf {0} .}
We can thus define a magnetic scalar potential, ψ, as
{\displaystyle \mathbf {H} =-\nabla \psi .}
With the definition of H:
{\displaystyle \nabla \cdot \mathbf {B} =\mu _{0}\nabla \cdot \left(\mathbf {H} +\mathbf {M} \right)=0,}
it follows that
{\displaystyle \nabla ^{2}\psi =-\nabla \cdot \mathbf {H} =\nabla \cdot \mathbf {M} .}
Similar to electrostatics, in a source-free region {\displaystyle \mathbf {M} =0} and Poisson's equation reduces to Laplace's equation for the magnetic scalar potential,
{\displaystyle \nabla ^{2}\psi =0.}
A potential that does not satisfy Laplace's equation together with the boundary condition is an invalid electrostatic or magnetic scalar potential.
== Gravitation ==
Let {\displaystyle \mathbf {g} } be the gravitational field, {\displaystyle \rho } the mass density, and {\displaystyle G} the gravitational constant. Then Gauss's law for gravitation in differential form is
{\displaystyle \nabla \cdot \mathbf {g} =-4\pi G\rho .}
The gravitational field is conservative and can therefore be expressed as the negative gradient of the gravitational potential:
{\displaystyle {\begin{aligned}\mathbf {g} &=-\nabla V,\\\nabla \cdot \mathbf {g} &=\nabla \cdot (-\nabla V)=-\nabla ^{2}V,\\\implies \nabla ^{2}V&=-\nabla \cdot \mathbf {g} .\end{aligned}}}
Using the differential form of Gauss's law of gravitation, we have
{\displaystyle \nabla ^{2}V=4\pi G\rho ,}
which is Poisson's equation for gravitational fields.
In empty space, {\displaystyle \rho =0} and we have
{\displaystyle \nabla ^{2}V=0,}
which is Laplace's equation for gravitational fields.
== In the Schwarzschild metric ==
S. Persides solved the Laplace equation in Schwarzschild spacetime on hypersurfaces of constant t. Using the canonical variables r, θ, φ the solution is
{\displaystyle \Psi (r,\theta ,\varphi )=R(r)Y_{l}(\theta ,\varphi ),}
where Yl(θ, φ) is a spherical harmonic function, and
{\displaystyle R(r)=(-1)^{l}{\frac {(l!)^{2}r_{s}^{l}}{(2l)!}}P_{l}\left(1-{\frac {2r}{r_{s}}}\right)+(-1)^{l+1}{\frac {2(2l+1)!}{(l)!^{2}r_{s}^{l+1}}}Q_{l}\left(1-{\frac {2r}{r_{s}}}\right).}
Here Pl and Ql are Legendre functions of the first and second kind, respectively, while rs is the Schwarzschild radius. The parameter l is an arbitrary non-negative integer.
== See also ==
6-sphere coordinates, a coordinate system under which Laplace's equation becomes R-separable
Helmholtz equation, a generalization of Laplace's equation
Spherical harmonic
Quadrature domains
Potential theory
Potential flow
Bateman transform
Earnshaw's theorem uses the Laplace equation to show that stable static ferromagnetic suspension is impossible
Vector Laplacian
Fundamental solution
== Notes ==
== References ==
== Sources ==
Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume I, Wiley-Interscience.
Sommerfeld, A. (1949). Partial Differential Equations in Physics. New York: Academic Press. Bibcode:1949pdep.book.....S.
Zachmanoglou, E. C.; Thoe, Dale W. (1986). Introduction to Partial Differential Equations with Applications. New York: Dover. ISBN 9780486652511.
== Further reading ==
Evans, L. C. (1998). Partial Differential Equations. Providence: American Mathematical Society. ISBN 978-0-8218-0772-9.
Petrovsky, I. G. (1967). Partial Differential Equations. Philadelphia: W. B. Saunders.
Polyanin, A. D. (2002). Handbook of Linear Partial Differential Equations for Engineers and Scientists. Boca Raton: Chapman & Hall/CRC Press. ISBN 978-1-58488-299-2.
== External links ==
"Laplace equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Laplace Equation (particular solutions and boundary value problems) at EqWorld: The World of Mathematical Equations.
Example initial-boundary value problems using Laplace's equation from exampleproblems.com.
Weisstein, Eric W. "Laplace's Equation". MathWorld.
Find out how boundary value problems governed by Laplace's equation may be solved numerically by boundary element method
In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time domain (if applicable) are discretized, or broken into a finite number of intervals, and the values of the solution at the end points of the intervals are approximated by solving algebraic equations containing finite differences and values from nearby points.
Finite difference methods convert ordinary differential equations (ODE) or partial differential equations (PDE), which may be nonlinear, into a system of linear equations that can be solved by matrix algebra techniques. Modern computers can perform these linear algebra computations efficiently, and this, along with their relative ease of implementation, has led to the widespread use of FDM in modern numerical analysis.
Today, FDMs are one of the most common approaches to the numerical solution of PDE, along with finite element methods.
== Derivation from Taylor's polynomial ==
For an n-times differentiable function, by Taylor's theorem the Taylor series expansion is given as
{\displaystyle f(x_{0}+h)=f(x_{0})+{\frac {f'(x_{0})}{1!}}h+{\frac {f^{(2)}(x_{0})}{2!}}h^{2}+\cdots +{\frac {f^{(n)}(x_{0})}{n!}}h^{n}+R_{n}(x),}
where n! denotes the factorial of n, and Rn(x) is a remainder term, denoting the difference between the Taylor polynomial of degree n and the original function.
An approximation for the first derivative of the function f can be derived as follows, by first truncating the Taylor polynomial plus remainder:
{\displaystyle f(x_{0}+h)=f(x_{0})+f'(x_{0})h+R_{1}(x).}
Dividing across by h gives:
{\displaystyle {f(x_{0}+h) \over h}={f(x_{0}) \over h}+f'(x_{0})+{R_{1}(x) \over h}}
Solving for {\displaystyle f'(x_{0})}:
{\displaystyle f'(x_{0})={f(x_{0}+h)-f(x_{0}) \over h}-{R_{1}(x) \over h}.}
Assuming that {\displaystyle R_{1}(x)} is sufficiently small, the approximation of the first derivative of f is:
{\displaystyle f'(x_{0})\approx {f(x_{0}+h)-f(x_{0}) \over h}.}
This is similar to the definition of derivative, which is:
{\displaystyle f'(x_{0})=\lim _{h\to 0}{\frac {f(x_{0}+h)-f(x_{0})}{h}}.}
except that the limit towards zero is not taken: h remains a finite difference, which is what gives the method its name.
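A short numerical experiment makes the approximation and its first-order accuracy visible (sin and the point x₀ = 1 are arbitrary choices):

```python
import math

f, fprime = math.sin, math.cos
x0 = 1.0
for h in (0.1, 0.01, 0.001):
    approx = (f(x0 + h) - f(x0)) / h
    print(h, abs(approx - fprime(x0)))   # error shrinks ~linearly in h
```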
== Accuracy and order ==
The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. The two sources of error in finite difference methods are round-off error, the loss of precision due to computer rounding of decimal quantities, and truncation error or discretization error, the difference between the exact solution of the original differential equation and the exact quantity assuming perfect arithmetic (no round-off).
To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain. This is usually done by dividing the domain into a uniform grid (see image). This means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a "time-stepping" manner.
An expression of general interest is the local truncation error of a method. Typically expressed using Big-O notation, local truncation error refers to the error from a single application of a method. That is, it is the quantity
{\displaystyle f'(x_{i})-f'_{i}}
if {\displaystyle f'(x_{i})} refers to the exact value and {\displaystyle f'_{i}} to the numerical approximation. The remainder term of the Taylor polynomial can be used to analyze local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for {\displaystyle f(x_{0}+h)}, which is
{\displaystyle R_{n}(x_{0}+h)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}(h)^{n+1}\,,\quad x_{0}<\xi <x_{0}+h,}
the dominant term of the local truncation error can be discovered. For example, again using the forward-difference formula for the first derivative, knowing that {\displaystyle f(x_{i})=f(x_{0}+ih)},
{\displaystyle f(x_{0}+ih)=f(x_{0})+f'(x_{0})ih+{\frac {f''(\xi )}{2!}}(ih)^{2},}
and with some algebraic manipulation, this leads to
{\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+{\frac {f''(\xi )}{2!}}ih,}
Noting that the quantity on the left is the approximation from the finite difference method and that the quantity on the right is the exact quantity of interest plus a remainder, that remainder is precisely the local truncation error. A final expression of this example and its order is:
{\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+O(h).}
In this case, the local truncation error is proportional to the step size. The quality and duration of a simulated FDM solution depend on the choice of discretization equation and the step sizes (time and space steps). The data quality improves and the simulation duration increases significantly with smaller step sizes, so a reasonable balance between data quality and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice; however, time steps which are too large may create instabilities and degrade the data quality.
The von Neumann and Courant–Friedrichs–Lewy criteria are often evaluated to determine numerical model stability.
== Example: ordinary differential equation ==
For example, consider the ordinary differential equation
{\displaystyle u'(x)=3u(x)+2.}
The Euler method for solving this equation uses the finite difference quotient
{\displaystyle {\frac {u(x+h)-u(x)}{h}}\approx u'(x)}
to approximate the differential equation by first substituting it for u'(x) then applying a little algebra (multiplying both sides by h, and then adding u(x) to both sides) to get
{\displaystyle u(x+h)\approx u(x)+h(3u(x)+2).}
The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation.
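A minimal sketch of this Euler iteration, compared against the exact solution u(x) = (u(0) + 2/3)e^(3x) − 2/3 (the initial value and endpoint are arbitrary choices):

```python
import math

def euler(u0, h, steps):
    u = u0
    for _ in range(steps):
        u = u + h * (3 * u + 2)     # u(x+h) ≈ u(x) + h*(3u(x) + 2)
    return u

u0, X = 1.0, 1.0
exact = (u0 + 2/3) * math.exp(3 * X) - 2/3
for n in (10, 100, 1000):
    print(n, abs(euler(u0, X / n, n) - exact))   # error shrinks ~like 1/n
```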
== Example: The heat equation ==
Consider the normalized heat equation in one dimension, with homogeneous Dirichlet boundary conditions
{\displaystyle {\begin{cases}U_{t}=U_{xx}\\U(0,t)=U(1,t)=0&{\text{(boundary condition)}}\\U(x,0)=U_{0}(x)&{\text{(initial condition)}}\end{cases}}}
One way to numerically solve this equation is to approximate all the derivatives by finite differences. First partition the domain in space using a mesh {\displaystyle x_{0},\dots ,x_{J}} and in time using a mesh {\displaystyle t_{0},\dots ,t_{N}}. Assume a uniform partition both in space and in time, so the difference between two consecutive space points will be h and between two consecutive time points will be k. The numbers
{\displaystyle u_{j}^{n}}
will represent the numerical approximation of
{\displaystyle u(x_{j},t_{n}).}
=== Explicit method ===
Using a forward difference at time {\displaystyle t_{n}} and a second-order central difference for the space derivative at position {\displaystyle x_{j}} (FTCS) gives the recurrence equation:
{\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}.}
This is an explicit method for solving the one-dimensional heat equation.
One can obtain {\displaystyle u_{j}^{n+1}} from the other values this way:
{\displaystyle u_{j}^{n+1}=(1-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}}
where {\displaystyle r=k/h^{2}.}
So, with this recurrence relation, and knowing the values at time n, one can obtain the corresponding values at time n+1.
{\displaystyle u_{0}^{n}} and {\displaystyle u_{J}^{n}} must be replaced by the boundary conditions; in this example they are both 0.
This explicit method is known to be numerically stable and convergent whenever {\displaystyle r\leq 1/2}. The numerical errors are proportional to the time step and the square of the space step:
{\displaystyle \Delta u=O(k)+O(h^{2})}
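A compact sketch of the explicit scheme for the normalized problem with U₀(x) = sin(πx), whose exact solution is e^(−π²t) sin(πx) (the grid size, r, and final time are arbitrary choices):

```python
import numpy as np

J, r = 50, 0.4                       # r = k/h^2 <= 1/2 for stability
h = 1.0 / J
k = r * h**2
x = np.linspace(0.0, 1.0, J + 1)
u = np.sin(np.pi * x)                # initial condition

t, T = 0.0, 0.1
while t < T:
    # FTCS update of the interior points from the values at time n
    u[1:-1] = (1 - 2*r) * u[1:-1] + r * u[:-2] + r * u[2:]
    u[0] = u[-1] = 0.0               # Dirichlet boundary conditions
    t += k

exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
print(np.abs(u - exact).max())       # small discretization error
```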
=== Implicit method ===
Using the backward difference at time {\displaystyle t_{n+1}} and a second-order central difference for the space derivative at position {\displaystyle x_{j}} (the backward time, centered space method "BTCS") gives the recurrence equation:
{\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}.}
This is an implicit method for solving the one-dimensional heat equation.
One can obtain {\displaystyle u_{j}^{n+1}} from solving a system of linear equations:
{\displaystyle (1+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=u_{j}^{n}}
The scheme is always numerically stable and convergent but usually more numerically intensive than the explicit method as it requires solving a system of numerical equations on each time step. The errors are linear over the time step and quadratic over the space step:
{\displaystyle \Delta u=O(k)+O(h^{2}).}
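A sketch of the implicit scheme; each step solves the tridiagonal system above for the interior values (assembled here as a dense matrix only for brevity):

```python
import numpy as np

J, k = 50, 1e-3
h = 1.0 / J
r = k / h**2
x = np.linspace(0.0, 1.0, J + 1)
u = np.sin(np.pi * x)                # initial condition, boundaries stay 0

# (1+2r) on the diagonal, -r on the off-diagonals (interior points only)
A = ((1 + 2*r) * np.eye(J - 1)
     - r * np.eye(J - 1, k=1)
     - r * np.eye(J - 1, k=-1))

steps = 100
for _ in range(steps):
    u[1:-1] = np.linalg.solve(A, u[1:-1])

exact = np.exp(-np.pi**2 * (steps * k)) * np.sin(np.pi * x)
print(np.abs(u - exact).max())
```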
=== Crank–Nicolson method ===
Finally, using the central difference at time {\displaystyle t_{n+1/2}} and a second-order central difference for the space derivative at position {\displaystyle x_{j}} ("CTCS") gives the recurrence equation:
{\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {1}{2}}\left({\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}+{\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}\right).}
This formula is known as the Crank–Nicolson method.
One can obtain {\displaystyle u_{j}^{n+1}} from solving a system of linear equations:
{\displaystyle (2+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=(2-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}}
The scheme is always numerically stable and convergent but usually more numerically intensive as it requires solving a system of numerical equations on each time step. The errors are quadratic over both the time step and the space step:
{\displaystyle \Delta u=O(k^{2})+O(h^{2}).}
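The Crank–Nicolson step can be sketched the same way, with a matrix on each side of the update (again dense matrices only for brevity):

```python
import numpy as np

J, k = 50, 1e-3
h = 1.0 / J
r = k / h**2
x = np.linspace(0.0, 1.0, J + 1)
u = np.sin(np.pi * x)

I_ = np.eye(J - 1)
L = np.eye(J - 1, k=1) + np.eye(J - 1, k=-1)   # neighbor-sum operator
A = (2 + 2*r) * I_ - r * L                     # implicit (left) side
B = (2 - 2*r) * I_ + r * L                     # explicit (right) side

steps = 100
for _ in range(steps):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

exact = np.exp(-np.pi**2 * (steps * k)) * np.sin(np.pi * x)
print(np.abs(u - exact).max())
```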
=== Comparison ===
To summarize, usually the Crank–Nicolson scheme is the most accurate scheme for small time steps. For larger time steps, the implicit scheme works better since it is less computationally demanding. The explicit scheme is the least accurate and can be unstable, but is also the easiest to implement and the least numerically intensive.
Here is an example. The above methods can be compared on the heat equation
{\displaystyle U_{t}=\alpha U_{xx},\quad \alpha ={\frac {1}{\pi ^{2}}},}
with the boundary condition
{\displaystyle U(0,t)=U(1,t)=0.}
The exact solution is
{\displaystyle U(x,t)={\frac {1}{\pi ^{2}}}e^{-t}\sin(\pi x).}
== Example: The Laplace operator ==
The (continuous) Laplace operator in {\displaystyle n}-dimensions is given by
{\displaystyle \Delta u(x)=\sum _{i=1}^{n}\partial _{i}^{2}u(x).}
The discrete Laplace operator {\displaystyle \Delta _{h}u} depends on the dimension {\displaystyle n}.
In 1D the Laplace operator is approximated as
{\displaystyle \Delta u(x)=u''(x)\approx {\frac {u(x-h)-2u(x)+u(x+h)}{h^{2}}}=:\Delta _{h}u(x)\,.}
This approximation is usually expressed via the following stencil
{\displaystyle \Delta _{h}={\frac {1}{h^{2}}}{\begin{bmatrix}1&-2&1\end{bmatrix}},}
which represents a symmetric, tridiagonal matrix.
For an equidistant grid one gets a Toeplitz matrix.
The 2D case shows all the characteristics of the more general n-dimensional case. Each second partial derivative needs to be approximated similarly to the 1D case:
{\displaystyle {\begin{aligned}\Delta u(x,y)&=u_{xx}(x,y)+u_{yy}(x,y)\\&\approx {\frac {u(x-h,y)-2u(x,y)+u(x+h,y)}{h^{2}}}+{\frac {u(x,y-h)-2u(x,y)+u(x,y+h)}{h^{2}}}\\&={\frac {u(x-h,y)+u(x+h,y)-4u(x,y)+u(x,y-h)+u(x,y+h)}{h^{2}}}\\&=:\Delta _{h}u(x,y)\,,\end{aligned}}}
which is usually given by the following stencil
{\displaystyle \Delta _{h}={\frac {1}{h^{2}}}{\begin{bmatrix}&1\\1&-4&1\\&1\end{bmatrix}}\,.}
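As an illustration (not from the original article; the grid size and function names are arbitrary), this five-point stencil can be assembled as a sparse matrix using the standard Kronecker-product construction from the 1D tridiagonal operator:

```python
import numpy as np
import scipy.sparse as sp

def laplacian_2d(N, h):
    """Five-point discrete Laplacian on an N x N interior grid with
    spacing h and homogeneous Dirichlet boundary conditions."""
    ones = np.ones(N)
    # 1D operator: the (1, -2, 1) / h^2 stencil as a tridiagonal matrix
    D = sp.diags([ones[:-1], -2.0 * ones, ones[:-1]], [-1, 0, 1]) / h**2
    I = sp.identity(N)
    # Kronecker sum: second differences along x plus second differences along y
    return sp.kron(I, D) + sp.kron(D, I)

L = laplacian_2d(3, 1.0)
print(L.toarray())   # each row carries the (1, 1, -4, 1, 1) / h^2 stencil entries
```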
=== Consistency ===
Consistency of the above-mentioned approximation can be shown for highly regular functions, such as u ∈ C^{4}(Ω).
The statement is
{\displaystyle \Delta u-\Delta _{h}u={\mathcal {O}}(h^{2})\,.}
To prove this, one needs to substitute Taylor series expansions up to order 3 into the discrete Laplace operator.
=== Properties ===
==== Subharmonic ====
Similarly to continuous subharmonic functions, one can define subharmonic functions for finite-difference approximations u_h:
{\displaystyle -\Delta _{h}u_{h}\leq 0\,.}
==== Mean value ====
One can define a general stencil of positive type via
{\displaystyle {\begin{bmatrix}&\alpha _{N}\\\alpha _{W}&-\alpha _{C}&\alpha _{E}\\&\alpha _{S}\end{bmatrix}}\,,\quad \alpha _{i}>0\,,\quad \alpha _{C}=\sum _{i\in \{N,E,S,W\}}\alpha _{i}\,.}
If u_h is (discrete) subharmonic, then the following mean value property holds:
{\displaystyle u_{h}(x_{C})\leq {\frac {\sum _{i\in \{N,E,S,W\}}\alpha _{i}u_{h}(x_{i})}{\sum _{i\in \{N,E,S,W\}}\alpha _{i}}}\,,}
where the approximation is evaluated on points of the grid, and the stencil is assumed to be of positive type.
A similar mean value property also holds for the continuous case.
==== Maximum principle ====
For a (discrete) subharmonic function u_h the following holds:
{\displaystyle \max _{\Omega _{h}}u_{h}\leq \max _{\partial \Omega _{h}}u_{h}\,,}
where Ω_h and ∂Ω_h are discretizations of the continuous domain Ω and of its boundary ∂Ω, respectively.
A similar maximum principle also holds for the continuous case.
== The SBP-SAT method ==
The SBP-SAT (summation by parts – simultaneous approximation term) method is a stable and accurate technique for discretizing and imposing boundary conditions of a well-posed partial differential equation using high-order finite differences.
The method is based on finite differences where the differentiation operators exhibit summation-by-parts properties. Typically, these operators consist of differentiation matrices with central difference stencils in the interior and carefully chosen one-sided boundary stencils designed to mimic integration by parts in the discrete setting. Using the SAT technique, the boundary conditions of the PDE are imposed weakly, where the boundary values are "pulled" towards the desired conditions rather than exactly fulfilled. If the tuning parameters (inherent to the SAT technique) are chosen properly, the resulting system of ODEs will exhibit similar energy behavior to the continuous PDE, i.e. the system has no non-physical energy growth. This guarantees stability if an integration scheme with a stability region that includes parts of the imaginary axis, such as the fourth-order Runge–Kutta method, is used. This makes the SAT technique an attractive method of imposing boundary conditions for higher-order finite difference methods, in contrast to, for example, the injection method, which typically will not be stable if high-order differentiation operators are used.
== See also ==
== References ==
== Further reading ==
K.W. Morton and D.F. Mayers, Numerical Solution of Partial Differential Equations, An Introduction. Cambridge University Press, 2005.
Autar Kaw and E. Eric Kalu, Numerical Methods with Applications (2008). Contains a brief, engineering-oriented introduction to FDM (for ODEs) in Chapter 08.07.
John Strikwerda (2004). Finite Difference Schemes and Partial Differential Equations (2nd ed.). SIAM. ISBN 978-0-89871-639-9.
Smith, G. D. (1985), Numerical Solution of Partial Differential Equations: Finite Difference Methods, 3rd ed., Oxford University Press
Peter Olver (2013). Introduction to Partial Differential Equations. Springer. Chapter 5: Finite differences. ISBN 978-3-319-02099-0.
Randall J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations, SIAM, 2007.
Sergey Lemeshevsky, Piotr Matus, Dmitriy Poliakov (Eds.): "Exact Finite-Difference Schemes", De Gruyter (2016). DOI: https://doi.org/10.1515/9783110491326.
Mikhail Shashkov: Conservative Finite-Difference Methods on General Grids, CRC Press, ISBN 0-8493-7375-1 (1996). | Wikipedia/Finite_difference_method |
The finite volume method (FVM) is a method for representing and evaluating partial differential equations in the form of algebraic equations.
In the finite volume method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals, using the divergence theorem.
These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods are conservative. Another advantage of the finite volume method is that it is easily formulated to allow for unstructured meshes. The method is used in many computational fluid dynamics packages.
"Finite volume" refers to the small volume surrounding each node point on a mesh.
Finite volume methods can be compared and contrasted with finite difference methods, which approximate derivatives using nodal values, or finite element methods, which create local approximations of a solution using local data, and construct a global approximation by stitching them together. In contrast, a finite volume method evaluates exact expressions for the average value of the solution over some volume, and uses this data to construct approximations of the solution within cells.
== Example ==
Consider a simple 1D advection problem:
{\displaystyle {\frac {\partial \rho }{\partial t}}+{\frac {\partial f}{\partial x}}=0.\qquad (1)}
Here, ρ = ρ(x,t) represents the state variable and f = f(ρ(x,t)) represents the flux or flow of ρ. Conventionally, positive f represents flow to the right while negative f represents flow to the left. If we assume that equation (1) represents a flowing medium of constant area, we can sub-divide the spatial domain, x, into finite volumes or cells with cell centers indexed as i. For a particular cell, i, we can define the volume average value of ρ(x,t) at time t = t_1 and x ∈ [x_{i-1/2}, x_{i+1/2}], as
{\displaystyle \rho _{i}(t_{1})={\frac {1}{x_{i+1/2}-x_{i-1/2}}}\int _{x_{i-1/2}}^{x_{i+1/2}}\rho (x,t_{1})\,dx,\qquad (2)}
and at time t = t_2 as
{\displaystyle \rho _{i}(t_{2})={\frac {1}{x_{i+1/2}-x_{i-1/2}}}\int _{x_{i-1/2}}^{x_{i+1/2}}\rho (x,t_{2})\,dx,\qquad (3)}
where x_{i-1/2} and x_{i+1/2} represent the locations of the upstream and downstream faces or edges, respectively, of the i-th cell.
Integrating equation (1) in time, we have:
{\displaystyle \rho (x,t_{2})=\rho (x,t_{1})-\int _{t_{1}}^{t_{2}}f_{x}(x,t)\,dt,\qquad (4)}
where f_x = ∂f/∂x.
To obtain the volume average of ρ(x,t) at time t = t_2, we integrate ρ(x,t_2) over the cell volume, [x_{i-1/2}, x_{i+1/2}], and divide the result by Δx_i = x_{i+1/2} - x_{i-1/2}, i.e.
{\displaystyle \rho _{i}(t_{2})={\frac {1}{\Delta x_{i}}}\int _{x_{i-1/2}}^{x_{i+1/2}}\left[\rho (x,t_{1})-\int _{t_{1}}^{t_{2}}f_{x}(x,t)\,dt\right]dx.\qquad (5)}
We assume that f is well behaved and that we can reverse the order of integration. Also, recall that flow is normal to the unit area of the cell. Now, since in one dimension f_x ≜ ∇·f, we can apply the divergence theorem, i.e. {\displaystyle \oint _{v}\nabla \cdot f\,dv=\oint _{S}f\,dS}, and substitute for the volume integral of the divergence with the values of f(x) evaluated at the cell surface (edges x_{i-1/2} and x_{i+1/2}) of the finite volume as follows:
{\displaystyle \rho _{i}(t_{2})=\rho _{i}(t_{1})-{\frac {1}{\Delta x_{i}}}\left(\int _{t_{1}}^{t_{2}}f_{i+1/2}\,dt-\int _{t_{1}}^{t_{2}}f_{i-1/2}\,dt\right),\qquad (6)}
where f_{i±1/2} = f(x_{i±1/2}, t).
We can therefore derive a semi-discrete numerical scheme for the above problem with cell centers indexed as i, and with cell edge fluxes indexed as i ± 1/2, by differentiating (6) with respect to time to obtain:
{\displaystyle {\frac {d\rho _{i}}{dt}}+{\frac {1}{\Delta x_{i}}}\left[f_{i+1/2}-f_{i-1/2}\right]=0,\qquad (7)}
where values for the edge fluxes, f_{i±1/2}, can be reconstructed by interpolation or extrapolation of the cell averages. Equation (7) is exact for the volume averages; i.e., no approximations have been made during its derivation.
This method can also be applied to a 2D situation by considering the north and south faces along with the east and west faces around a node.
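The following sketch (not from the original article; the linear flux, grid, and time stepping are illustrative choices) applies equation (7) to the linear case f(ρ) = cρ on a periodic domain, reconstructing each edge flux by first-order upwinding and advancing the cell averages with forward Euler. Because each edge flux is shared by the two neighboring cells, total mass is conserved to rounding error.

```python
import numpy as np

# Linear advection, f(rho) = c * rho with c > 0, on a periodic domain.
c, N, L = 1.0, 100, 1.0
dx = L / N
dt = 0.5 * dx / c                        # CFL number 0.5
x = (np.arange(N) + 0.5) * dx            # cell centers
rho = np.exp(-100.0 * (x - 0.5)**2)      # initial cell averages (sampled)
mass0 = rho.sum() * dx

for _ in range(50):
    # First-order upwind reconstruction: for c > 0, f_{i+1/2} = c * rho_i.
    f_right = c * rho                    # flux at each cell's right edge
    f_left = np.roll(f_right, 1)         # flux at each cell's left edge
    rho -= dt / dx * (f_right - f_left)  # scheme (7), advanced with forward Euler

print("mass change:", rho.sum() * dx - mass0)   # ~1e-16: the scheme is conservative
```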
== General conservation law ==
We can also consider the general conservation law problem, represented by the following PDE:
{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+\nabla \cdot \mathbf {f} \left(\mathbf {u} \right)=\mathbf {0} .\qquad (8)}
Here, u represents a vector of states and f represents the corresponding flux tensor. Again we can sub-divide the spatial domain into finite volumes or cells. For a particular cell, i, we take the volume integral over the total volume of the cell, v_i, which gives
{\displaystyle \int _{v_{i}}{\frac {\partial \mathbf {u} }{\partial t}}\,dv+\int _{v_{i}}\nabla \cdot \mathbf {f} \left(\mathbf {u} \right)\,dv=\mathbf {0} .\qquad (9)}
On integrating the first term to get the volume average and applying the divergence theorem to the second, this yields
{\displaystyle v_{i}{\frac {d{\mathbf {u} }_{i}}{dt}}+\oint _{S_{i}}\mathbf {f} \left(\mathbf {u} \right)\cdot \mathbf {n} \,dS=\mathbf {0} ,\qquad (10)}
where S_i represents the total surface area of the cell and n is a unit vector normal to the surface and pointing outward. So, finally, we are able to present the general result equivalent to (8), i.e.
{\displaystyle {\frac {d{\mathbf {u} }_{i}}{dt}}+{\frac {1}{v_{i}}}\oint _{S_{i}}\mathbf {f} \left(\mathbf {u} \right)\cdot \mathbf {n} \,dS=\mathbf {0} .\qquad (11)}
Again, values for the edge fluxes can be reconstructed by interpolation or extrapolation of the cell averages. The actual numerical scheme will depend upon problem geometry and mesh construction. MUSCL reconstruction is often used in high resolution schemes where shocks or discontinuities are present in the solution.
Finite volume schemes are conservative because cell averages change only through the edge fluxes: one cell's loss is always another cell's gain.
== See also ==
Finite element method
Flux limiter
Godunov's scheme
Godunov's theorem
High-resolution scheme
KIVA (software)
MIT General Circulation Model
MUSCL scheme
Sergei K. Godunov
Total variation diminishing
Finite volume method for unsteady flow
== References ==
== Further reading ==
Eymard, R., Gallouët, T. R., Herbin, R. (2000). The finite volume method. Handbook of Numerical Analysis, Vol. VII, 2000, p. 713–1020. Editors: P.G. Ciarlet and J.L. Lions.
Hirsch, C. (1990), Numerical Computation of Internal and External Flows, Volume 2: Computational Methods for Inviscid and Viscous Flows, Wiley.
Laney, Culbert B. (1998), Computational Gas Dynamics, Cambridge University Press.
LeVeque, Randall (1990), Numerical Methods for Conservation Laws, ETH Lectures in Mathematics Series, Birkhauser-Verlag.
LeVeque, Randall (2002), Finite Volume Methods for Hyperbolic Problems, Cambridge University Press.
Patankar, Suhas V. (1980), Numerical Heat Transfer and Fluid Flow, Hemisphere.
Tannehill, John C., et al., (1997), Computational Fluid mechanics and Heat Transfer, 2nd Ed., Taylor and Francis.
Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag.
Wesseling, Pieter (2001), Principles of Computational Fluid Dynamics, Springer-Verlag.
== External links ==
Finite volume methods by R. Eymard, T Gallouët and R. Herbin, update of the article published in Handbook of Numerical Analysis, 2000
Rübenkönig, Oliver. "The Finite Volume Method (FVM) – An introduction". Archived from the original on 2009-10-02; available under the GFDL.
FiPy: A Finite Volume PDE Solver Using Python from NIST.
CLAWPACK: a software package designed to compute numerical solutions to hyperbolic partial differential equations using a wave propagation approach | Wikipedia/Finite_volume_method |
In mathematics, a recurrence relation is an equation according to which the nth term of a sequence of numbers is equal to some combination of the previous terms. Often, only k previous terms of the sequence appear in the equation, for a parameter k that is independent of n; this number k is called the order of the relation. If the values of the first k numbers in the sequence have been given, the rest of the sequence can be calculated by repeatedly applying the equation.
In linear recurrences, the nth term is equated to a linear function of the k previous terms. A famous example is the recurrence for the Fibonacci numbers,
{\displaystyle F_{n}=F_{n-1}+F_{n-2},}
where the order k is two and the linear function merely adds the two previous terms. This example is a linear recurrence with constant coefficients, because the coefficients of the linear function (1 and 1) are constants that do not depend on n. For these recurrences, one can express the general term of the sequence as a closed-form expression of n. Linear recurrences with polynomial coefficients depending on n are also important, because many common elementary functions and special functions have a Taylor series whose coefficients satisfy such a recurrence relation (see holonomic function).
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
The concept of a recurrence relation can be extended to multidimensional arrays, that is, indexed families that are indexed by tuples of natural numbers.
== Definition ==
A recurrence relation is an equation that expresses each element of a sequence as a function of the preceding ones. More precisely, in the case where only the immediately preceding element is involved, a recurrence relation has the form
{\displaystyle u_{n}=\varphi (n,u_{n-1})\quad {\text{for}}\quad n>0,}
where {\displaystyle \varphi :\mathbb {N} \times X\to X} is a function, X being a set to which the elements of the sequence must belong. For any u_0 ∈ X, this defines a unique sequence with u_0 as its first element, called the initial value. The definition is easily modified to obtain sequences starting from the term of index 1 or higher.
This defines a recurrence relation of first order. A recurrence relation of order k has the form
{\displaystyle u_{n}=\varphi (n,u_{n-1},u_{n-2},\ldots ,u_{n-k})\quad {\text{for}}\quad n\geq k,}
where {\displaystyle \varphi :\mathbb {N} \times X^{k}\to X} is a function that involves k consecutive elements of the sequence.
In this case, k initial values are needed for defining a sequence.
== Examples ==
=== Factorial ===
The factorial is defined by the recurrence relation
{\displaystyle n!=n\cdot (n-1)!\quad {\text{for}}\quad n>0,}
and the initial condition
{\displaystyle 0!=1.}
This is an example of a linear recurrence with polynomial coefficients of order 1, with the simple polynomial (in n) n as its only coefficient.
=== Logistic map ===
An example of a recurrence relation is the logistic map defined by
{\displaystyle x_{n+1}=rx_{n}(1-x_{n}),}
for a given constant r. The behavior of the sequence depends dramatically on r, but is stable when the initial condition x_0 varies.
=== Fibonacci numbers ===
The recurrence of order two satisfied by the Fibonacci numbers is the canonical example of a homogeneous linear recurrence relation with constant coefficients (see below). The Fibonacci sequence is defined using the recurrence
{\displaystyle F_{n}=F_{n-1}+F_{n-2}}
with initial conditions
{\displaystyle F_{0}=0,\quad F_{1}=1.}
Explicitly, the recurrence yields the equations
{\displaystyle F_{2}=F_{1}+F_{0},\quad F_{3}=F_{2}+F_{1},\quad F_{4}=F_{3}+F_{2},}
etc.
We obtain the sequence of Fibonacci numbers, which begins
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
The recurrence can be solved by methods described below, yielding Binet's formula, which involves powers of the two roots of the characteristic polynomial t² = t + 1; the generating function of the sequence is the rational function
{\displaystyle {\frac {t}{1-t-t^{2}}}.}
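As a quick computational check (an illustrative sketch, not part of the article), Binet's formula can be compared against direct iteration of the recurrence:

```python
from math import sqrt

def fib_binet(n):
    """Binet's formula: F_n = (phi^n - psi^n) / sqrt(5), where phi and psi
    are the two roots of the characteristic polynomial t^2 = t + 1."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi**n - psi**n) / sqrt(5))

# Compare with the recurrence F_n = F_{n-1} + F_{n-2}, F_0 = 0, F_1 = 1.
a, b = 0, 1
for n in range(10):
    assert fib_binet(n) == a
    a, b = b, a + b
print("Binet's formula matches the recurrence for n = 0..9")
```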
=== Binomial coefficients ===
A simple example of a multidimensional recurrence relation is given by the binomial coefficients {\displaystyle {\tbinom {n}{k}}}, which count the ways of selecting k elements out of a set of n elements.
They can be computed by the recurrence relation
{\displaystyle {\binom {n}{k}}={\binom {n-1}{k-1}}+{\binom {n-1}{k}},}
with the base cases {\displaystyle {\tbinom {n}{0}}={\tbinom {n}{n}}=1}. Using this formula to compute the values of all binomial coefficients generates an infinite array called Pascal's triangle. The same values can also be computed directly by a different formula that is not a recurrence but uses factorials, multiplication, and division, not just additions:
{\displaystyle {\binom {n}{k}}={\frac {n!}{k!(n-k)!}}.}
The binomial coefficients can also be computed with a uni-dimensional recurrence:
{\displaystyle {\binom {n}{k}}={\binom {n}{k-1}}(n-k+1)/k,}
with the initial value {\textstyle {\binom {n}{0}}=1} (the division is not displayed as a fraction to emphasize that it must be computed after the multiplication, so as not to introduce fractional numbers).
This recurrence is widely used in computers because it does not require building a table, as the bi-dimensional recurrence does, and does not involve very large integers, as the formula with factorials does (if one uses {\textstyle {\binom {n}{k}}={\binom {n}{n-k}},} all involved integers are smaller than the final result).
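A short sketch of this uni-dimensional recurrence in code (illustrative and not from the article; the function name is arbitrary), with the division performed after the multiplication so that every intermediate value stays an integer, and using the symmetry C(n, k) = C(n, n-k) to keep the intermediate products small:

```python
def binomial(n, k):
    """Compute C(n, k) via the recurrence C(n, k) = C(n, k-1) * (n - k + 1) / k."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)              # C(n, k) = C(n, n-k): smaller intermediates
    c = 1                          # C(n, 0) = 1
    for j in range(1, k + 1):
        # c holds C(n, j-1), so c * (n - j + 1) = C(n, j) * j: the integer
        # division below is exact at every step.
        c = c * (n - j + 1) // j
    return c

print(binomial(10, 3))   # 120
print(binomial(52, 5))   # 2598960
```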
== Difference operator and difference equations ==
The difference operator is an operator that maps sequences to sequences, and, more generally, functions to functions. It is commonly denoted Δ and is defined, in functional notation, as
{\displaystyle (\Delta f)(x)=f(x+1)-f(x).}
It is thus a special case of finite difference.
When using the index notation for sequences, the definition becomes
{\displaystyle (\Delta a)_{n}=a_{n+1}-a_{n}.}
The parentheses around Δf and Δa are generally omitted, and Δa_n must be understood as the term of index n in the sequence Δa, and not Δ applied to the element a_n.
Given a sequence {\displaystyle a=(a_{n})_{n\in \mathbb {N} },} the first difference of a is Δa.
The second difference is {\displaystyle \Delta ^{2}a=(\Delta \circ \Delta )a=\Delta (\Delta a).}
A simple computation shows that
{\displaystyle \Delta ^{2}a_{n}=a_{n+2}-2a_{n+1}+a_{n}.}
More generally, the kth difference is defined recursively as {\displaystyle \Delta ^{k}=\Delta \circ \Delta ^{k-1},}
and one has
{\displaystyle \Delta ^{k}a_{n}=\sum _{t=0}^{k}(-1)^{t}{\binom {k}{t}}a_{n+k-t}.}
This relation can be inverted, giving
{\displaystyle a_{n+k}=a_{n}+{k \choose 1}\Delta a_{n}+\cdots +{k \choose k}\Delta ^{k}(a_{n}).}
A difference equation of order k is an equation that involves the k first differences of a sequence or a function, in the same way as a differential equation of order k relates the k first derivatives of a function.
The two above relations allow transforming a recurrence relation of order k into a difference equation of order k and, conversely, a difference equation of order k into a recurrence relation of order k. Each transformation is the inverse of the other, and the sequences that are solutions of the difference equation are exactly those that satisfy the recurrence relation.
For example, the difference equation
{\displaystyle 3\Delta ^{2}a_{n}+2\Delta a_{n}+7a_{n}=0}
is equivalent to the recurrence relation
{\displaystyle 3a_{n+2}=4a_{n+1}-8a_{n},}
in the sense that the two equations are satisfied by the same sequences.
As it is equivalent for a sequence to satisfy a recurrence relation or to be the solution of a difference equation, the two terms "recurrence relation" and "difference equation" are sometimes used interchangeably. See Rational difference equation and Matrix difference equation for examples of the use of "difference equation" instead of "recurrence relation".
Difference equations resemble differential equations, and this resemblance is often used to mimic methods for solving differential equations in order to solve difference equations, and therefore recurrence relations.
Summation equations relate to difference equations as integral equations relate to differential equations. See time scale calculus for a unification of the theory of difference equations with that of differential equations.
=== From sequences to grids ===
Single-variable or one-dimensional recurrence relations are about sequences (i.e. functions defined on one-dimensional grids). Multi-variable or n-dimensional recurrence relations are about n-dimensional grids. Functions defined on n-grids can also be studied with partial difference equations.
== Solving ==
=== Solving linear recurrence relations with constant coefficients ===
=== Solving first-order non-homogeneous recurrence relations with variable coefficients ===
Moreover, for the general first-order non-homogeneous linear recurrence relation with variable coefficients:
{\displaystyle a_{n+1}=f_{n}a_{n}+g_{n},\qquad f_{n}\neq 0,}
there is also a systematic method to solve it:
{\displaystyle a_{n+1}-f_{n}a_{n}=g_{n}}
{\displaystyle {\frac {a_{n+1}}{\prod _{k=0}^{n}f_{k}}}-{\frac {f_{n}a_{n}}{\prod _{k=0}^{n}f_{k}}}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}}
{\displaystyle {\frac {a_{n+1}}{\prod _{k=0}^{n}f_{k}}}-{\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}}
Let
{\displaystyle A_{n}={\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}.}
Then
{\displaystyle A_{n+1}-A_{n}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}}
{\displaystyle \sum _{m=0}^{n-1}(A_{m+1}-A_{m})=A_{n}-A_{0}=\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}}
{\displaystyle {\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}=A_{0}+\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}}
{\displaystyle a_{n}=\left(\prod _{k=0}^{n-1}f_{k}\right)\left(A_{0}+\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}\right)}
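The closed form can be checked numerically; the sketch below (illustrative, not from the article; the particular f_n, g_n, and a_0 are arbitrary examples) evaluates the final formula and compares it with direct iteration of a_{n+1} = f_n a_n + g_n. Note that A_0 = a_0, since the empty product equals 1.

```python
import math

def f(n): return n + 2      # example f_n (nonzero for all n >= 0)
def g(n): return 1.0        # example g_n
a0 = 1.0                    # a_0; A_0 = a_0 because the empty product is 1

def a_closed(n):
    """a_n = (prod_{k<n} f_k) * (A_0 + sum_{m<n} g_m / prod_{k<=m} f_k)."""
    prod = math.prod(f(k) for k in range(n))
    s = sum(g(m) / math.prod(f(k) for k in range(m + 1)) for m in range(n))
    return prod * (a0 + s)

a = a0                      # direct iteration for comparison
for n in range(8):
    assert math.isclose(a, a_closed(n))
    a = f(n) * a + g(n)
print("closed form matches direct iteration for n = 0..7")
```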
If we apply the formula to {\displaystyle a_{n+1}=(1+hf_{nh})a_{n}+hg_{nh}} and take the limit h → 0, we get the formula for first-order linear differential equations with variable coefficients; the sum becomes an integral, and the product becomes the exponential function of an integral.
=== Solving general homogeneous linear recurrence relations ===
Many homogeneous linear recurrence relations may be solved by means of the generalized hypergeometric series. Special cases of these lead to recurrence relations for the orthogonal polynomials, and many special functions. For example, the solution to
{\displaystyle J_{n+1}={\frac {2n}{z}}J_{n}-J_{n-1}}
is given by
{\displaystyle J_{n}=J_{n}(z),}
the Bessel function, while
{\displaystyle (b-n)M_{n-1}+(2n-b+z)M_{n}-nM_{n+1}=0}
is solved by
{\displaystyle M_{n}=M(n,b;z)}
the confluent hypergeometric series. Sequences which are the solutions of linear difference equations with polynomial coefficients are called P-recursive. For these specific recurrence equations algorithms are known which find polynomial, rational or hypergeometric solutions.
=== Solving general non-homogeneous linear recurrence relations with constant coefficients ===
Furthermore, the general non-homogeneous linear recurrence relation with constant coefficients can be solved by variation of parameters.
=== Solving first-order rational difference equations ===
A first-order rational difference equation has the form {\displaystyle w_{t+1}={\tfrac {aw_{t}+b}{cw_{t}+d}}}. Such an equation can be solved by writing w_t as a nonlinear transformation of another variable x_t which itself evolves linearly. Then standard methods can be used to solve the linear difference equation in x_t.
== Stability ==
=== Stability of linear higher-order recurrences ===
The linear recurrence of order d,
{\displaystyle a_{n}=c_{1}a_{n-1}+c_{2}a_{n-2}+\cdots +c_{d}a_{n-d},}
has the characteristic equation
{\displaystyle \lambda ^{d}-c_{1}\lambda ^{d-1}-c_{2}\lambda ^{d-2}-\cdots -c_{d}\lambda ^{0}=0.}
The recurrence is stable, meaning that the iterates converge asymptotically to a fixed value, if and only if the eigenvalues (i.e., the roots of the characteristic equation), whether real or complex, are all less than unity in absolute value.
=== Stability of linear first-order matrix recurrences ===
In the first-order matrix difference equation
{\displaystyle [x_{t}-x^{*}]=A[x_{t-1}-x^{*}]}
with state vector x and transition matrix A, x converges asymptotically to the steady-state vector x* if and only if all eigenvalues of the transition matrix A (whether real or complex) have an absolute value less than 1.
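A minimal numerical check of this criterion (an illustrative sketch; the matrix and vectors are arbitrary examples):

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.6]])          # example transition matrix
x_star = np.array([1.0, 2.0])       # example steady-state vector

# Stability criterion: spectral radius of A strictly less than 1.
rho = max(abs(np.linalg.eigvals(A)))
print("spectral radius:", rho, "-> stable" if rho < 1 else "-> unstable")

# Iterating the recurrence confirms convergence to x* when rho < 1.
x = np.array([10.0, -3.0])
for _ in range(200):
    x = x_star + A @ (x - x_star)
print("x after 200 steps:", x)      # close to x_star
```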
=== Stability of nonlinear first-order recurrences ===
Consider the nonlinear first-order recurrence
{\displaystyle x_{n}=f(x_{n-1}).}
This recurrence is locally stable, meaning that it converges to a fixed point x* from points sufficiently close to x*, if the slope of f in the neighborhood of x* is smaller than unity in absolute value: that is,
{\displaystyle |f'(x^{*})|<1.}
A nonlinear recurrence could have multiple fixed points, in which case some fixed points may be locally stable and others locally unstable; for continuous f two adjacent fixed points cannot both be locally stable.
A nonlinear recurrence relation could also have a cycle of period k for k > 1. Such a cycle is stable, meaning that it attracts a set of initial conditions of positive measure, if the composite function
{\displaystyle g(x):=f\circ f\circ \cdots \circ f(x),}
with f appearing k times, is locally stable according to the same criterion:
{\displaystyle |g'(x^{*})|<1,}
where x* is any point on the cycle.
In a chaotic recurrence relation, the variable x stays in a bounded region but never converges to a fixed point or an attracting cycle; any fixed points or cycles of the equation are unstable. See also logistic map, dyadic transformation, and tent map.
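For a concrete check (an illustrative sketch, not from the article), take f to be the logistic map f(x) = rx(1 - x). Its nonzero fixed point is x* = 1 - 1/r, and f'(x*) = 2 - r, so the criterion |f'(x*)| < 1 predicts local stability for 1 < r < 3:

```python
def f(x, r):
    return r * x * (1 - x)

for r in (2.5, 3.5):
    x_star = 1 - 1 / r            # nonzero fixed point of the logistic map
    slope = r * (1 - 2 * x_star)  # f'(x) = r(1 - 2x); equals 2 - r at x*
    x = 0.3
    for _ in range(1000):
        x = f(x, r)
    print(f"r={r}: |f'(x*)| = {abs(slope):.2f}, x_1000 = {x:.4f}, x* = {x_star:.4f}")
# For r = 2.5 the iterates settle on x*; for r = 3.5 they do not (a period-4
# cycle appears instead), consistent with |f'(x*)| > 1.
```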
== Relationship to differential equations ==
When solving an ordinary differential equation numerically, one typically encounters a recurrence relation. For example, when solving the initial value problem
{\displaystyle y'(t)=f(t,y(t)),\ \ y(t_{0})=y_{0},}
with Euler's method and a step size h, one calculates the values
{\displaystyle y_{0}=y(t_{0}),\ \ y_{1}=y(t_{0}+h),\ \ y_{2}=y(t_{0}+2h),\ \dots }
by the recurrence
{\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}),\quad t_{n}=t_{0}+nh.}
Systems of linear first order differential equations can be discretized exactly analytically using the methods shown in the discretization article.
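A minimal sketch of the Euler recurrence (illustrative; the test problem y' = -y, y(0) = 1 is an arbitrary choice) shows the expected first-order decrease of the error with the step size:

```python
import math

def euler(f, t0, y0, h, steps):
    """Iterate y_{n+1} = y_n + h f(t_n, y_n), with t_n = t_0 + n h."""
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# Test problem: y' = -y, y(0) = 1, exact solution e^{-t}.
for h in (0.1, 0.01, 0.001):
    approx = euler(lambda t, y: -y, 0.0, 1.0, h, int(round(1.0 / h)))
    print(f"h={h}: error at t=1 is {abs(approx - math.exp(-1)):.2e}")
# The error shrinks roughly linearly with h, as expected for Euler's method.
```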
== Applications ==
=== Mathematical biology ===
Some of the best-known difference equations have their origins in the attempt to model population dynamics. For example, the Fibonacci numbers were once used as a model for the growth of a rabbit population.
The logistic map is used either directly to model population growth, or as a starting point for more detailed models of population dynamics. In this context, coupled difference equations are often used to model the interaction of two or more populations. For example, the Nicholson–Bailey model for a host-parasite interaction is given by
{\displaystyle N_{t+1}=\lambda N_{t}e^{-aP_{t}},}
{\displaystyle P_{t+1}=N_{t}(1-e^{-aP_{t}}),}
with N_t representing the hosts and P_t the parasites at time t.
Integrodifference equations are a form of recurrence relation important to spatial ecology. These and other difference equations are particularly suited to modeling univoltine populations.
=== Computer science ===
Recurrence relations are also of fundamental importance in analysis of algorithms. If an algorithm is designed so that it will break a problem into smaller subproblems (divide and conquer), its running time is described by a recurrence relation.
A simple example is the time an algorithm takes to find an element in an ordered vector with n elements, in the worst case. A naive algorithm will search from left to right, one element at a time. The worst possible scenario is when the required element is the last, so the number of comparisons is n.
A better algorithm is called binary search. However, it requires a sorted vector. It will first check if the element is at the middle of the vector. If not, then it will check if the middle element is greater or less than the sought element. At this point, half of the vector can be discarded, and the algorithm can be run again on the other half. The number of comparisons will be given by
{\displaystyle c_{1}=1,\quad c_{n}=1+c_{n/2},}
the time complexity of which is {\displaystyle O(\log _{2}(n))}.
=== Digital signal processing ===
In digital signal processing, recurrence relations can model feedback in a system, where outputs at one time become inputs for future time. They thus arise in infinite impulse response (IIR) digital filters.
For example, the equation for a feedback IIR comb filter of delay T is:
{\displaystyle y_{t}=(1-\alpha )x_{t}+\alpha y_{t-T},}
where x_t is the input at time t, y_t is the output at time t, and α controls how much of the delayed signal is fed back into the output. From this we can see that
{\displaystyle y_{t}=(1-\alpha )x_{t}+\alpha ((1-\alpha )x_{t-T}+\alpha y_{t-2T})}
{\displaystyle y_{t}=(1-\alpha )x_{t}+(\alpha -\alpha ^{2})x_{t-T}+\alpha ^{2}y_{t-2T}}
etc.
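A direct implementation of this recurrence (an illustrative sketch; the delay, the coefficient α, and the impulse input are arbitrary choices) makes the infinite impulse response visible:

```python
import numpy as np

def comb_filter(x, T, alpha):
    """Apply y_t = (1 - alpha) x_t + alpha y_{t-T}, with zero initial state."""
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        delayed = y[t - T] if t >= T else 0.0   # y_{t-T}, zero before start
        y[t] = (1 - alpha) * x[t] + alpha * delayed
    return y

# An impulse input reveals the infinite impulse response: echoes at t = 0, T, 2T, ...
x = np.zeros(16)
x[0] = 1.0
print(comb_filter(x, T=4, alpha=0.5))
# -> 0.5 at t=0, 0.25 at t=4, 0.125 at t=8, ...: (1 - alpha) * alpha^k at t = kT,
#    matching the expansion of the recurrence shown above.
```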
=== Economics ===
Recurrence relations, especially linear recurrence relations, are used extensively in both theoretical and empirical economics. In particular, in macroeconomics one might develop a model of various broad sectors of the economy (the financial sector, the goods sector, the labor market, etc.) in which some agents' actions depend on lagged variables. The model would then be solved for current values of key variables (interest rate, real GDP, etc.) in terms of past and current values of other variables.
== See also ==
== References ==
=== Footnotes ===
=== Bibliography ===
Batchelder, Paul M. (1967). An introduction to linear difference equations. Dover Publications.
Miller, Kenneth S. (1968). Linear difference equations. W. A. Benjamin.
Fillmore, Jay P.; Marx, Morris L. (1968). "Linear recursive sequences". SIAM Rev. Vol. 10, no. 3. pp. 324–353. JSTOR 2027658.
Brousseau, Alfred (1971). Linear Recursion and Fibonacci Sequences. Fibonacci Association.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 1990. ISBN 0-262-03293-7. Chapter 4: Recurrences, pp. 62–90.
Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete Mathematics: A Foundation for Computer Science (2 ed.). Addison-Wesley. ISBN 0-201-55802-5.
Enders, Walter (2010). Applied Econometric Times Series (3 ed.). Archived from the original on 2014-11-10.
Cull, Paul; Flahive, Mary; Robson, Robbie (2005). Difference Equations: From Rabbits to Chaos. Springer. ISBN 0-387-23234-6. chapter 7.
Jacques, Ian (2006). Mathematics for Economics and Business (Fifth ed.). Prentice Hall. pp. 551–568. ISBN 0-273-70195-9. Chapter 9.1: Difference Equations.
Minh, Tang; Van To, Tan (2006). "Using generating functions to solve linear inhomogeneous recurrence equations" (PDF). Proc. Int. Conf. Simulation, Modelling and Optimization, SMO'06. pp. 399–404. Archived from the original (PDF) on 2016-03-04. Retrieved 2014-08-07.
Polyanin, Andrei D. "Difference and Functional Equations: Exact Solutions". at EqWorld - The World of Mathematical Equations.
Polyanin, Andrei D. "Difference and Functional Equations: Methods". at EqWorld - The World of Mathematical Equations.
Wang, Xiang-Sheng; Wong, Roderick (2012). "Asymptotics of orthogonal polynomials via recurrence relations". Anal. Appl. 10 (2): 215–235. arXiv:1101.4371. doi:10.1142/S0219530512500108. S2CID 28828175.
== External links ==
"Recurrence relation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Recurrence Equation". MathWorld.
"OEIS Index Rec". OEIS index to a few thousand examples of linear recurrences, sorted by order (number of terms) and signature (vector of values of the constant coefficients) | Wikipedia/Difference_equations |
In mathematics, a system of differential equations is a finite set of differential equations. Such a system can be either linear or non-linear. Also, such a system can be either a system of ordinary differential equations or a system of partial differential equations.
== Linear systems of differential equations ==
A first-order linear system of ODEs is a system in which every equation is first order and depends on the unknown functions linearly. Here we consider systems with an equal number of unknown functions and equations. These may be written as
{\displaystyle {\frac {dx_{j}}{dt}}=a_{j1}(t)x_{1}+\ldots +a_{jn}(t)x_{n}+g_{j}(t),\qquad j=1,\ldots ,n,}
where n is a positive integer and a_{ji}(t), g_j(t) are arbitrary functions of the independent variable t.
A first-order linear system of ODEs may be written in matrix form:
{\displaystyle {\frac {d}{dt}}{\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{11}&\ldots &a_{1n}\\a_{21}&\ldots &a_{2n}\\\vdots &\ldots &\vdots \\a_{n1}&&a_{nn}\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}}+{\begin{bmatrix}g_{1}\\g_{2}\\\vdots \\g_{n}\end{bmatrix}},}
or simply
{\displaystyle \mathbf {\dot {x}} (t)=\mathbf {A} (t)\mathbf {x} (t)+\mathbf {g} (t).}
=== Homogeneous systems of differential equations ===
A linear system is said to be homogeneous if g_j(t) = 0 for each j and for all values of t; otherwise, it is referred to as non-homogeneous. Homogeneous systems have the property that if x_1, …, x_p are linearly independent solutions to the system, then any linear combination of these, C_1 x_1 + … + C_p x_p, is also a solution of the linear system, where C_1, …, C_p are constants.
The case where the coefficients a_{ji}(t) are all constant has a general solution:
{\displaystyle \mathbf {x} =C_{1}\mathbf {v_{1}} e^{\lambda _{1}t}+\ldots +C_{n}\mathbf {v_{n}} e^{\lambda _{n}t},}
where λ_i is an eigenvalue of the matrix A with corresponding eigenvector v_i for 1 ≤ i ≤ n. This general solution only applies in cases where A has n distinct eigenvalues; cases with fewer distinct eigenvalues must be treated differently.
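The sketch below (illustrative, not from the article; the matrix and initial condition are arbitrary examples) builds this general solution numerically: the constants C_i are found by solving V C = x(0), where the columns of V are the eigenvectors, and the result is checked against the differential equation by a finite difference.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # example constant-coefficient matrix
x0 = np.array([1.0, 0.0])           # initial condition x(0)

lam, V = np.linalg.eig(A)           # eigenvalues lam_i, eigenvectors V[:, i]
C = np.linalg.solve(V, x0)          # constants from x(0) = sum_i C_i v_i

def x(t):
    """x(t) = sum_i C_i v_i exp(lam_i t)."""
    return (V * np.exp(lam * t)) @ C

# Check against the ODE: a finite difference of x(t) should match A x(t).
t, dt = 0.7, 1e-6
lhs = (x(t + dt) - x(t)) / dt
print(np.allclose(lhs, A @ x(t), atol=1e-4))   # True
```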
== Linear independence of solutions ==
For an arbitrary system of ODEs, a set of solutions x_1(t), …, x_n(t) is said to be linearly independent if:
{\displaystyle C_{1}\mathbf {x_{1}} (t)+\ldots +C_{n}\mathbf {x_{n}} (t)=0\quad \forall t}
is satisfied only for C_1 = … = C_n = 0.
A second-order differential equation {\displaystyle {\ddot {x}}=f(t,x,{\dot {x}})} may be converted into a system of first-order differential equations by defining y = ẋ, which gives the first-order system:
{\displaystyle {\begin{cases}{\dot {x}}&=&y\\{\dot {y}}&=&f(t,x,y)\end{cases}}}
Just as with any linear system of two equations, two solutions may be called linearly independent if {\displaystyle C_{1}\mathbf {x} _{1}+C_{2}\mathbf {x} _{2}=\mathbf {0} } implies C_1 = C_2 = 0, or equivalently that
{\displaystyle {\begin{vmatrix}x_{1}&x_{2}\\{\dot {x}}_{1}&{\dot {x}}_{2}\end{vmatrix}}}
is non-zero. This notion is extended to second-order systems, and any two solutions to a second-order ODE are called linearly-independent if they are linearly-independent in this sense.
== Overdetermination of systems of differential equations ==
Like any system of equations, a system of linear differential equations is said to be overdetermined if there are more equations than unknowns. For an overdetermined system to have a solution, it needs to satisfy the compatibility conditions. For example, consider the system:
{\displaystyle {\frac {\partial u}{\partial x_{i}}}=f_{i},\quad 1\leq i\leq m.}
Then the necessary conditions for the system to have a solution are:
{\displaystyle {\frac {\partial f_{i}}{\partial x_{k}}}-{\frac {\partial f_{k}}{\partial x_{i}}}=0,\quad 1\leq i,k\leq m.}
See also: Cauchy problem and Ehrenpreis's fundamental principle.
== Nonlinear system of differential equations ==
Perhaps the most famous example of a nonlinear system of differential equations is the Navier–Stokes equations. Unlike the linear case, the existence of a solution of a nonlinear system is a difficult problem (cf. Navier–Stokes existence and smoothness.)
Other examples of nonlinear systems of differential equations include the Lotka–Volterra equations.
== Differential system ==
A differential system is a means of studying a system of partial differential equations using geometric ideas such as differential forms and vector fields.
For example, the compatibility conditions of an overdetermined system of differential equations can be succinctly stated in terms of differential forms (i.e., for a form to be exact, it needs to be closed). See integrability conditions for differential systems for more.
== See also ==
Integral geometry
Cartan–Kuranishi prolongation theorem
== Notes ==
== References ==
L. Ehrenpreis, The Universality of the Radon Transform, Oxford Univ. Press, 2003.
Gromov, M. (1986), Partial differential relations, Springer, ISBN 3-540-12177-3
M. Kuranishi, "Lectures on involutive systems of partial differential equations", Publ. Soc. Mat. São Paulo (1967)
Pierre Schapira, Microdifferential systems in the complex domain, Grundlehren der Mathematischen Wissenschaften, vol. 269, Springer-Verlag, 1985.
== Further reading ==
https://mathoverflow.net/questions/273235/a-very-basic-question-about-projections-in-formal-pde-theory
https://www.encyclopediaofmath.org/index.php/Involutional_system
https://www.encyclopediaofmath.org/index.php/Complete_system
https://www.encyclopediaofmath.org/index.php/Partial_differential_equations_on_a_manifold | Wikipedia/System_of_differential_equations |
In mathematics, an ordinary differential equation is called a Bernoulli differential equation if it is of the form
{\displaystyle y'+P(x)y=Q(x)y^{n},}
where n is a real number. Some authors allow any real n, whereas others require that n not be 0 or 1. The equation was first discussed in a work of 1695 by Jacob Bernoulli, after whom it is named. The earliest solution, however, was offered by Gottfried Leibniz, who published his result in the same year and whose method is the one still used today.
Bernoulli equations are special because they are nonlinear differential equations with known exact solutions. A notable special case of the Bernoulli equation is the logistic differential equation.
== Transformation to a linear differential equation ==
When n = 0, the differential equation is linear. When n = 1, it is separable. In these cases, standard techniques for solving equations of those forms can be applied. For n ≠ 0 and n ≠ 1, the substitution u = y^{1-n} reduces any Bernoulli equation to a linear differential equation
{\displaystyle {\frac {du}{dx}}-(n-1)P(x)u=-(n-1)Q(x).}
For example, in the case n = 2, making the substitution u = y^{-1} in the differential equation {\displaystyle {\frac {dy}{dx}}+{\frac {1}{x}}y=xy^{2}} produces the equation {\displaystyle {\frac {du}{dx}}-{\frac {1}{x}}u=-x}, which is a linear differential equation.
== Solution ==
Let x_0 ∈ (a,b) and
{\displaystyle {\begin{cases}z:(a,b)\rightarrow (0,\infty ),&{\text{if }}\alpha \in \mathbb {R} \smallsetminus \{1,2\},\\[4pt]z:(a,b)\rightarrow \mathbb {R} \smallsetminus \{0\},&{\text{if }}\alpha =2,\end{cases}}}
be a solution of the linear differential equation
{\displaystyle z'(x)=(1-\alpha )P(x)z(x)+(1-\alpha )Q(x).}
Then we have that
{\displaystyle y(x):=[z(x)]^{1/(1-\alpha )}}
is a solution of
{\displaystyle y'(x)=P(x)y(x)+Q(x)y^{\alpha }(x),\quad y(x_{0})=y_{0}:=[z(x_{0})]^{1/(1-\alpha )}.}
And for every such differential equation, for all α > 0 we have y ≡ 0 as a solution for y_0 = 0.
== Example ==
Consider the Bernoulli equation
{\displaystyle y'-{\frac {2y}{x}}=-x^{2}y^{2}}
(in this case, more specifically a Riccati equation).
The constant function y = 0 is a solution.
Division by y² yields
{\displaystyle y'y^{-2}-{\frac {2}{x}}y^{-1}=-x^{2}.}
Changing variables gives the equations
{\displaystyle {\begin{aligned}u={\frac {1}{y}}\;&,~u'={\frac {-y'}{y^{2}}}\\[5pt]-u'-{\frac {2}{x}}u&=-x^{2}\\[5pt]u'+{\frac {2}{x}}u&=x^{2}\end{aligned}}}
which can be solved using the integrating factor
{\displaystyle M(x)=e^{2\int {\frac {1}{x}}\,dx}=e^{2\ln x}=x^{2}.}
Multiplying by M(x),
{\displaystyle u'x^{2}+2xu=x^{4}.}
The left side can be recognized as the derivative of ux² by reversing the product rule. Integrating both sides with respect to x results in the equations
{\displaystyle {\begin{aligned}\int \left(ux^{2}\right)'dx&=\int x^{4}\,dx\\[5pt]ux^{2}&={\frac {1}{5}}x^{5}+C\\[5pt]{\frac {1}{y}}x^{2}&={\frac {1}{5}}x^{5}+C\end{aligned}}}
The solution for y is
{\displaystyle y={\frac {x^{2}}{{\frac {1}{5}}x^{5}+C}}.}
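The result can also be confirmed symbolically; the snippet below (an illustrative sketch relying on SymPy's dsolve) solves the same equation and returns a solution equivalent to the one above, up to the name of the integration constant.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The Bernoulli (here, more specifically Riccati) equation y' - 2y/x = -x^2 y^2.
eq = sp.Eq(y(x).diff(x) - 2*y(x)/x, -x**2 * y(x)**2)
sol = sp.dsolve(eq, y(x))
print(sol)   # equivalent to y = x**2 / (x**5/5 + C)
```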
== Notes ==
== References ==
Bernoulli, Jacob (1695), "Explicationes, Annotationes & Additiones ad ea, quae in Actis sup. de Curva Elastica, Isochrona Paracentrica, & Velaria, hinc inde memorata, & paratim controversa legundur; ubi de Linea mediarum directionum, alliisque novis", Acta Eruditorum. Cited in Hairer, Nørsett & Wanner (1993).
Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0.
== External links ==
Index of differential equations | Wikipedia/Bernoulli_differential_equation |
Natural science or empirical science is one of the branches of science concerned with the description, understanding and prediction of natural phenomena, based on empirical evidence from observation and experimentation. Mechanisms such as peer review and reproducibility of findings are used to try to ensure the validity of scientific advances.
Natural science can be divided into two main branches: life science and physical science. Life science is alternatively known as biology. Physical science is subdivided into branches: physics, astronomy, Earth science and chemistry. These branches of natural science may be further divided into more specialized branches (also known as fields). As empirical sciences, natural sciences use tools from the formal sciences, such as mathematics and logic, converting information about nature into measurements that can be explained as clear statements of the "laws of nature".
Modern natural science succeeded more classical approaches to natural philosophy. Galileo, Kepler, Descartes, Bacon, and Newton debated the benefits of using approaches which were more mathematical and more experimental in a methodical way. Still, philosophical perspectives, conjectures, and presuppositions, often overlooked, remain necessary in natural science. Systematic data collection, including discovery science, succeeded natural history, which emerged in the 16th century by describing and classifying plants, animals, minerals, and so on. Today, "natural history" suggests observational descriptions aimed at popular audiences.
== Criteria ==
Philosophers of science have suggested several criteria, including Karl Popper's controversial falsifiability criterion, to help them differentiate scientific endeavors from non-scientific ones. Validity, accuracy, and quality control, such as peer review and reproducibility of findings, are amongst the most respected criteria in today's global scientific community.
In natural science, impossibility assertions come to be widely accepted as overwhelmingly probable rather than considered proven to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. While an impossibility assertion in natural science can never be proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined.
== Branches of natural science ==
=== Biology ===
This field encompasses a diverse set of disciplines that examine phenomena related to living organisms. The scale of study can range from sub-component biophysics up to complex ecologies. Biology is concerned with the characteristics, classification and behaviors of organisms, as well as how species were formed and their interactions with each other and the environment.
The biological fields of botany, zoology, and medicine date back to early periods of civilization, while microbiology was introduced in the 17th century with the invention of the microscope. However, it was not until the 19th century that biology became a unified science. Once scientists discovered commonalities between all living things, it was decided they were best studied as a whole.
Some key developments in biology were the discovery of genetics, evolution through natural selection, the germ theory of disease, and the application of the techniques of chemistry and physics at the level of the cell or organic molecule.
Modern biology is divided into subdisciplines by the type of organism and by the scale being studied. Molecular biology is the study of the fundamental chemistry of life, while cellular biology is the examination of the cell; the basic building block of all life. At a higher level, anatomy and physiology look at the internal structures, and their functions, of an organism, while ecology looks at how various organisms interrelate.
=== Earth science ===
Earth science (also known as geoscience) is an all-embracing term for the sciences related to the planet Earth, including geology, geography, geophysics, geochemistry, climatology, glaciology, hydrology, meteorology, and oceanography.
Although mining and precious stones have been human interests throughout the history of civilization, the development of the related sciences of economic geology and mineralogy did not occur until the 18th century. The study of the earth, particularly paleontology, blossomed in the 19th century. The growth of other disciplines, such as geophysics, in the 20th century led to the development of the theory of plate tectonics in the 1960s, which has had a similar effect on the Earth sciences as the theory of evolution had on biology. Earth sciences today are closely linked to petroleum and mineral resources, climate research, and to environmental assessment and remediation.
==== Atmospheric sciences ====
Although sometimes considered in conjunction with the earth sciences, due to the independent development of its concepts, techniques, and practices and also the fact of it having a wide range of sub-disciplines under its wing, atmospheric science is also considered a separate branch of natural science. This field studies the characteristics of different layers of the atmosphere from ground level to the edge of space. The timescale of study also varies from days to centuries. Sometimes, the field also includes the study of climatic patterns on planets other than Earth.
==== Oceanography ====
The serious study of oceans began in the early- to mid-20th century. As a field of natural science, it is relatively young, but stand-alone programs offer specializations in the subject. Though some controversies remain as to the categorization of the field under earth sciences, interdisciplinary sciences, or as a separate field in its own right, most modern workers in the field agree that it has matured to a state that it has its own paradigms and practices.
==== Planetary science ====
Planetary science, or planetology, is the scientific study of planets, which include terrestrial planets like the Earth, and other types of planets, such as gas giants and ice giants. Planetary science also concerns other celestial bodies, such as dwarf planets, moons, asteroids, and comets. This largely includes the Solar System, but recently has started to expand to exoplanets, particularly terrestrial exoplanets. It explores various objects, spanning from micrometeoroids to gas giants, to establish their composition, movements, genesis, interrelation, and past. Planetary science is an interdisciplinary domain, having originated from astronomy and Earth science, and currently encompassing a multitude of areas, such as planetary geology, cosmochemistry, atmospheric science, physics, oceanography, hydrology, theoretical planetology, glaciology, and exoplanetology. Related fields encompass space physics, which delves into the impact of the Sun on the bodies in the Solar System, and astrobiology.
Planetary science comprises interconnected observational and theoretical branches. Observational research entails a combination of space exploration, primarily through robotic spacecraft missions utilizing remote sensing, and comparative experimental work conducted in Earth-based laboratories. The theoretical aspect involves extensive mathematical modelling and computer simulation.
Typically, planetary scientists are situated within astronomy and physics or Earth sciences departments in universities or research centers. However, there are also dedicated planetary science institutes worldwide. Generally, individuals pursuing a career in planetary science undergo graduate-level studies in one of the Earth sciences, astronomy, astrophysics, geophysics, or physics. They then focus their research within the discipline of planetary science. Major conferences are held annually, and numerous peer reviewed journals cater to the diverse research interests in planetary science. Some planetary scientists are employed by private research centers and frequently engage in collaborative research initiatives.
=== Chemistry ===
Constituting the scientific study of matter at the atomic and molecular scale, chemistry deals primarily with collections of atoms, such as gases, molecules, crystals, and metals. The composition, statistical properties, transformations, and reactions of these materials are studied. Chemistry also involves understanding the properties and interactions of individual atoms and molecules for use in larger-scale applications.
Most chemical processes can be studied directly in a laboratory, using a series of (often well-tested) techniques for manipulating materials, as well as an understanding of the underlying processes. Chemistry is often called "the central science" because of its role in connecting the other natural sciences.
Early experiments in chemistry had their roots in the system of alchemy, a set of beliefs combining mysticism with physical experiments. The science of chemistry began to develop with the work of Robert Boyle, known for his pioneering study of gases, and Antoine Lavoisier, who developed the theory of the conservation of mass.
The discovery of the chemical elements and atomic theory began to systematize this science, and researchers developed a fundamental understanding of states of matter, ions, chemical bonds and chemical reactions. The success of this science led to a complementary chemical industry that now plays a significant role in the world economy.
=== Physics ===
Physics embodies the study of the fundamental constituents of the universe, the forces and interactions they exert on one another, and the results produced by these interactions. Physics is generally regarded as foundational because all other natural sciences use and obey the field's principles and laws. Physics relies heavily on mathematics as the logical framework for formulating and quantifying principles.
The study of the principles of the universe has a long history and largely derives from direct observation and experimentation. The formulation of theories about the governing laws of the universe has been central to the study of physics from very early on, with philosophy gradually yielding to systematic, quantitative experimental testing and observation as the source of verification. Key historical developments in physics include Isaac Newton's theory of universal gravitation and classical mechanics, an understanding of electricity and its relation to magnetism, Einstein's theories of special and general relativity, the development of thermodynamics, and the quantum mechanical model of atomic and subatomic physics.
The field of physics is vast and can include such diverse studies as quantum mechanics and theoretical physics, applied physics and optics. Modern physics is becoming increasingly specialized, where researchers tend to focus on a particular area rather than being "universalists" like Isaac Newton, Albert Einstein, and Lev Landau, who worked in multiple areas.
=== Astronomy ===
Astronomy is a natural science that studies celestial objects and phenomena. Objects of interest include planets, moons, stars, nebulae, galaxies, and comets. Astronomy is the study of everything in the universe beyond Earth's atmosphere, including objects we can see with our naked eyes. It is one of the oldest sciences.
Astronomers of early civilizations performed methodical observations of the night sky, and astronomical artifacts have been found from much earlier periods. There are two types of astronomy: observational astronomy and theoretical astronomy. Observational astronomy is focused on acquiring and analyzing data, mainly using basic principles of physics. In contrast, theoretical astronomy is oriented towards developing computer or analytical models to describe astronomical objects and phenomena.
This discipline is the science of celestial objects and phenomena that originate outside the Earth's atmosphere. It is concerned with the evolution, physics, chemistry, meteorology, geology, and motion of celestial objects, as well as the formation and development of the universe.
Astronomy includes examining, studying, and modeling stars, planets, and comets. Most of the information used by astronomers is gathered by remote observation. However, some laboratory reproduction of celestial phenomena has been performed (such as the molecular chemistry of the interstellar medium). There is considerable overlap with physics and in some areas of earth science. There are also interdisciplinary fields such as astrophysics, planetary sciences, and cosmology, along with allied disciplines such as space physics and astrochemistry.
While the study of celestial features and phenomena can be traced back to antiquity, the scientific methodology of this field began to develop in the middle of the 17th century. A key factor was Galileo's introduction of the telescope to examine the night sky in more detail.
The mathematical treatment of astronomy began with Newton's development of celestial mechanics and the laws of gravitation, though it was triggered by earlier work of astronomers such as Kepler. By the 19th century, astronomy had developed into a formal science, with the introduction of instruments such as the spectroscope and photography, along with much-improved telescopes and the creation of professional observatories.
== Interdisciplinary studies ==
The distinctions between the natural science disciplines are not always sharp, and they share many cross-discipline fields. Physics plays a significant role in the other natural sciences, as represented by astrophysics, geophysics, chemical physics and biophysics. Likewise chemistry is represented by such fields as biochemistry, physical chemistry, geochemistry and astrochemistry.
A particular example of a scientific discipline that draws upon multiple natural sciences is environmental science. This field studies the interactions of physical, chemical, geological, and biological components of the environment, with particular regard to the effect of human activities and the impact on biodiversity and sustainability. This science also draws upon expertise from other fields, such as economics, law, and social sciences.
A comparable discipline is oceanography, as it draws upon a similar breadth of scientific disciplines. Oceanography is sub-categorized into more specialized cross-disciplines, such as physical oceanography and marine biology. As the marine ecosystem is vast and diverse, marine biology is further divided into many subfields, including specializations in particular species.
There is also a subset of cross-disciplinary fields with strong currents that run counter to specialization by the nature of the problems they address. Put another way: In some fields of integrative application, specialists in more than one field are a key part of most scientific discourse. Such integrative fields, for example, include nanoscience, astrobiology, and complex system informatics.
=== Materials science ===
Materials science is a relatively new, interdisciplinary field that deals with the study of matter and its properties and the discovery and design of new materials. Originally developed through the field of metallurgy, the study of the properties of materials and solids has now expanded into all materials. The field covers the chemistry, physics, and engineering applications of materials, including metals, ceramics, artificial polymers, and many others. The field's core deals with relating the structure of materials with their properties.
Materials science is at the forefront of research in science and engineering. It is an essential part of forensic engineering (the investigation of materials, products, structures, or components that fail or do not operate or function as intended, causing personal injury or damage to property) and failure analysis, the latter being the key to understanding, for example, the cause of various aviation accidents. Many of the most pressing scientific problems that are faced today are due to the limitations of the materials that are available, and, as a result, breakthroughs in this field are likely to have a significant impact on the future of technology.
The basis of materials science involves studying the structure of materials and relating it to their properties. By understanding this structure-property correlation, materials scientists can then go on to study the relative performance of a material in a particular application. The major determinants of the structure of a material and, thus, of its properties are its constituent chemical elements and how it has been processed into its final form. These characteristics, taken together and related through the laws of thermodynamics and kinetics, govern a material's microstructure and thus its properties.
== History ==
Some scholars trace the origins of natural science as far back as pre-literate human societies, where understanding the natural world was necessary for survival. People observed and built up knowledge about the behavior of animals and the usefulness of plants as food and medicine, which was passed down from generation to generation. These primitive understandings gave way to more formalized inquiry around 3500 to 3000 BC in the Mesopotamian and Ancient Egyptian cultures, which produced the first known written evidence of natural philosophy, the precursor of natural science. While the writings show an interest in astronomy, mathematics, and other aspects of the physical world, the ultimate aim of inquiry about nature's workings was, in all cases, religious or mythological, not scientific.
A tradition of scientific inquiry also emerged in Ancient China, where Taoist alchemists and philosophers experimented with elixirs to extend life and cure ailments. They focused on the yin and yang, or contrasting elements in nature; the yin was associated with femininity and coldness, while yang was associated with masculinity and warmth. The five phases – fire, earth, metal, wood, and water – described a cycle of transformations in nature. Water turned into wood, which turned into fire when it burned. The ashes left by fire were earth. Using these principles, Chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the West.
Little evidence survives of how Ancient Indian cultures around the Indus River understood nature, but some of their perspectives may be reflected in the Vedas, a set of sacred Hindu texts. They reveal a conception of the universe as ever-expanding and constantly being recycled and reformed. Surgeons in the Ayurvedic tradition saw health and illness as a combination of three humors: wind, bile and phlegm. A healthy life resulted from a balance among these humors. In Ayurvedic thought, the body consisted of five elements: earth, water, fire, wind, and space. Ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy.
Pre-Socratic philosophers in Ancient Greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 BC. However, an element of magic and mythology remained. Natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. Thales of Miletus, an early philosopher who lived from 625 to 546 BC, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. In the 5th century BC, Leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. Pythagoras applied Greek innovations in mathematics to astronomy and suggested that the earth was spherical.
=== Aristotelian natural philosophy (400 BC–1100 AD) ===
Later Socratic and Platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world; Plato criticized pre-Socratic thinkers as materialists and anti-religionists. Aristotle, however, a student of Plato who lived from 384 to 322 BC, paid closer attention to the natural world in his philosophy. In his History of Animals, he described the inner workings of 110 species, including the stingray, catfish and bee. He investigated chick embryos by breaking open eggs and observing them at various stages of development. Aristotle's works were influential through the 16th century, and he is considered to be the father of biology for his pioneering work in that science. He also presented philosophies about physics, nature, and astronomy using inductive reasoning in his works Physics and Meteorology.
While Aristotle considered natural philosophy more seriously than his predecessors, he approached it as a theoretical branch of science. Still, inspired by his work, Ancient Roman philosophers of the early 1st century AD, including Lucretius, Seneca and Pliny the Elder, wrote treatises that dealt with the rules of the natural world in varying degrees of depth. Many Ancient Roman Neoplatonists of the 3rd to the 6th centuries also adapted Aristotle's teachings on the physical world to a philosophy that emphasized spiritualism. Early medieval philosophers including Macrobius, Calcidius and Martianus Capella also examined the physical world, largely from a cosmological and cosmographical perspective, putting forth theories on the arrangement of celestial bodies and the heavens, which were posited as being composed of aether.
Aristotle's works on natural philosophy continued to be translated and studied amid the rise of the Byzantine Empire and Abbasid Caliphate.
In the Byzantine Empire, John Philoponus, an Alexandrian Aristotelian commentator and Christian theologian, was the first to question Aristotle's teaching of physics. Unlike Aristotle, who based his physics on verbal argument, Philoponus instead relied on observation. He introduced the theory of impetus. John Philoponus' criticism of Aristotelian principles of physics served as an inspiration for Galileo Galilei during the Scientific Revolution.
A revival in mathematics and science took place during the time of the Abbasid Caliphate from the 9th century onward, when Muslim scholars expanded upon Greek and Indian natural philosophy. The words alcohol, algebra and zenith all have Arabic roots.
=== Medieval natural philosophy (1100–1600) ===
Aristotle's works and other Greek natural philosophy did not reach the West until about the middle of the 12th century, when works were translated from Greek and Arabic into Latin. The development of European civilization later in the Middle Ages brought with it further advances in natural philosophy. European inventions such as the horseshoe, horse collar and crop rotation allowed for rapid population growth, eventually giving way to urbanization and the foundation of schools connected to monasteries and cathedrals in modern-day France and England. Aided by the schools, an approach to Christian theology developed that sought to answer questions about nature and other subjects using logic. This approach, however, was seen by some detractors as heresy.
By the 12th century, Western European scholars and philosophers came into contact with a body of knowledge of which they had previously been ignorant: a large corpus of works in Greek and Arabic that were preserved by Islamic scholars. Through translation into Latin, Western Europe was introduced to Aristotle and his natural philosophy. These works were taught at new universities in Paris and Oxford by the early 13th century, although the practice was frowned upon by the Catholic church. A 1210 decree from the Synod of Paris ordered that "no lectures are to be held in Paris either publicly or privately using Aristotle's books on natural philosophy or the commentaries, and we forbid all this under pain of excommunication."
In the late Middle Ages, Spanish philosopher Dominicus Gundissalinus translated a treatise by the earlier Persian scholar Al-Farabi called On the Sciences into Latin, calling the study of the mechanics of nature Scientia naturalis, or natural science. Gundissalinus also proposed his classification of the natural sciences in his 1150 work On the Division of Philosophy. This was the first detailed classification of the sciences based on Greek and Arab philosophy to reach Western Europe. Gundissalinus defined natural science as "the science considering only things unabstracted and with motion," as opposed to mathematics and sciences that rely on mathematics. Following Al-Farabi, he separated the sciences into eight parts, including: physics, cosmology, meteorology, minerals science, and plant and animal science.
Later, philosophers made their own classifications of the natural sciences. Robert Kilwardby wrote On the Order of the Sciences in the 13th century that classed medicine as a mechanical science, along with agriculture, hunting, and theater, while defining natural science as the science that deals with bodies in motion. Roger Bacon, an English friar and philosopher, wrote that natural science dealt with "a principle of motion and rest, as in the parts of the elements of fire, air, earth, and water, and in all inanimate things made from them." These sciences also covered plants, animals and celestial bodies.
Later in the 13th century, a Catholic priest and theologian Thomas Aquinas defined natural science as dealing with "mobile beings" and "things which depend on a matter not only for their existence but also for their definition." There was broad agreement among scholars in medieval times that natural science was about bodies in motion. However, there was division about including fields such as medicine, music, and perspective. Philosophers pondered questions including the existence of a vacuum, whether motion could produce heat, the colors of rainbows, the motion of the earth, whether elemental chemicals exist, and where in the atmosphere rain is formed.
In the centuries up through the end of the Middle Ages, natural science was often mingled with philosophies about magic and the occult. Natural philosophy appeared in various forms, from treatises to encyclopedias to commentaries on Aristotle. The interaction between natural philosophy and Christianity was complex during this period; some early theologians, including Tatian and Eusebius, considered natural philosophy an outcropping of pagan Greek science and were suspicious of it. Although some later Christian philosophers, including Aquinas, came to see natural science as a means of interpreting scripture, this suspicion persisted until the 12th and 13th centuries. The Condemnation of 1277, which forbade setting philosophy on a level equal with theology and the debate of religious constructs in a scientific context, showed the persistence with which Catholic leaders resisted the development of natural philosophy even from a theological perspective. Aquinas and Albertus Magnus, another Catholic theologian of the era, sought to distance theology from science in their works. "I don't see what one's interpretation of Aristotle has to do with the teaching of the faith," Albertus wrote in 1271.
=== Newton and the scientific revolution (1600–1800) ===
By the 16th and 17th centuries, natural philosophy evolved beyond commentary on Aristotle as more early Greek philosophy was uncovered and translated. The invention of the printing press in the 15th century, the invention of the microscope and telescope, and the Protestant Reformation fundamentally altered the social context in which scientific inquiry evolved in the West. Christopher Columbus's discovery of a new world changed perceptions about the physical makeup of the world, while observations by Copernicus, Tycho Brahe and Galileo brought a more accurate picture of the solar system as heliocentric and proved many of Aristotle's theories about the heavenly bodies false. Several 17th-century philosophers, including René Descartes, Pierre Gassendi, Marin Mersenne, Nicolas Malebranche, Thomas Hobbes, John Locke and Francis Bacon, made a break from the past by rejecting Aristotle and his medieval followers outright, calling their approach to natural philosophy superficial.
The titles of Galileo's work Two New Sciences and Johannes Kepler's New Astronomy underscored the atmosphere of change that took hold in the 17th century as Aristotle was dismissed in favor of novel methods of inquiry into the natural world. Bacon was instrumental in popularizing this change; he argued that people should use the arts and sciences to gain dominion over nature. To achieve this, he wrote that "human life [must] be endowed with discoveries and powers." He defined natural philosophy as "the knowledge of Causes and secret motions of things; and enlarging the bounds of Human Empire, to the effecting of all things possible." Bacon proposed that scientific inquiry be supported by the state and fed by the collaborative research of scientists, a vision that was unprecedented in its scope, ambition, and forms at the time.
Natural philosophers came to view nature increasingly as a mechanism that could be taken apart and understood, much like a complex clock. Natural philosophers including Isaac Newton, Evangelista Torricelli and Francesco Redi, Edme Mariotte, Jean-Baptiste Denis and Jacques Rohault conducted experiments focusing on the flow of water, measuring atmospheric pressure using a barometer and disproving spontaneous generation. Scientific societies and scientific journals emerged and were spread widely through the printing press, touching off the scientific revolution. Newton in 1687 published his The Mathematical Principles of Natural Philosophy, or Principia Mathematica, which set the groundwork for physical laws that remained current until the 19th century.
Some modern scholars, including Andrew Cunningham, Perry Williams, and Floris Cohen, argue that natural philosophy is not properly called science and that genuine scientific inquiry began only with the scientific revolution. According to Cohen, "the emancipation of science from an overarching entity called 'natural philosophy' is one defining characteristic of the Scientific Revolution." Other historians of science, including Edward Grant, contend that the scientific revolution that blossomed in the 17th, 18th, and 19th centuries occurred when principles learned in the exact sciences of optics, mechanics, and astronomy began to be applied to questions raised by natural philosophy. Grant argues that Newton attempted to expose the mathematical basis of nature – the immutable rules it obeyed – and, in doing so, joined natural philosophy and mathematics for the first time, producing an early work of modern physics.
The scientific revolution, which began to take hold in the 17th century, represented a sharp break from Aristotelian modes of inquiry. One of its principal advances was the use of the scientific method to investigate nature. Data was collected, and repeatable measurements were made in experiments. Scientists then formed hypotheses to explain the results of these experiments. The hypothesis was then tested using the principle of falsifiability to prove or disprove its accuracy. The natural sciences continued to be called natural philosophy, but the adoption of the scientific method took science beyond the realm of philosophical conjecture and introduced a more structured way of examining nature.
Newton, an English mathematician and physicist, was a seminal figure in the scientific revolution. Drawing on advances made in astronomy by Copernicus, Brahe, and Kepler, Newton derived the universal law of gravitation and laws of motion. These laws applied both on earth and in outer space, uniting two spheres of the physical world previously thought to function independently, according to separate physical rules. Newton, for example, showed that the tides were caused by the gravitational pull of the moon. Another of Newton's advances was to make mathematics a powerful explanatory tool for natural phenomena. While natural philosophers had long used mathematics as a means of measurement and analysis, its principles were not used as a means of understanding cause and effect in nature until Newton.
In the 18th century and 19th century, scientists including Charles-Augustin de Coulomb, Alessandro Volta, and Michael Faraday built upon Newtonian mechanics by exploring electromagnetism, or the interplay of forces with positive and negative charges on electrically charged particles. Faraday proposed that forces in nature operated in "fields" that filled space. The idea of fields contrasted with the Newtonian construct of gravitation as simply "action at a distance", or the attraction of objects with nothing in the space between them to intervene. James Clerk Maxwell in the 19th century unified these discoveries in a coherent theory of electrodynamics. Using mathematical equations and experimentation, Maxwell discovered that space was filled with charged particles that could act upon each other and were a medium for transmitting charged waves.
Significant advances in chemistry also took place during the scientific revolution. Antoine Lavoisier, a French chemist, refuted the phlogiston theory, which posited that things burned by releasing "phlogiston" into the air. Joseph Priestley had discovered oxygen in the 18th century, but Lavoisier discovered that combustion was the result of oxidation. He also constructed a table of 33 elements and invented modern chemical nomenclature. Formal biological science remained in its infancy in the 18th century, when the focus lay upon the classification and categorization of natural life. This growth in natural history was led by Carl Linnaeus, whose 1735 taxonomy of the natural world is still in use. Linnaeus, in the 1750s, introduced scientific names for all his species.
=== 19th-century developments (1800–1900) ===
By the 19th century, the study of science had come into the purview of professionals and institutions. In so doing, it gradually acquired the more modern name of natural science. The term scientist was coined by William Whewell in an 1834 review of Mary Somerville's On the Connexion of the Sciences. But the word did not enter general use until nearly the end of the same century.
=== Modern natural science (1900–present) ===
According to a famous 1923 textbook, Thermodynamics and the Free Energy of Chemical Substances, by the American chemist Gilbert N. Lewis and the American physical chemist Merle Randall, the natural sciences contain three great branches:
Aside from the logical and mathematical sciences, there are three great branches of natural science which stand apart by reason of the variety of far reaching deductions drawn from a small number of primary postulates — they are mechanics, electrodynamics, and thermodynamics.
Today, natural sciences are more commonly divided into life sciences, such as botany and zoology, and physical sciences, which include physics, chemistry, astronomy, and Earth sciences.
== See also ==
Branches of science
Empiricism
List of academic disciplines and sub-disciplines
Logology (science)
Natural history
Natural Sciences (Cambridge), for the Tripos at the University of Cambridge
== References ==
=== Bibliography ===
== Further reading ==
Ledoux, S. F. (2002). "Defining Natural Sciences", Behaviorology Today, 5(1), 34–36.
Stokes, Donald E. (1997). Pasteur's Quadrant: Basic Science and Technological Innovation. Washington, D.C.: Brookings Institution Press. ISBN 978-0-8157-8177-6.
The History of Recent Science and Technology
Natural Sciences Contains updated information on research in the Natural Sciences including biology, geography and the applied life and earth sciences.
Reviews of Books About Natural Science This site contains over 50 previously published reviews of books about natural science, plus selected essays on timely topics in natural science.
Scientific Grant Awards Database Contains details of over 2,000,000 scientific research projects conducted over the past 25 years.
E!Science Up-to-date science news aggregator from major sources including universities. | Wikipedia/Natural_science |
In mathematics, an autonomous system or autonomous differential equation is a system of ordinary differential equations which does not explicitly depend on the independent variable. When the variable is time, they are also called time-invariant systems.
Many laws in physics, where the independent variable is usually assumed to be time, are expressed as autonomous systems because it is assumed the laws of nature which hold now are identical to those for any point in the past or future.
== Definition ==
An autonomous system is a system of ordinary differential equations of the form
{\displaystyle {\frac {d}{dt}}x(t)=f(x(t))}
where x takes values in n-dimensional Euclidean space; t is often interpreted as time.
It is distinguished from systems of differential equations of the form
{\displaystyle {\frac {d}{dt}}x(t)=g(x(t),t)}
in which the law governing the evolution of the system does not depend solely on the system's current state but also the parameter t, again often interpreted as time; such systems are by definition not autonomous.
== Properties ==
Solutions are invariant under horizontal translations:
Let {\displaystyle x_{1}(t)} be a unique solution of the initial value problem for an autonomous system
{\displaystyle {\frac {d}{dt}}x(t)=f(x(t))\,,\quad x(0)=x_{0}.}
Then {\displaystyle x_{2}(t)=x_{1}(t-t_{0})} solves
{\displaystyle {\frac {d}{dt}}x(t)=f(x(t))\,,\quad x(t_{0})=x_{0}.}
Indeed, denoting {\displaystyle s=t-t_{0}} gives {\displaystyle x_{1}(s)=x_{2}(t)} and {\displaystyle ds=dt}, thus
{\displaystyle {\frac {d}{dt}}x_{2}(t)={\frac {d}{dt}}x_{1}(t-t_{0})={\frac {d}{ds}}x_{1}(s)=f(x_{1}(s))=f(x_{2}(t)).}
For the initial condition, the verification is trivial,
{\displaystyle x_{2}(t_{0})=x_{1}(t_{0}-t_{0})=x_{1}(0)=x_{0}.}
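This invariance is easy to check numerically. The following is a minimal sketch, assuming GNU Octave and its built-in lsode integrator; the right-hand side and parameter values are illustrative, not from the article:
f  = @(x, t) (2 - x) .* x;   % an autonomous right-hand side f(x)
x0 = 0.1;                    % initial value
t0 = 1.5;                    % horizontal shift
t  = linspace(0, 5, 501);
x1 = lsode(f, x0, t);        % solves x' = f(x) with x(0) = x0
x2 = lsode(f, x0, t0 + t);   % solves x' = f(x) with x(t0) = x0, sampled at t0 + t
max(abs(x1 - x2))            % agrees up to solver tolerance: x2(t0 + s) = x1(s)
Because f never sees t, shifting the initial time merely shifts the solution.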
== Example ==
The equation
{\displaystyle y'=\left(2-y\right)y}
is autonomous, since the independent variable ({\displaystyle x}) does not explicitly appear in the equation.
To plot the slope field and isocline for this equation, one can use the following code in GNU Octave/MATLAB.
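The original listing is not preserved in this copy; a minimal sketch of such code (an illustration, not the article's own listing), using the standard quiver and plot commands with illustrative grid ranges:
[x, y] = meshgrid(0:0.2:6, -1:0.2:3);       % grid over the (x, y) plane
dy = (2 - y) .* y;                          % slope y' = (2 - y) y at each grid point
dx = ones(size(dy));                        % direction field (1, y')
quiver(x, y, dx, dy);                       % draw the slope field
hold on;
plot([0 6], [0 0], 'r', [0 6], [2 2], 'r'); % zero-slope isoclines y = 0 and y = 2
hold off;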
One can observe from the plot that the function {\displaystyle \left(2-y\right)y} is {\displaystyle x}-invariant, and so is the shape of the solution, i.e. {\displaystyle y(x)=y(x-x_{0})} for any shift {\displaystyle x_{0}}.
Solving the equation symbolically in MATLAB, by running
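A sketch of the kind of command intended (not the article's original listing), assuming MATLAB's Symbolic Math Toolbox:
syms y(x)
dsolve(diff(y, x) == (2 - y) * y)   % returns the equilibria and the general solution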
obtains two equilibrium solutions, {\displaystyle y=0} and {\displaystyle y=2}, and a third solution involving an unknown constant {\displaystyle C_{3}}: -2 / (exp(C3 - 2 * x) - 1).
Picking specific values for the initial condition, one can add plots of several solutions.
== Qualitative analysis ==
Autonomous systems can be analyzed qualitatively using the phase space; in the one-variable case, this is the phase line.
== Solution techniques ==
The following techniques apply to one-dimensional autonomous differential equations. Any one-dimensional equation of order {\displaystyle n} is equivalent to an {\displaystyle n}-dimensional first-order system (as described in reduction to a first-order system), but not necessarily vice versa.
=== First order ===
The first-order autonomous equation
{\displaystyle {\frac {dx}{dt}}=f(x)}
is separable, so it can be solved by rearranging it into the integral form
{\displaystyle t+C=\int {\frac {dx}{f(x)}}}
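As a quick illustration, take {\displaystyle f(x)=kx}: the integral form gives {\displaystyle t+C=\int {\frac {dx}{kx}}={\frac {1}{k}}\ln |x|}, and solving for {\displaystyle x} recovers the familiar exponential {\displaystyle x(t)=C'e^{kt}} with {\displaystyle C'=\pm e^{kC}}.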
=== Second order ===
The second-order autonomous equation
{\displaystyle {\frac {d^{2}x}{dt^{2}}}=f(x,x')}
is more difficult, but it can be solved by introducing the new variable
{\displaystyle v={\frac {dx}{dt}}}
and expressing the second derivative of {\displaystyle x} via the chain rule as
{\displaystyle {\frac {d^{2}x}{dt^{2}}}={\frac {dv}{dt}}={\frac {dx}{dt}}{\frac {dv}{dx}}=v{\frac {dv}{dx}}}
so that the original equation becomes
{\displaystyle v{\frac {dv}{dx}}=f(x,v)}
which is a first-order equation containing no reference to the independent variable {\displaystyle t}. Solving provides {\displaystyle v} as a function of {\displaystyle x}. Then, recalling the definition of {\displaystyle v}:
{\displaystyle {\frac {dx}{dt}}=v(x)\quad \Rightarrow \quad t+C=\int {\frac {dx}{v(x)}}}
which is an implicit solution.
==== Special case: x″ = f(x) ====
The special case where {\displaystyle f} is independent of {\displaystyle x'},
{\displaystyle {\frac {d^{2}x}{dt^{2}}}=f(x)}
benefits from separate treatment. These types of equations are very common in classical mechanics because they are always Hamiltonian systems.
The idea is to make use of the identity
{\displaystyle {\frac {dx}{dt}}=\left({\frac {dt}{dx}}\right)^{-1}}
which follows from the chain rule, barring any issues due to division by zero.
By inverting both sides of a first order autonomous system, one can immediately integrate with respect to {\displaystyle x}:
{\displaystyle {\frac {dx}{dt}}=f(x)\quad \Rightarrow \quad {\frac {dt}{dx}}={\frac {1}{f(x)}}\quad \Rightarrow \quad t+C=\int {\frac {dx}{f(x)}}}
which is another way to view the separation of variables technique. The second derivative must be expressed as a derivative with respect to {\displaystyle x} instead of {\displaystyle t}:
{\displaystyle {\begin{aligned}{\frac {d^{2}x}{dt^{2}}}&={\frac {d}{dt}}\left({\frac {dx}{dt}}\right)={\frac {d}{dx}}\left({\frac {dx}{dt}}\right){\frac {dx}{dt}}\\[4pt]&={\frac {d}{dx}}\left(\left({\frac {dt}{dx}}\right)^{-1}\right)\left({\frac {dt}{dx}}\right)^{-1}\\[4pt]&=-\left({\frac {dt}{dx}}\right)^{-2}{\frac {d^{2}t}{dx^{2}}}\left({\frac {dt}{dx}}\right)^{-1}=-\left({\frac {dt}{dx}}\right)^{-3}{\frac {d^{2}t}{dx^{2}}}\\[4pt]&={\frac {d}{dx}}\left({\frac {1}{2}}\left({\frac {dt}{dx}}\right)^{-2}\right)\end{aligned}}}
To reemphasize: what's been accomplished is that the second derivative with respect to {\displaystyle t} has been expressed as a derivative of {\displaystyle x}. The original second order equation can now be integrated:
{\displaystyle {\begin{aligned}{\frac {d^{2}x}{dt^{2}}}&=f(x)\\{\frac {d}{dx}}\left({\frac {1}{2}}\left({\frac {dt}{dx}}\right)^{-2}\right)&=f(x)\\\left({\frac {dt}{dx}}\right)^{-2}&=2\int f(x)dx+C_{1}\\{\frac {dt}{dx}}&=\pm {\frac {1}{\sqrt {2\int f(x)dx+C_{1}}}}\\t+C_{2}&=\pm \int {\frac {dx}{\sqrt {2\int f(x)dx+C_{1}}}}\end{aligned}}}
This is an implicit solution. The greatest potential problem is inability to simplify the integrals, which implies difficulty or impossibility in evaluating the integration constants.
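As a quick check of this formula, take {\displaystyle f(x)=-x} (the harmonic oscillator): then {\displaystyle 2\int f(x)dx+C_{1}=C_{1}-x^{2}}, so {\displaystyle t+C_{2}=\pm \int {\frac {dx}{\sqrt {C_{1}-x^{2}}}}=\pm \arcsin \left(x/{\sqrt {C_{1}}}\right)}, which inverts to the expected oscillation {\displaystyle x(t)=\pm {\sqrt {C_{1}}}\sin(t+C_{2})}.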
==== Special case: x″ = x′n f(x) ====
Using the above approach, the technique can extend to the more general equation
{\displaystyle {\frac {d^{2}x}{dt^{2}}}=\left({\frac {dx}{dt}}\right)^{n}f(x)}
where {\displaystyle n} is some parameter not equal to two. This will work since the second derivative can be written in a form involving a power of {\displaystyle x'}. Rewriting the second derivative, rearranging, and expressing the left side as a derivative:
{\displaystyle {\begin{aligned}&-\left({\frac {dt}{dx}}\right)^{-3}{\frac {d^{2}t}{dx^{2}}}=\left({\frac {dt}{dx}}\right)^{-n}f(x)\\[4pt]&-\left({\frac {dt}{dx}}\right)^{n-3}{\frac {d^{2}t}{dx^{2}}}=f(x)\\[4pt]&{\frac {d}{dx}}\left({\frac {1}{2-n}}\left({\frac {dt}{dx}}\right)^{n-2}\right)=f(x)\\[4pt]&\left({\frac {dt}{dx}}\right)^{n-2}=(2-n)\int f(x)dx+C_{1}\\[2pt]&t+C_{2}=\int \left((2-n)\int f(x)dx+C_{1}\right)^{\frac {1}{n-2}}dx\end{aligned}}}
The right side will carry a ± sign if {\displaystyle n} is even. The treatment must be different if {\displaystyle n=2}:
{\displaystyle {\begin{aligned}-\left({\frac {dt}{dx}}\right)^{-1}{\frac {d^{2}t}{dx^{2}}}&=f(x)\\-{\frac {d}{dx}}\left(\ln \left({\frac {dt}{dx}}\right)\right)&=f(x)\\{\frac {dt}{dx}}&=C_{1}e^{-\int f(x)dx}\\t+C_{2}&=C_{1}\int e^{-\int f(x)dx}dx\end{aligned}}}
=== Higher orders ===
There is no analogous method for solving third- or higher-order autonomous equations. Such equations can only be solved exactly if they happen to have some other simplifying property, for instance linearity or dependence of the right side of the equation on the dependent variable only (i.e., not its derivatives). This should not be surprising, considering that nonlinear autonomous systems in three dimensions can produce truly chaotic behavior such as the Lorenz attractor and the Rössler attractor.
Likewise, general non-autonomous equations of second order are unsolvable explicitly, since these can also be chaotic, as in a periodically forced pendulum.
=== Multivariate case ===
Consider the linear system
{\displaystyle \mathbf {x} '(t)=A\mathbf {x} (t),}
where {\displaystyle \mathbf {x} (t)} is an {\displaystyle n}-dimensional column vector dependent on {\displaystyle t} and {\displaystyle A} is a constant {\displaystyle n\times n} matrix.
The solution is
{\displaystyle \mathbf {x} (t)=e^{At}\mathbf {c} }
where {\displaystyle \mathbf {c} } is an {\displaystyle n\times 1} constant vector.
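As a small numerical illustration (a sketch assuming GNU Octave/MATLAB, whose built-in expm computes the matrix exponential; the matrix and initial data are illustrative):
A = [0 1; -1 0];        % x' = A x, a harmonic oscillator written as a system
c = [1; 0];             % fixed by the initial condition x(0) = c
t = 0.5;
x = expm(A * t) * c     % x(t) = e^(At) c; here equals [cos(t); -sin(t)]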
=== Finite durations ===
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that, from its own dynamics, the system will reach the value zero at an ending time and stay at zero forever after. These finite-duration solutions cannot be analytical functions on the whole real line, and because they are non-Lipschitz functions at the ending time, they do not satisfy the uniqueness guarantees that hold for solutions of Lipschitz differential equations.
As an example, the equation
{\displaystyle y'=-{\text{sgn}}(y){\sqrt {|y|}},\,\,y(0)=1}
admits the finite-duration solution
{\displaystyle y(x)={\frac {1}{4}}\left(1-{\frac {x}{2}}+\left|1-{\frac {x}{2}}\right|\right)^{2}}
Indeed, for {\displaystyle 0\leq x<2} this reduces to {\displaystyle y(x)=(1-x/2)^{2}}, which satisfies {\displaystyle y'=-{\sqrt {y}}}, while for {\displaystyle x\geq 2} it is identically zero.
== See also ==
Non-autonomous system (mathematics)
== References == | Wikipedia/Autonomous_differential_equation |
In mathematics, in the area of numerical analysis, Galerkin methods are a family of methods for converting a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions. They are named after the Soviet mathematician Boris Galerkin.
Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used:
Ritz–Galerkin method (after Walther Ritz) typically assumes a symmetric and positive-definite bilinear form in the weak formulation, where the differential equation for a physical system can be formulated via minimization of a quadratic function representing the system energy and the approximate solution is a linear combination of the given set of the basis functions.
Bubnov–Galerkin method (after Ivan Bubnov) does not require the bilinear form to be symmetric and substitutes the energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution. In an operator formulation of the differential equation, the Bubnov–Galerkin method can be viewed as applying an orthogonal projection to the operator.
Petrov–Galerkin method (after Georgii I. Petrov) allows using basis functions for orthogonality constraints (called test basis functions) that are different from the basis functions used to approximate the solution. The Petrov–Galerkin method can be viewed as an extension of the Bubnov–Galerkin method, applying a projection that is not necessarily orthogonal in the operator formulation of the differential equation.
Examples of Galerkin methods are:
the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method,
the boundary element method for solving integral equations,
Krylov subspace methods.
== Linear equation in a Hilbert space ==
=== Weak formulation of a linear equation ===
Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space {\displaystyle V}, namely,
find {\displaystyle u\in V} such that for all {\displaystyle v\in V:a(u,v)=f(v)}.
Here, {\displaystyle a(\cdot ,\cdot )} is a bilinear form (the exact requirements on {\displaystyle a(\cdot ,\cdot )} will be specified later) and {\displaystyle f} is a bounded linear functional on {\displaystyle V}.
=== Galerkin dimension reduction ===
Choose a subspace {\displaystyle V_{n}\subset V} of dimension n and solve the projected problem:
Find {\displaystyle u_{n}\in V_{n}} such that for all {\displaystyle v_{n}\in V_{n},a(u_{n},v_{n})=f(v_{n})}.
We call this the Galerkin equation. Notice that the equation has remained unchanged and only the spaces have changed.
Reducing the problem to a finite-dimensional vector subspace allows us to numerically compute {\displaystyle u_{n}} as a finite linear combination of the basis vectors in {\displaystyle V_{n}}.
=== Galerkin orthogonality ===
The key property of the Galerkin approach is that the error is orthogonal to the chosen subspaces. Since {\displaystyle V_{n}\subset V}, we can use {\displaystyle v_{n}} as a test vector in the original equation. Subtracting the two, we get the Galerkin orthogonality relation for the error {\displaystyle \epsilon _{n}=u-u_{n}}, which is the difference between the solution of the original problem, {\displaystyle u}, and the solution of the Galerkin equation, {\displaystyle u_{n}}:
{\displaystyle a(\epsilon _{n},v_{n})=a(u,v_{n})-a(u_{n},v_{n})=f(v_{n})-f(v_{n})=0.}
=== Matrix form of Galerkin's equation ===
Since the aim of Galerkin's method is the production of a linear system of equations, we build its matrix form, which can be used to compute the solution algorithmically.
Let {\displaystyle e_{1},e_{2},\ldots ,e_{n}} be a basis for {\displaystyle V_{n}}. Then, it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find {\displaystyle u_{n}\in V_{n}} such that
{\displaystyle a(u_{n},e_{i})=f(e_{i})\quad i=1,\ldots ,n.}
We expand {\displaystyle u_{n}} with respect to this basis, {\displaystyle u_{n}=\sum _{j=1}^{n}u_{j}e_{j}}, and insert it into the equation above, to obtain
{\displaystyle a\left(\sum _{j=1}^{n}u_{j}e_{j},e_{i}\right)=\sum _{j=1}^{n}u_{j}a(e_{j},e_{i})=f(e_{i})\quad i=1,\ldots ,n.}
This previous equation is actually a linear system of equations {\displaystyle Au=f}, where
{\displaystyle A_{ij}=a(e_{j},e_{i}),\quad f_{i}=f(e_{i}).}
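As a concrete illustration of assembling and solving this system, the following is a minimal sketch in GNU Octave/MATLAB for the model problem -u'' = f on (0,1) with u(0) = u(1) = 0, the bilinear form a(u,v) equal to the integral of u'v', and hat (piecewise-linear) basis functions on a uniform grid; the problem choice and the one-point quadrature for the load vector are illustrative assumptions, not from the article:
n = 99;                      % number of interior nodes / hat basis functions
h = 1 / (n + 1);             % mesh width
x = (1:n)' * h;              % interior node positions
% For hat functions, A_ij = a(e_j, e_i) = integral of e_j' e_i' is tridiagonal:
% 2/h on the diagonal, -1/h next to it, 0 elsewhere (non-overlapping hats).
A = (1 / h) * (2 * eye(n) - diag(ones(n - 1, 1), 1) - diag(ones(n - 1, 1), -1));
f = @(s) pi^2 * sin(pi * s); % right-hand side with known exact solution sin(pi s)
b = h * f(x);                % f_i = integral of f e_i, approximated by h f(x_i)
u = A \ b;                   % coefficients u_j of u_n in the hat-function basis
max(abs(u - sin(pi * x)))    % nodal error is small (of order h^2)
Because each hat overlaps only its immediate neighbours, most entries a(e_j, e_i) vanish, which is what makes Galerkin discretizations of local operators sparse and cheap to solve.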
==== Symmetry of the matrix ====
Due to the definition of the matrix entries, the matrix of the Galerkin equation is symmetric if and only if the bilinear form {\displaystyle a(\cdot ,\cdot )} is symmetric.
== Analysis of Galerkin methods ==
Here, we will restrict ourselves to symmetric bilinear forms, that is
{\displaystyle a(u,v)=a(v,u).}
While this is not really a restriction of Galerkin methods, the application of the standard theory becomes much simpler. Furthermore, a Petrov–Galerkin method may be required in the nonsymmetric case.
The analysis of these methods proceeds in two steps. First, we will show that the Galerkin equation is a well-posed problem in the sense of Hadamard and therefore admits a unique solution. In the second step, we study the quality of approximation of the Galerkin solution {\displaystyle u_{n}}.
The analysis will mostly rest on two properties of the bilinear form, namely
Boundedness: for all {\displaystyle u,v\in V} holds {\displaystyle a(u,v)\leq C\|u\|\,\|v\|} for some constant {\displaystyle C>0};
Ellipticity: for all {\displaystyle u\in V} holds {\displaystyle a(u,u)\geq c\|u\|^{2}} for some constant {\displaystyle c>0}.
By the Lax–Milgram theorem (see weak formulation), these two conditions imply well-posedness of the original problem in weak formulation. All norms in the following sections will be norms for which the above inequalities hold (such a norm is often called an energy norm).
=== Well-posedness of the Galerkin equation ===
Since {\displaystyle V_{n}\subset V}, boundedness and ellipticity of the bilinear form apply to {\displaystyle V_{n}}. Therefore, the well-posedness of the Galerkin problem is actually inherited from the well-posedness of the original problem.
=== Quasi-best approximation (Céa's lemma) ===
The error {\displaystyle u-u_{n}} between the original and the Galerkin solution admits the estimate
{\displaystyle \|u-u_{n}\|\leq {\frac {C}{c}}\inf _{v_{n}\in V_{n}}\|u-v_{n}\|.}
This means that, up to the constant {\displaystyle C/c}, the Galerkin solution {\displaystyle u_{n}} is as close to the original solution {\displaystyle u} as any other vector in {\displaystyle V_{n}}. In particular, it will be sufficient to study approximation by spaces {\displaystyle V_{n}}, completely forgetting about the equation being solved.
==== Proof ====
Since the proof is very simple and the basic principle behind all Galerkin methods, we include it here: by ellipticity and boundedness of the bilinear form (inequalities) and Galerkin orthogonality (equals sign in the middle), we have for arbitrary {\displaystyle v_{n}\in V_{n}}:
{\displaystyle c\|u-u_{n}\|^{2}\leq a(u-u_{n},u-u_{n})=a(u-u_{n},u-v_{n})\leq C\|u-u_{n}\|\,\|u-v_{n}\|.}
Dividing by {\displaystyle c\|u-u_{n}\|} and taking the infimum over all possible {\displaystyle v_{n}} yields the lemma.
=== Galerkin's best approximation property in the energy norm ===
For simplicity of presentation in the section above we have assumed that the bilinear form {\displaystyle a(u,v)} is symmetric and positive-definite, which implies that it is a scalar product and the expression {\displaystyle \|u\|_{a}={\sqrt {a(u,u)}}} is actually a valid vector norm, called the energy norm. Under these assumptions one can easily prove in addition Galerkin's best approximation property in the energy norm.
Using Galerkin a-orthogonality and the Cauchy–Schwarz inequality for the energy norm, we obtain
{\displaystyle \|u-u_{n}\|_{a}^{2}=a(u-u_{n},u-u_{n})=a(u-u_{n},u-v_{n})\leq \|u-u_{n}\|_{a}\,\|u-v_{n}\|_{a}.}
Dividing by {\displaystyle \|u-u_{n}\|_{a}} and taking the infimum over all possible {\displaystyle v_{n}\in V_{n}} proves that the Galerkin approximation {\displaystyle u_{n}\in V_{n}} is the best approximation in the energy norm within the subspace {\displaystyle V_{n}\subset V}; that is, {\displaystyle u_{n}\in V_{n}} is nothing but the orthogonal projection, with respect to the scalar product {\displaystyle a(u,v)}, of the solution {\displaystyle u} onto the subspace {\displaystyle V_{n}}.
== Galerkin method for stepped structures ==
I. Elishakoff, M. Amato, A. Marzani, P.A. Arvan, and J.N. Reddy studied the application of the Galerkin method to stepped structures. They showed that generalized functions, namely the unit-step function, the Dirac delta function, and the doublet function, are needed for obtaining accurate results.
== History ==
The approach is usually credited to Boris Galerkin. The method was explained to the Western reader by Hencky and Duncan among others. Its convergence was studied by Mikhlin and Leipholz. Its coincidence with the Fourier method was illustrated by Elishakoff et al. Its equivalence to Ritz's method for conservative problems was shown by Singer. Gander and Wanner showed how the Ritz and Galerkin methods led to the modern finite element method. One hundred years of the method's development was discussed by Repin. Elishakoff, Kaplunov and Kaplunov showed that Galerkin's method was not developed by Ritz, contrary to Timoshenko's statements.
== See also ==
Ritz method
== References ==
== External links ==
"Galerkin method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Galerkin Method from MathWorld | Wikipedia/Galerkin_method |
In numerical analysis, the Runge–Kutta methods (English: RUUNG-ə-KUUT-tah) are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of simultaneous nonlinear equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta.
== The Runge–Kutta method ==
The most widely known member of the Runge–Kutta family is generally referred to as "RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method".
Let an initial value problem be specified as follows:
{\displaystyle {\frac {dy}{dt}}=f(t,y),\quad y(t_{0})=y_{0}.}
Here {\displaystyle y} is an unknown function (scalar or vector) of time {\displaystyle t}, which we would like to approximate; we are told that {\displaystyle {\frac {dy}{dt}}}, the rate at which {\displaystyle y} changes, is a function of {\displaystyle t} and of {\displaystyle y} itself. At the initial time {\displaystyle t_{0}} the corresponding {\displaystyle y} value is {\displaystyle y_{0}}. The function {\displaystyle f} and the initial conditions {\displaystyle t_{0}}, {\displaystyle y_{0}} are given.
Now we pick a step-size h > 0 and define:
{\displaystyle {\begin{aligned}y_{n+1}&=y_{n}+{\frac {h}{6}}\left(k_{1}+2k_{2}+2k_{3}+k_{4}\right),\\t_{n+1}&=t_{n}+h\\\end{aligned}}}
for n = 0, 1, 2, 3, ..., using
{\displaystyle {\begin{aligned}k_{1}&=\ f(t_{n},y_{n}),\\k_{2}&=\ f\!\left(t_{n}+{\frac {h}{2}},y_{n}+h{\frac {k_{1}}{2}}\right),\\k_{3}&=\ f\!\left(t_{n}+{\frac {h}{2}},y_{n}+h{\frac {k_{2}}{2}}\right),\\k_{4}&=\ f\!\left(t_{n}+h,y_{n}+hk_{3}\right).\end{aligned}}}
(Note: the above equations have different but equivalent definitions in different texts.)
Here {\displaystyle y_{n+1}} is the RK4 approximation of {\displaystyle y(t_{n+1})}, and the next value ({\displaystyle y_{n+1}}) is determined by the present value ({\displaystyle y_{n}}) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by function f on the right-hand side of the differential equation.
{\displaystyle k_{1}} is the slope at the beginning of the interval, using {\displaystyle y} (Euler's method);
{\displaystyle k_{2}} is the slope at the midpoint of the interval, using {\displaystyle y} and {\displaystyle k_{1}};
{\displaystyle k_{3}} is again the slope at the midpoint, but now using {\displaystyle y} and {\displaystyle k_{2}};
{\displaystyle k_{4}} is the slope at the end of the interval, using {\displaystyle y} and {\displaystyle k_{3}}.
In averaging the four slopes, greater weight is given to the slopes at the midpoint. If {\displaystyle f} is independent of {\displaystyle y}, so that the differential equation is equivalent to a simple integral, then RK4 is Simpson's rule.
The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of {\displaystyle O(h^{5})}, while the total accumulated error is on the order of {\displaystyle O(h^{4})}.
In many practical applications the function {\displaystyle f} is independent of {\displaystyle t} (a so-called autonomous or time-invariant system, especially in physics); in that case the time arguments of the increments need not be computed or passed to {\displaystyle f}, and only the final formula for {\displaystyle t_{n+1}} is used.
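Written out in code, one RK4 integration loop looks as follows; this is a minimal sketch in GNU Octave/MATLAB (function and variable names are illustrative, not from the original article):
function [t, y] = rk4(f, t0, y0, h, nsteps)
  % Classic fourth-order Runge-Kutta for y' = f(t, y), y(t0) = y0.
  % Returns the time grid and the approximations y_n at each grid point.
  t = t0 + (0:nsteps)' * h;
  y = zeros(nsteps + 1, numel(y0));
  y(1, :) = y0(:)';
  for n = 1:nsteps
    yn = y(n, :)';
    k1 = f(t(n),         yn);
    k2 = f(t(n) + h / 2, yn + h * k1 / 2);
    k3 = f(t(n) + h / 2, yn + h * k2 / 2);
    k4 = f(t(n) + h,     yn + h * k3);
    y(n + 1, :) = (yn + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))';
  end
end
For example, [t, y] = rk4(@(t, y) -2 * t .* y, 0, 1, 0.1, 10) approximates y(t) = exp(-t^2) on [0, 1].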
== Explicit Runge–Kutta methods ==
The family of explicit Runge–Kutta methods is a generalization of the RK4 method mentioned above. It is given by
{\displaystyle y_{n+1}=y_{n}+h\sum _{i=1}^{s}b_{i}k_{i},}
where
{\displaystyle {\begin{aligned}k_{1}&=f(t_{n},y_{n}),\\k_{2}&=f(t_{n}+c_{2}h,y_{n}+(a_{21}k_{1})h),\\k_{3}&=f(t_{n}+c_{3}h,y_{n}+(a_{31}k_{1}+a_{32}k_{2})h),\\&\ \ \vdots \\k_{s}&=f(t_{n}+c_{s}h,y_{n}+(a_{s1}k_{1}+a_{s2}k_{2}+\cdots +a_{s,s-1}k_{s-1})h).\end{aligned}}}
(Note: the above equations may have different but equivalent definitions in some texts.)
To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients {\displaystyle a_{ij}} (for 1 ≤ j < i ≤ s), {\displaystyle b_{i}} (for i = 1, 2, ..., s) and {\displaystyle c_{i}} (for i = 2, 3, ..., s). The matrix {\displaystyle [a_{ij}]} is called the Runge–Kutta matrix, while the {\displaystyle b_{i}} and {\displaystyle c_{i}} are known as the weights and the nodes. These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher):
{\displaystyle {\begin{array}{c|ccccc}0&&&&&\\c_{2}&a_{21}&&&&\\c_{3}&a_{31}&a_{32}&&&\\\vdots &\vdots &&\ddots &&\\c_{s}&a_{s1}&a_{s2}&\cdots &a_{s,s-1}&\\\hline &b_{1}&b_{2}&\cdots &b_{s-1}&b_{s}\end{array}}}
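Since the tableau coefficients fully determine an explicit method, the whole family can share one stepping routine. A minimal sketch in GNU Octave/MATLAB, assuming a strictly lower-triangular coefficient matrix and a scalar or column-vector state (names are illustrative):
function ynext = erk_step(f, t, y, h, A, b, c)
  % One step of the explicit Runge-Kutta method with Butcher tableau
  % (A, b, c): A strictly lower triangular, b the weights, c the nodes.
  s = numel(b);
  k = zeros(numel(y), s);
  for i = 1:s
    k(:, i) = f(t + c(i) * h, y + h * k(:, 1:i-1) * A(i, 1:i-1)');
  end
  ynext = y + h * k * b(:);
end
% Example: one RK4 step via its tableau, approximating y' = y, y(0) = 1:
% A = [0 0 0 0; 1/2 0 0 0; 0 1/2 0 0; 0 0 1 0];
% b = [1/6 1/3 1/3 1/6];  c = [0 1/2 1/2 1];
% erk_step(@(t, y) y, 0, 1, 0.1, A, b, c)   % close to exp(0.1)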
A Taylor series expansion shows that the Runge–Kutta method is consistent if and only if
{\displaystyle \sum _{i=1}^{s}b_{i}=1.}
There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is {\displaystyle O(h^{p+1})}. These can be derived from the definition of the truncation error itself. For example, a two-stage method has order 2 if {\displaystyle b_{1}+b_{2}=1}, {\displaystyle b_{2}c_{2}=1/2}, and {\displaystyle b_{2}a_{21}=1/2}. Note that a popular condition for determining coefficients is
{\displaystyle \sum _{j=1}^{i-1}a_{ij}=c_{i}{\text{ for }}i=2,\ldots ,s.}
This condition alone, however, is neither sufficient nor necessary for consistency.
In general, if an explicit {\displaystyle s}-stage Runge–Kutta method has order {\displaystyle p}, then it can be proven that the number of stages must satisfy {\displaystyle s\geq p}, and if {\displaystyle p\geq 5}, then {\displaystyle s\geq p+1}.
However, it is not known whether these bounds are sharp in all cases. In some cases, it is proven that the bound cannot be achieved. For instance, Butcher proved that for {\displaystyle p>6}, there is no explicit method with {\displaystyle s=p+1} stages. Butcher also proved that for {\displaystyle p>7}, there is no explicit Runge–Kutta method with {\displaystyle p+2} stages. In general, however, it remains an open problem what the precise minimum number of stages {\displaystyle s} is for an explicit Runge–Kutta method to have order {\displaystyle p}. Some values which are known are:
{\displaystyle {\begin{array}{c|cccccccc}p&1&2&3&4&5&6&7&8\\\hline \min s&1&2&3&4&6&7&9&11\end{array}}}
The provable bounds above imply that we cannot find methods of orders $p=1,2,\ldots ,6$ that require fewer stages than the methods we already know for these orders. The work of Butcher also proves that 7th and 8th order methods have a minimum of 9 and 11 stages, respectively. An example of an explicit method of order 6 with 7 stages can be found in Ref. Explicit methods of order 7 with 9 stages and explicit methods of order 8 with 11 stages are also known. See Refs. for a summary.
=== Examples ===
The RK4 method falls in this framework. Its tableau is
A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901 and is called the 3/8-rule. The primary advantage this method has is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating-point operations) per time step. Its Butcher tableau is
However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula
{\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n})}. This is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is
=== Second-order methods with two stages ===
An example of a second-order method with two stages is provided by the explicit midpoint method:
{\displaystyle y_{n+1}=y_{n}+hf\left(t_{n}+{\frac {1}{2}}h,y_{n}+{\frac {1}{2}}hf(t_{n},\ y_{n})\right).}
The corresponding tableau is
The midpoint method is not the only second-order Runge–Kutta method with two stages; there is a family of such methods, parameterized by α and given by the formula
{\displaystyle y_{n+1}=y_{n}+h{\bigl (}(1-{\tfrac {1}{2\alpha }})f(t_{n},y_{n})+{\tfrac {1}{2\alpha }}f(t_{n}+\alpha h,y_{n}+\alpha hf(t_{n},y_{n})){\bigr )}.}
Its Butcher tableau is
In this family, $\alpha ={\tfrac {1}{2}}$ gives the midpoint method, $\alpha =1$ is Heun's method, and $\alpha ={\tfrac {2}{3}}$ is Ralston's method.
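As an illustration, the whole family fits in a few lines of code; the following sketch (the function name is ours, not from the article) evaluates the three named members on dy/dt = -y:

```python
def rk2_alpha_step(f, t, y, h, alpha):
    """Generic two-stage second-order method from the alpha family."""
    k1 = f(t, y)
    k2 = f(t + alpha * h, y + alpha * h * k1)
    return y + h * ((1 - 1 / (2 * alpha)) * k1 + k2 / (2 * alpha))

# alpha = 1/2: midpoint, alpha = 1: Heun, alpha = 2/3: Ralston
for alpha, name in [(0.5, "midpoint"), (1.0, "Heun"), (2 / 3, "Ralston")]:
    y1 = rk2_alpha_step(lambda t, y: -y, 0.0, 1.0, 0.1, alpha)
    print(f"{name}: y1 = {y1:.6f}")   # all near exp(-0.1) = 0.904837...
```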
== Use ==
As an example, consider the two-stage second-order Runge–Kutta method with α = 2/3, also known as Ralston's method. It is given by the tableau
with the corresponding equations
{\displaystyle {\begin{aligned}k_{1}&=f(t_{n},\ y_{n}),\\k_{2}&=f(t_{n}+{\tfrac {2}{3}}h,\ y_{n}+{\tfrac {2}{3}}hk_{1}),\\y_{n+1}&=y_{n}+h\left({\tfrac {1}{4}}k_{1}+{\tfrac {3}{4}}k_{2}\right).\end{aligned}}}
This method is used to solve the initial-value problem
{\displaystyle {\frac {dy}{dt}}=\tan(y)+1,\quad y_{0}=1,\ t\in [1,1.1]}
with step size h = 0.025, so the method needs to take four steps.
The method proceeds by applying these formulas four times in succession; the sketch below reproduces the computation, and the values $y_{1},\ldots ,y_{4}$ are the numerical solution.
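A short script makes the four steps explicit (a sketch; the variable names are ours):

```python
import math

def ralston_step(f, t, y, h):
    # Two-stage second-order method with alpha = 2/3 (Ralston's method)
    k1 = f(t, y)
    k2 = f(t + 2 * h / 3, y + 2 * h / 3 * k1)
    return y + h * (k1 / 4 + 3 * k2 / 4)

f = lambda t, y: math.tan(y) + 1
t, y, h = 1.0, 1.0, 0.025
for n in range(4):                  # four steps cover t in [1, 1.1]
    y = ralston_step(f, t, y, h)
    t += h
    print(f"y({t:.3f}) = {y:.7f}")
```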
== Adaptive Runge–Kutta methods ==
Adaptive methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step. This is done by having two methods, one with order
$p$ and one with order $p-1$. These methods are interwoven, i.e., they have common intermediate steps. Thanks to this, estimating the error has little or negligible computational cost compared to a step with the higher-order method.
During the integration, the step size is adapted such that the estimated error stays below a user-defined threshold: if the error is too high, the step is repeated with a smaller step size; if the error is much smaller, the step size is increased to save time. This results in an (almost) optimal step size, which saves computation time. Moreover, the user does not have to spend time on finding an appropriate step size.
The lower-order step is given by
{\displaystyle y_{n+1}^{*}=y_{n}+h\sum _{i=1}^{s}b_{i}^{*}k_{i},}
where
$k_{i}$ are the same as for the higher-order method. Then the error is
{\displaystyle e_{n+1}=y_{n+1}-y_{n+1}^{*}=h\sum _{i=1}^{s}(b_{i}-b_{i}^{*})k_{i},}
which is $O(h^{p})$.
The Butcher tableau for this kind of method is extended to give the values of $b_{i}^{*}$:
The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is:
However, the simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is:
Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4).
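A minimal adaptive integrator along these lines can be sketched with the Heun–Euler pair; the acceptance test and the proportional step-size update below are common choices rather than part of the method definitions:

```python
def heun_euler_step(f, t, y, h):
    """Embedded pair: Heun (order 2) and Euler (order 1) share the stage k1,
    so the error estimate costs no extra evaluations of f."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y2 = y + h * (k1 + k2) / 2         # order-2 result (Heun)
    y1 = y + h * k1                    # order-1 result (Euler)
    return y2, abs(y2 - y1)            # step result and error estimate

def integrate(f, t, y, t_end, h=0.1, tol=1e-6):
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = heun_euler_step(f, t, y, h)
        if err <= tol:                 # accept the step
            t, y = t + h, y_new
        # proportional controller; exponent 1/2 matches the O(h^2) estimate
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y
```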
== Nonconfluent Runge–Kutta methods ==
A Runge–Kutta method is said to be nonconfluent if all the $c_{i},\,i=1,2,\ldots ,s$ are distinct.
== Runge–Kutta–Nyström methods ==
Runge–Kutta–Nyström methods are specialized Runge–Kutta methods that are optimized for second-order differential equations. A general Runge–Kutta–Nyström method for a second-order ODE system $\ddot {y}_{i}=f_{i}(y_{1},y_{2},\ldots ,y_{n})$ with $s$ stages has the form
{\displaystyle {\begin{cases}g_{i}=y_{m}+c_{i}h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}a_{ij}f(g_{j}),&i=1,2,\ldots ,s\\y_{m+1}=y_{m}+h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}{\bar {b}}_{j}f(g_{j})\\{\dot {y}}_{m+1}={\dot {y}}_{m}+h\sum _{j=1}^{s}b_{j}f(g_{j})\end{cases}}}
which forms a Butcher table with the form
{\displaystyle {\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &{\bar {b}}_{1}&{\bar {b}}_{2}&\dots &{\bar {b}}_{s}\\&b_{1}&b_{2}&\dots &b_{s}\end{array}}={\begin{array}{c|c}\mathbf {c} &\mathbf {A} \\\hline &\mathbf {\bar {b}} ^{\top }\\&\mathbf {b} ^{\top }\end{array}}}
Two fourth-order explicit RKN methods are given by the following Butcher tables:
{\displaystyle {\begin{array}{c|ccc}c_{i}&&a_{ij}&\\{\frac {3+{\sqrt {3}}}{6}}&0&0&0\\{\frac {3-{\sqrt {3}}}{6}}&{\frac {2-{\sqrt {3}}}{12}}&0&0\\{\frac {3+{\sqrt {3}}}{6}}&0&{\frac {\sqrt {3}}{6}}&0\\\hline {\overline {b_{i}}}&{\frac {5-3{\sqrt {3}}}{24}}&{\frac {3+{\sqrt {3}}}{12}}&{\frac {1+{\sqrt {3}}}{24}}\\\hline b_{i}&{\frac {3-2{\sqrt {3}}}{12}}&{\frac {1}{2}}&{\frac {3+2{\sqrt {3}}}{12}}\end{array}}}
{\displaystyle {\begin{array}{c|ccc}c_{i}&&a_{ij}&\\{\frac {3-{\sqrt {3}}}{6}}&0&0&0\\{\frac {3+{\sqrt {3}}}{6}}&{\frac {2+{\sqrt {3}}}{12}}&0&0\\{\frac {3-{\sqrt {3}}}{6}}&0&-{\frac {\sqrt {3}}{6}}&0\\\hline {\overline {b_{i}}}&{\frac {5+3{\sqrt {3}}}{24}}&{\frac {3-{\sqrt {3}}}{12}}&{\frac {1-{\sqrt {3}}}{24}}\\\hline b_{i}&{\frac {3+2{\sqrt {3}}}{12}}&{\frac {1}{2}}&{\frac {3-2{\sqrt {3}}}{12}}\end{array}}}
These two schemes also have symplectic-preserving properties when the original equation is derived from a conservative classical mechanical system, i.e. when $f_{i}(x_{1},\ldots ,x_{n})={\frac {\partial V}{\partial x_{i}}}(x_{1},\ldots ,x_{n})$ for some scalar function $V$.
== Implicit Runge–Kutta methods ==
All Runge–Kutta methods mentioned up to now are explicit methods. Explicit Runge–Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small; in particular, it is bounded.
This issue is especially important in the solution of partial differential equations.
The instability of explicit Runge–Kutta methods motivates the development of implicit methods. An implicit Runge–Kutta method has the form
{\displaystyle y_{n+1}=y_{n}+h\sum _{i=1}^{s}b_{i}k_{i},}
where
{\displaystyle k_{i}=f\left(t_{n}+c_{i}h,\ y_{n}+h\sum _{j=1}^{s}a_{ij}k_{j}\right),\quad i=1,\ldots ,s.}
The difference with an explicit method is that in an explicit method, the sum over j only goes up to i − 1. This also shows up in the Butcher tableau: the coefficient matrix
$a_{ij}$ of an explicit method is lower triangular. In an implicit method, the sum over j goes up to s and the coefficient matrix is not triangular, yielding a Butcher tableau of the form
{\displaystyle {\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &b_{1}&b_{2}&\dots &b_{s}\\&b_{1}^{*}&b_{2}^{*}&\dots &b_{s}^{*}\\\end{array}}={\begin{array}{c|c}\mathbf {c} &A\\\hline &\mathbf {b^{T}} \\\end{array}}}
See Adaptive Runge–Kutta methods above for the explanation of the $b^{*}$ row.
The consequence of this difference is that at every step, a system of algebraic equations has to be solved. This increases the computational cost considerably. If a method with s stages is used to solve a differential equation with m components, then the system of algebraic equations has ms components. This can be contrasted with implicit linear multistep methods (the other big family of methods for ODEs): an implicit s-step linear multistep method needs to solve a system of algebraic equations with only m components, so the size of the system does not increase as the number of steps increases.
=== Examples ===
The simplest example of an implicit Runge–Kutta method is the backward Euler method:
{\displaystyle y_{n+1}=y_{n}+hf(t_{n}+h,\ y_{n+1}).}
The Butcher tableau for this is simply:
{\displaystyle {\begin{array}{c|c}1&1\\\hline &1\\\end{array}}}
This Butcher tableau corresponds to the formulae
{\displaystyle k_{1}=f(t_{n}+h,\ y_{n}+hk_{1})\quad {\text{and}}\quad y_{n+1}=y_{n}+hk_{1},}
which can be re-arranged to get the formula for the backward Euler method listed above.
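Concretely, each step requires solving an algebraic equation for the stage value. A sketch for a scalar, non-stiff problem follows (a Newton iteration would replace the fixed-point loop for stiff problems; the function name and tolerances are our own choices):

```python
def backward_euler_step(f, t, y, h, tol=1e-12, max_iter=100):
    """Advance one backward Euler step by solving k1 = f(t + h, y + h*k1)."""
    k1 = f(t, y)                       # forward Euler slope as initial guess
    for _ in range(max_iter):
        k1_next = f(t + h, y + h * k1)
        if abs(k1_next - k1) < tol:
            k1 = k1_next
            break
        k1 = k1_next
    return y + h * k1
```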
Another example for an implicit Runge–Kutta method is the trapezoidal rule. Its Butcher tableau is:
{\displaystyle {\begin{array}{c|cc}0&0&0\\1&{\frac {1}{2}}&{\frac {1}{2}}\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\\&1&0\\\end{array}}}
The trapezoidal rule is a collocation method (as discussed in that article). All collocation methods are implicit Runge–Kutta methods, but not all implicit Runge–Kutta methods are collocation methods.
The Gauss–Legendre methods form a family of collocation methods based on Gauss quadrature. A Gauss–Legendre method with s stages has order 2s (thus, methods with arbitrarily high order can be constructed). The method with two stages (and thus order four) has Butcher tableau:
{\displaystyle {\begin{array}{c|cc}{\frac {1}{2}}-{\frac {1}{6}}{\sqrt {3}}&{\frac {1}{4}}&{\frac {1}{4}}-{\frac {1}{6}}{\sqrt {3}}\\{\frac {1}{2}}+{\frac {1}{6}}{\sqrt {3}}&{\frac {1}{4}}+{\frac {1}{6}}{\sqrt {3}}&{\frac {1}{4}}\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\\&{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {3}}&{\frac {1}{2}}-{\frac {1}{2}}{\sqrt {3}}\end{array}}}
=== Stability ===
The advantage of implicit Runge–Kutta methods over explicit ones is their greater stability, especially when applied to stiff equations. Consider the linear test equation
$y'=\lambda y$. A Runge–Kutta method applied to this equation reduces to the iteration $y_{n+1}=r(h\lambda )\,y_{n}$, with $r$ given by
{\displaystyle r(z)=1+zb^{T}(I-zA)^{-1}e={\frac {\det(I-zA+zeb^{T})}{\det(I-zA)}},}
where e stands for the vector of ones. The function r is called the stability function. It follows from the formula that r is the quotient of two polynomials of degree s if the method has s stages. Explicit methods have a strictly lower triangular matrix A, which implies that det(I − zA) = 1 and that the stability function is a polynomial.
The numerical solution to the linear test equation decays to zero if | r(z) | < 1 with z = hλ. The set of such z is called the domain of absolute stability. In particular, the method is said to be A-stable if all z with Re(z) < 0 are in the domain of absolute stability. The stability function of an explicit Runge–Kutta method is a polynomial, so explicit Runge–Kutta methods can never be A-stable.
If the method has order p, then the stability function satisfies
$r(z)={\textrm {e}}^{z}+O(z^{p+1})$ as $z\to 0$. Thus, it is of interest to study quotients of polynomials of given degrees that approximate the exponential function the best. These are known as Padé approximants. A Padé approximant with numerator of degree m and denominator of degree n is A-stable if and only if m ≤ n ≤ m + 2.
The Gauss–Legendre method with s stages has order 2s, so its stability function is the Padé approximant with m = n = s. It follows that the method is A-stable. This shows that A-stable Runge–Kutta can have arbitrarily high order. In contrast, the order of A-stable linear multistep methods cannot exceed two.
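The stability function can be computed directly from the determinant formula above; here is a small symbolic sketch using sympy (the helper name is ours):

```python
import sympy as sp

def stability_function(A, b):
    """r(z) = det(I - z A + z e b^T) / det(I - z A) for a tableau (A, b)."""
    z = sp.symbols('z')
    A, b = sp.Matrix(A), sp.Matrix(b)
    s = A.rows
    e, I = sp.ones(s, 1), sp.eye(s)
    return sp.simplify((I - z * A + z * e * b.T).det() / (I - z * A).det())

half, sixth, third = sp.Rational(1, 2), sp.Rational(1, 6), sp.Rational(1, 3)
A = [[0, 0, 0, 0], [half, 0, 0, 0], [0, half, 0, 0], [0, 0, 1, 0]]
b = [sixth, third, third, sixth]
print(sp.expand(stability_function(A, b)))
# z**4/24 + z**3/6 + z**2/2 + z + 1, the degree-4 Taylor polynomial of e^z
```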
== B-stability ==
The A-stability concept for the solution of differential equations is related to the linear autonomous equation
$y'=\lambda y$. Dahlquist (1963) proposed the investigation of stability of numerical schemes when applied to nonlinear systems that satisfy a monotonicity condition. The corresponding concepts were defined as G-stability for multistep methods (and the related one-leg methods) and B-stability (Butcher, 1975) for Runge–Kutta methods. A Runge–Kutta method applied to the non-linear system $y'=f(y)$, which verifies $\langle f(y)-f(z),\ y-z\rangle \leq 0$, is called B-stable, if this condition implies $\|y_{n+1}-z_{n+1}\|\leq \|y_{n}-z_{n}\|$
for two numerical solutions.
Let $B$, $M$ and $Q$ be three $s\times s$ matrices defined by
{\displaystyle {\begin{aligned}B&=\operatorname {diag} (b_{1},b_{2},\ldots ,b_{s}),\\[4pt]M&=BA+A^{T}B-bb^{T},\\[4pt]Q&=BA^{-1}+A^{-T}B-A^{-T}bb^{T}A^{-1}.\end{aligned}}}
A Runge–Kutta method is said to be algebraically stable if the matrices $B$ and $M$ are both non-negative definite. A sufficient condition for B-stability is: $B$ and $Q$ are non-negative definite.
== Derivation of the Runge–Kutta fourth-order method ==
In general a Runge–Kutta method of order $s$ can be written as:
{\displaystyle y_{t+h}=y_{t}+h\cdot \sum _{i=1}^{s}a_{i}k_{i}+{\mathcal {O}}(h^{s+1}),}
where:
{\displaystyle k_{i}=\sum _{j=1}^{s}\beta _{ij}f(k_{j},\ t_{n}+\alpha _{i}h)}
are increments obtained by evaluating the derivatives of $y_{t}$ at the $i$-th order.
We develop the derivation for the Runge–Kutta fourth-order method using the general formula with $s=4$, evaluated, as explained above, at the starting point, the midpoint and the end point of any interval $(t,\ t+h)$; thus, we choose:
{\displaystyle {\begin{aligned}&\alpha _{i}&&\beta _{ij}\\\alpha _{1}&=0&\beta _{21}&={\frac {1}{2}}\\\alpha _{2}&={\frac {1}{2}}&\beta _{32}&={\frac {1}{2}}\\\alpha _{3}&={\frac {1}{2}}&\beta _{43}&=1\\\alpha _{4}&=1&&\\\end{aligned}}}
and $\beta _{ij}=0$ otherwise. We begin by defining the following quantities:
{\displaystyle {\begin{aligned}y_{t+h}^{1}&=y_{t}+hf\left(y_{t},\ t\right)\\y_{t+h}^{2}&=y_{t}+hf\left(y_{t+h/2}^{1},\ t+{\frac {h}{2}}\right)\\y_{t+h}^{3}&=y_{t}+hf\left(y_{t+h/2}^{2},\ t+{\frac {h}{2}}\right)\end{aligned}}}
where $y_{t+h/2}^{1}={\dfrac {y_{t}+y_{t+h}^{1}}{2}}$ and $y_{t+h/2}^{2}={\dfrac {y_{t}+y_{t+h}^{2}}{2}}.$
If we define:
{\displaystyle {\begin{aligned}k_{1}&=f(y_{t},\ t)\\k_{2}&=f\left(y_{t+h/2}^{1},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}k_{1},\ t+{\frac {h}{2}}\right)\\k_{3}&=f\left(y_{t+h/2}^{2},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}k_{2},\ t+{\frac {h}{2}}\right)\\k_{4}&=f\left(y_{t+h}^{3},\ t+h\right)=f\left(y_{t}+hk_{3},\ t+h\right)\end{aligned}}}
then from the previous relations we can show that the following equalities hold up to $\mathcal{O}(h^{2})$:
{\displaystyle {\begin{aligned}k_{2}&=f\left(y_{t+h/2}^{1},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}k_{1},\ t+{\frac {h}{2}}\right)\\&=f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f\left(y_{t},\ t\right)\\k_{3}&=f\left(y_{t+h/2}^{2},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}f\left(y_{t}+{\frac {h}{2}}k_{1},\ t+{\frac {h}{2}}\right),\ t+{\frac {h}{2}}\right)\\&=f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f\left(y_{t},\ t\right)\right]\\k_{4}&=f\left(y_{t+h}^{3},\ t+h\right)=f\left(y_{t}+hf\left(y_{t}+{\frac {h}{2}}k_{2},\ t+{\frac {h}{2}}\right),\ t+h\right)\\&=f\left(y_{t}+hf\left(y_{t}+{\frac {h}{2}}f\left(y_{t}+{\frac {h}{2}}f\left(y_{t},\ t\right),\ t+{\frac {h}{2}}\right),\ t+{\frac {h}{2}}\right),\ t+h\right)\\&=f\left(y_{t},\ t\right)+h{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f\left(y_{t},\ t\right)\right]\right]\end{aligned}}}
where:
{\displaystyle {\frac {d}{dt}}f(y_{t},\ t)={\frac {\partial }{\partial y}}f(y_{t},\ t){\dot {y}}_{t}+{\frac {\partial }{\partial t}}f(y_{t},\ t)=f_{y}(y_{t},\ t){\dot {y}}_{t}+f_{t}(y_{t},\ t):={\ddot {y}}_{t}}
is the total derivative of $f$ with respect to time.
If we now express the general formula using what we just derived we obtain:
{\displaystyle {\begin{aligned}y_{t+h}={}&y_{t}+h\left\lbrace a\cdot f(y_{t},\ t)+b\cdot \left[f(y_{t},\ t)+{\frac {h}{2}}{\frac {d}{dt}}f(y_{t},\ t)\right]\right.+\\&{}+c\cdot \left[f(y_{t},\ t)+{\frac {h}{2}}{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f(y_{t},\ t)\right]\right]+\\&{}+d\cdot \left[f(y_{t},\ t)+h{\frac {d}{dt}}\left[f(y_{t},\ t)+{\frac {h}{2}}{\frac {d}{dt}}\left[f(y_{t},\ t)+\left.{\frac {h}{2}}{\frac {d}{dt}}f(y_{t},\ t)\right]\right]\right]\right\rbrace +{\mathcal {O}}(h^{5})\\={}&y_{t}+a\cdot hf_{t}+b\cdot hf_{t}+b\cdot {\frac {h^{2}}{2}}{\frac {df_{t}}{dt}}+c\cdot hf_{t}+c\cdot {\frac {h^{2}}{2}}{\frac {df_{t}}{dt}}+\\&{}+c\cdot {\frac {h^{3}}{4}}{\frac {d^{2}f_{t}}{dt^{2}}}+d\cdot hf_{t}+d\cdot h^{2}{\frac {df_{t}}{dt}}+d\cdot {\frac {h^{3}}{2}}{\frac {d^{2}f_{t}}{dt^{2}}}+d\cdot {\frac {h^{4}}{4}}{\frac {d^{3}f_{t}}{dt^{3}}}+{\mathcal {O}}(h^{5})\end{aligned}}}
and comparing this with the Taylor series of $y_{t+h}$ around $t$:
{\displaystyle {\begin{aligned}y_{t+h}&=y_{t}+h{\dot {y}}_{t}+{\frac {h^{2}}{2}}{\ddot {y}}_{t}+{\frac {h^{3}}{6}}y_{t}^{(3)}+{\frac {h^{4}}{24}}y_{t}^{(4)}+{\mathcal {O}}(h^{5})=\\&=y_{t}+hf(y_{t},\ t)+{\frac {h^{2}}{2}}{\frac {d}{dt}}f(y_{t},\ t)+{\frac {h^{3}}{6}}{\frac {d^{2}}{dt^{2}}}f(y_{t},\ t)+{\frac {h^{4}}{24}}{\frac {d^{3}}{dt^{3}}}f(y_{t},\ t)\end{aligned}}}
we obtain a system of constraints on the coefficients:
{\displaystyle {\begin{cases}&a+b+c+d=1\\[6pt]&{\frac {1}{2}}b+{\frac {1}{2}}c+d={\frac {1}{2}}\\[6pt]&{\frac {1}{4}}c+{\frac {1}{2}}d={\frac {1}{6}}\\[6pt]&{\frac {1}{4}}d={\frac {1}{24}}\end{cases}}}
which when solved gives
{\displaystyle a={\frac {1}{6}},b={\frac {1}{3}},c={\frac {1}{3}},d={\frac {1}{6}}}
as stated above.
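The small linear system can also be checked mechanically; a sketch using sympy:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
eqs = [
    sp.Eq(a + b + c + d, 1),
    sp.Eq(b / 2 + c / 2 + d, sp.Rational(1, 2)),
    sp.Eq(c / 4 + d / 2, sp.Rational(1, 6)),
    sp.Eq(d / 4, sp.Rational(1, 24)),
]
print(sp.solve(eqs, [a, b, c, d]))
# {a: 1/6, b: 1/3, c: 1/3, d: 1/6}
```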
== See also ==
Euler's method
List of Runge–Kutta methods
Numerical methods for ordinary differential equations
Runge–Kutta method (SDE)
General linear methods
Lie group integrator
== Notes ==
== References ==
Runge, Carl David Tolmé (1895), "Über die numerische Auflösung von Differentialgleichungen", Mathematische Annalen, 46 (2), Springer: 167–178, doi:10.1007/BF01446807, S2CID 119924854.
Kutta, Wilhelm (1901), "Beitrag zur näherungsweisen Integration totaler Differentialgleichungen", Zeitschrift für Mathematik und Physik, 46: 435–453.
Ascher, Uri M.; Petzold, Linda R. (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-412-8.
Atkinson, Kendall A. (1989), An Introduction to Numerical Analysis (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-50023-0.
Butcher, John C. (May 1963), "Coefficients for the study of Runge-Kutta integration processes", Journal of the Australian Mathematical Society, 3 (2): 185–201, doi:10.1017/S1446788700027932.
Butcher, John C. (May 1964), "On Runge-Kutta processes of high order", Journal of the Australian Mathematical Society, 4 (2): 179–194, doi:10.1017/S1446788700023387
Butcher, John C. (1975), "A stability property of implicit Runge-Kutta methods", BIT, 15 (4): 358–361, doi:10.1007/bf01931672, S2CID 120854166.
Butcher, John C. (2000), "Numerical methods for ordinary differential equations in the 20th century", J. Comput. Appl. Math., 125 (1–2): 1–29, Bibcode:2000JCoAM.125....1B, doi:10.1016/S0377-0427(00)00455-6.
Butcher, John C. (2008), Numerical Methods for Ordinary Differential Equations, New York: John Wiley & Sons, ISBN 978-0-470-72335-7.
Cellier, F.; Kofman, E. (2006), Continuous System Simulation, Springer Verlag, ISBN 0-387-26102-8.
Dahlquist, Germund (1963), "A special stability problem for linear multistep methods", BIT, 3: 27–43, doi:10.1007/BF01963532, hdl:10338.dmlcz/103497, ISSN 0006-3835, S2CID 120241743.
Forsythe, George E.; Malcolm, Michael A.; Moler, Cleve B. (1977), Computer Methods for Mathematical Computations, Prentice-Hall (see Chapter 6).
Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0.
Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-60452-5.
Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Bibcode:1996fcna.book.....I, ISBN 978-0-521-55655-2.
Lambert, J.D (1991), Numerical Methods for Ordinary Differential Systems. The Initial Value Problem, John Wiley & Sons, ISBN 0-471-92990-5
Kaw, Autar; Kalu, Egwu (2008), Numerical Methods with Applications (1st ed.), autarkaw.com.
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 17.1 Runge-Kutta Method", Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8. Also, Section 17.2. Adaptive Stepsize Control for Runge-Kutta.
Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-95452-3.
Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1.
Tan, Delin; Chen, Zheng (2012), "On A General Formula of Fourth Order Runge-Kutta Method" (PDF), Journal of Mathematical Science & Mathematics Education, 7 (2): 1–10.
Advanced Discrete Mathematics, IGNOU reference book (code MCS-033).
John C. Butcher: "B-Series : Algebraic Analysis of Numerical Methods", Springer(SSCM, volume 55), ISBN 978-3030709556 (April, 2021).
Butcher, J.C. (1985), "The non-existence of ten stage eighth order explicit Runge-Kutta methods", BIT Numerical Mathematics, 25 (3): 521–540, doi:10.1007/BF01935372.
Butcher, J.C. (1965), "On the attainable order of Runge-Kutta methods", Mathematics of Computation, 19 (91): 408–417, doi:10.1090/S0025-5718-1965-0179943-X.
Curtis, A.R. (1970), "An eighth order Runge-Kutta process with eleven function evaluations per step", Numerische Mathematik, 16 (3): 268–277, doi:10.1007/BF02219778.
Cooper, G.J.; Verner, J.H. (1972), "Some Explicit Runge–Kutta Methods of High Order", SIAM Journal on Numerical Analysis, 9 (3): 389–405, Bibcode:1972SJNA....9..389C, doi:10.1137/0709037.
Butcher, J.C. (1996), "A History of Runge-Kutta Methods", Applied Numerical Mathematics, 20 (3): 247–260, doi:10.1016/0168-9274(95)00108-5.
== External links ==
"Runge-Kutta method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Runge–Kutta 4th-Order Method
Tracker Component Library Implementation in Matlab — Implements 32 embedded Runge Kutta algorithms in RungeKStep, 24 embedded Runge-Kutta Nyström algorithms in RungeKNystroemSStep and 4 general Runge-Kutta Nyström algorithms in RungeKNystroemGStep.
In mathematics, the Korteweg–De Vries (KdV) equation is a partial differential equation (PDE) which serves as a mathematical model of waves on shallow water surfaces. It is particularly notable as the prototypical example of an integrable PDE, exhibiting typical behaviors such as a large number of explicit solutions, in particular soliton solutions, and an infinite number of conserved quantities, despite the nonlinearity which typically renders PDEs intractable. The KdV can be solved by the inverse scattering method (ISM). In fact, Clifford Gardner, John M. Greene, Martin Kruskal and Robert Miura developed the classical inverse scattering method to solve the KdV equation.
The KdV equation was first introduced by Joseph Valentin Boussinesq (1877, footnote on page 360) and rediscovered by Diederik Korteweg and Gustav de Vries in 1895, who found the simplest solution, the one-soliton solution. Understanding of the equation and behavior of solutions was greatly advanced by the computer simulations of Norman Zabusky and Kruskal in 1965 and then the development of the inverse scattering transform in 1967.
In 1972, T. Kawahara proposed a fifth-order KdV type of equation, known as Kawahara equation, that describes dispersive waves, particularly in cases when the coefficient of the KdV equation becomes very small or zero.
== Definition ==
The KdV equation is a partial differential equation that models (spatially) one-dimensional nonlinear dispersive nondissipative waves described by a function $\phi (x,t)$ adhering to:
{\displaystyle \partial _{t}\phi +\partial _{x}^{3}\phi -6\,\phi \,\partial _{x}\phi =0\,\quad x\in \mathbb {R} ,\;t\geq 0,}
where $\partial _{x}^{3}\phi$ accounts for dispersion and the nonlinear element $\phi \partial _{x}\phi$ is an advection term.
For modelling shallow water waves, $\phi$ is the height displacement of the water surface from its equilibrium height.
The constant $6$ in front of the last term is conventional but of no great significance: multiplying $t$, $x$, and $\phi$ by constants can be used to make the coefficients of any of the three terms equal to any given non-zero constants.
== Soliton solutions ==
=== One-soliton solution ===
Consider solutions in which a fixed waveform, given by $f(X)$, maintains its shape as it travels to the right at phase speed $c$. Such a solution is given by $\varphi (x,t)=f(x-ct-a)=f(X)$. Substituting it into the KdV equation gives the ordinary differential equation
{\displaystyle -c{\frac {df}{dX}}+{\frac {d^{3}f}{dX^{3}}}-6f{\frac {df}{dX}}=0,}
or, integrating with respect to $X$,
{\displaystyle -cf+{\frac {d^{2}f}{dX^{2}}}-3f^{2}=A}
where $A$ is a constant of integration. Interpreting the independent variable $X$ above as a virtual time variable, this means $f$ satisfies Newton's equation of motion of a particle of unit mass in a cubic potential $V(f)=-\left(f^{3}+{\frac {1}{2}}cf^{2}+Af\right)$.
If $A=0,\,c>0$, then the potential function $V(f)$ has a local maximum at $f=0$; there is a solution in which $f(X)$ starts at this point at 'virtual time' $-\infty$, eventually slides down to the local minimum, then back up the other side, reaching an equal height, and then reverses direction, ending up at the local maximum again at time $\infty$. In other words, $f(X)$ approaches $0$ as $X\to -\infty$. This is the characteristic shape of the solitary wave solution.
More precisely, the solution is
{\displaystyle \phi (x,t)=-{\frac {1}{2}}\,c\,\operatorname {sech} ^{2}\left[{{\sqrt {c}} \over 2}(x-c\,t-a)\right]}
where $\operatorname {sech}$ stands for the hyperbolic secant and $a$ is an arbitrary constant. This describes a right-moving soliton with velocity $c$.
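This is easy to spot-check numerically: sampling the profile and forming the KdV residual with finite differences should give something small. In the sketch below the grid, the speed c = 4 and the offset a = 0 are our own choices:

```python
import numpy as np

c, a = 4.0, 0.0
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def phi(x, t):
    # one-soliton profile phi = -(c/2) sech^2( sqrt(c)/2 (x - c t - a) )
    return -0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t - a)) ** 2

dt = 1e-6
phi_t = (phi(x, dt) - phi(x, -dt)) / (2 * dt)     # time derivative
p = phi(x, 0.0)
phi_x = np.gradient(p, dx)
phi_xxx = np.gradient(np.gradient(phi_x, dx), dx)
residual = phi_t + phi_xxx - 6 * p * phi_x        # KdV left-hand side
print(np.max(np.abs(residual)))   # small, limited by finite-difference error
```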
=== N-soliton solution ===
There is a known expression for a solution which is an $N$-soliton solution, which at late times resolves into $N$ separate single solitons. The solution depends on a set of decreasing positive parameters $\chi _{1}>\cdots >\chi _{N}>0$ and a set of non-zero parameters $\beta _{1},\cdots ,\beta _{N}$. The solution is given in the form
{\displaystyle \phi (x,t)=-2{\frac {\partial ^{2}}{\partial x^{2}}}\mathrm {log} [\mathrm {det} A(x,t)]}
where the components of the matrix $A(x,t)$ are
{\displaystyle A_{nm}(x,t)=\delta _{nm}+{\frac {\beta _{n}e^{8\chi _{n}^{3}t}e^{-(\chi _{n}+\chi _{m})x}}{\chi _{n}+\chi _{m}}}.}
This is derived using the inverse scattering method.
== Integrals of motion ==
The KdV equation has infinitely many integrals of motion: functionals of a solution $\phi (t)$ which do not change with time. They can be given explicitly as
{\displaystyle \int _{-\infty }^{+\infty }P_{2n-1}(\phi ,\,\partial _{x}\phi ,\,\partial _{x}^{2}\phi ,\,\ldots )\,{\text{d}}x\,}
where the polynomials $P_{n}$ are defined recursively by
{\displaystyle {\begin{aligned}P_{1}&=\phi ,\\P_{n}&=-{\frac {dP_{n-1}}{dx}}+\sum _{i=1}^{n-2}\,P_{i}\,P_{n-1-i}\quad {\text{ for }}n\geq 2.\end{aligned}}}
The first few integrals of motion are:
the mass $\int \phi \,\mathrm {d} x$,
the momentum $\int \phi ^{2}\,\mathrm {d} x$,
the energy $\int \left[2\phi ^{3}-\left(\partial _{x}\phi \right)^{2}\right]\,\mathrm {d} x$.
Only the odd-numbered terms $P_{2n+1}$ result in non-trivial (meaning non-zero) integrals of motion.
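The recursion is convenient to run symbolically; a sketch with sympy generating the first few densities:

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.Function('phi')(x)

P = {1: phi}
for n in range(2, 6):
    P[n] = sp.expand(-sp.diff(P[n - 1], x)
                     + sum(P[i] * P[n - 1 - i] for i in range(1, n - 1)))

for n in sorted(P):
    print(n, P[n])
# The even-indexed polynomials are total derivatives, so they integrate to
# zero; the odd ones reproduce the mass, momentum and energy densities above
# (up to total-derivative terms and overall constants).
```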
== Lax pairs ==
The KdV equation $\partial _{t}\phi =6\,\phi \,\partial _{x}\phi -\partial _{x}^{3}\phi$ can be reformulated as the Lax equation
{\displaystyle L_{t}=[L,A]\equiv LA-AL\,}
with $L$ a Sturm–Liouville operator:
{\displaystyle {\begin{aligned}L&=-\partial _{x}^{2}+\phi ,\\A&=4\partial _{x}^{3}-6\phi \,\partial _{x}-3[\partial _{x},\phi ]\end{aligned}}}
where $[\partial _{x},\phi ]$ is the commutator, such that $[\partial _{x},\phi ]f=f\partial _{x}\phi$. The Lax pair accounts for the infinite number of first integrals of the KdV equation.
In fact, $L$ is the time-independent Schrödinger operator (disregarding constants) with potential $\phi (x,t)$. It can be shown from this Lax formulation that the eigenvalues do not depend on $t$.
=== Zero-curvature representation ===
Setting the components of the Lax connection to be
{\displaystyle L_{x}={\begin{pmatrix}0&1\\\phi -\lambda &0\end{pmatrix}},L_{t}={\begin{pmatrix}-\phi _{x}&2\phi +4\lambda \\2\phi ^{2}-\phi _{xx}+2\phi \lambda -4\lambda ^{2}&\phi _{x}\end{pmatrix}},}
the KdV equation is equivalent to the zero-curvature equation for the Lax connection,
{\displaystyle \partial _{t}L_{x}-\partial _{x}L_{t}+[L_{x},L_{t}]=0.}
== Least action principle ==
The Korteweg–De Vries equation
{\displaystyle \partial _{t}\phi +6\phi \,\partial _{x}\phi +\partial _{x}^{3}\phi =0,}
is the Euler–Lagrange equation of motion derived from the Lagrangian density
{\displaystyle {\mathcal {L}}={\tfrac {1}{2}}\,\partial _{x}\psi \,\partial _{t}\psi +\left(\partial _{x}\psi \right)^{3}-{\tfrac {1}{2}}\left(\partial _{x}^{2}\psi \right)^{2},}
with $\phi$ defined by $\phi :={\frac {\partial \psi }{\partial x}}.$
== Long-time asymptotics ==
It can be shown that any sufficiently fast decaying smooth solution will eventually split into a finite superposition of solitons travelling to the right plus a decaying dispersive part travelling to the left. This was first observed by Zabusky & Kruskal (1965) and can be rigorously proven using the nonlinear steepest descent analysis for oscillatory Riemann–Hilbert problems.
== History ==
The history of the KdV equation started with experiments by John Scott Russell in 1834, followed by theoretical investigations by Lord Rayleigh and Joseph Boussinesq around 1870 and, finally, Korteweg and De Vries in 1895.
The KdV equation was not studied much after this until Zabusky & Kruskal (1965) discovered numerically that its solutions seemed to decompose at large times into a collection of "solitons": well separated solitary waves. Moreover, the solitons seem to be almost unaffected in shape by passing through each other (though this could cause a change in their position). They also made the connection to earlier numerical experiments by Fermi, Pasta, Ulam, and Tsingou by showing that the KdV equation was the continuum limit of the FPUT system. Development of the analytic solution by means of the inverse scattering transform was done in 1967 by Gardner, Greene, Kruskal and Miura.
The KdV equation is now seen to be closely connected to Huygens' principle.
== Applications and connections ==
The KdV equation has several connections to physical problems. In addition to being the governing equation of the string in the Fermi–Pasta–Ulam–Tsingou problem in the continuum limit, it approximately describes the evolution of long, one-dimensional waves in many physical settings, including:
shallow-water waves with weakly non-linear restoring forces,
long internal waves in a density-stratified ocean,
ion acoustic waves in a plasma,
acoustic waves on a crystal lattice.
The KdV equation can also be solved using the inverse scattering transform, like the one applied to the non-linear Schrödinger equation.
=== KdV equation and the Gross–Pitaevskii equation ===
Considering the simplified solutions of the form $\phi (x,t)=\phi (x\pm t)$, we obtain the KdV equation as
{\displaystyle \pm \partial _{x}\phi +\partial _{x}^{3}\phi +6\,\phi \,\partial _{x}\phi =0\,}
or
{\displaystyle \pm \partial _{x}\phi +\partial _{x}(\partial _{x}^{2}\phi +3\phi ^{2})=0\,}
Integrating and taking the special case in which the integration constant is zero, we have:
{\displaystyle -\partial _{x}^{2}\phi -3\phi ^{2}=\pm \phi \,}
which is the $\lambda =1$ special case of the generalized stationary Gross–Pitaevskii equation (GPE)
{\displaystyle -\partial _{x}^{2}\phi -3\phi ^{\lambda }\phi =\pm \phi \,}
Therefore, for a certain class of solutions of the generalized GPE ($\lambda =4$ for the true one-dimensional condensate and $\lambda =2$ while using the three-dimensional equation in one dimension), the two equations are one. Furthermore, taking the $\lambda =3$ case with the minus sign and $\phi$ real, one obtains an attractive self-interaction that should yield a bright soliton.
== Variations ==
Many different variations of the KdV equations have been studied. Some are listed in the following table.
== See also ==
== Notes ==
== References ==
Berest, Yuri Y.; Loutsenko, Igor M. (1997). "Huygens' Principle in Minkowski Spaces and Soliton Solutions of the Korteweg-de Vries Equation". Communications in Mathematical Physics. 190 (1): 113–132. arXiv:solv-int/9704012. doi:10.1007/s002200050235. ISSN 0010-3616.
Boussinesq, J. (1877), Essai sur la théorie des eaux courantes, Mémoires présentés par divers savants à l'Acad. des Sci. Inst. Nat. France, XXIII, pp. 1–680
Chalub, Fabio A.C.C.; Zubelli, Jorge P. (2006). "Huygens' principle for hyperbolic operators and integrable hierarchies" (PDF). Physica D: Nonlinear Phenomena. 213 (2): 231–245. doi:10.1016/j.physd.2005.11.008.
Darrigol, Olivier (2005). Worlds of Flow. Oxford; New York: Oxford University Press. ISBN 978-0-19-856843-8.
Dauxois, Thierry; Peyrard, Michel (2006). Physics of Solitons. Cambridge, UK; New York: Cambridge University Press. ISBN 0-521-85421-0. OCLC 61757137.
Dingemans, M. W. (1997). Water Wave Propagation Over Uneven Bottoms. River Edge, NJ: World Scientific. ISBN 981-02-0427-2.
Dunajski, Maciej (2009). Solitons, Instantons, and Twistors. Oxford; New York: OUP Oxford. ISBN 978-0-19-857063-9. OCLC 320199531.
Gardner, Clifford S.; Greene, John M.; Kruskal, Martin D.; Miura, Robert M. (1967). "Method for Solving the Korteweg-deVries Equation". Physical Review Letters. 19 (19): 1095–1097. doi:10.1103/PhysRevLett.19.1095. ISSN 0031-9007.
Grunert, Katrin; Teschl, Gerald (2009), "Long-Time Asymptotics for the Korteweg–De Vries Equation via Nonlinear Steepest Descent", Math. Phys. Anal. Geom., vol. 12, no. 3, pp. 287–324, arXiv:0807.5041, Bibcode:2009MPAG...12..287G, doi:10.1007/s11040-009-9062-2, S2CID 8740754
Korteweg, D. J.; de Vries, G. (1895). "XLI. On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 39 (240): 422–443. doi:10.1080/14786449508620739. ISSN 1941-5982.
Lax, Peter D. (1968). "Integrals of nonlinear equations of evolution and solitary waves". Communications on Pure and Applied Mathematics. 21 (5): 467–490. doi:10.1002/cpa.3160210503. ISSN 0010-3640. OSTI 4522657.
Miura, Robert M.; Gardner, Clifford S.; Kruskal, Martin D. (1968), "Korteweg–De Vries equation and generalizations. II. Existence of conservation laws and constants of motion", J. Math. Phys., 9 (8): 1204–1209, Bibcode:1968JMP.....9.1204M, doi:10.1063/1.1664701, MR 0252826
Polyanin, Andrei D.; Zaitsev, Valentin F. (2003). Handbook of Nonlinear Partial Differential Equations. Boca Raton, Fla: Chapman and Hall/CRC. ISBN 978-1-58488-355-5.
Vakakis, Alexander F. (2002). Normal Modes and Localization in Nonlinear Systems. Dordrecht; Boston: Springer Science & Business Media. ISBN 978-0-7923-7010-9.
Zabusky, N. J.; Kruskal, M. D. (1965). "Interaction of "Solitons" in a Collisionless Plasma and the Recurrence of Initial States". Physical Review Letters. 15 (6): 240–243. doi:10.1103/PhysRevLett.15.240. ISSN 0031-9007.
== External links ==
Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Korteweg–De Vries equation at NEQwiki, the nonlinear equations encyclopedia.
Cylindrical Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Modified Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Modified Korteweg–De Vries equation at NEQwiki, the nonlinear equations encyclopedia.
Weisstein, Eric W. "Korteweg–deVries Equation". MathWorld.
Derivation of the Korteweg–De Vries equation for a narrow canal.
Three Solitons Solution of KdV Equation
Three Solitons (unstable) Solution of KdV Equation
Mathematical aspects of equations of Korteweg–De Vries type are discussed on the Dispersive PDE Wiki.
Solitons from the Korteweg–De Vries Equation by S. M. Blinder, The Wolfram Demonstrations Project.
Solitons & Nonlinear Wave Equations
In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the 1940s.
For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of time step $\Delta t$ times the thermal diffusivity to the square of space step, $\Delta x^{2}$, is large (typically, larger than 1/2 per Von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution are necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations.
== Principle ==
The Crank–Nicolson method is based on the trapezoidal rule, giving second-order convergence in time. For linear equations, the trapezoidal rule is equivalent to the implicit midpoint method—the simplest example of a Gauss–Legendre implicit Runge–Kutta method—which also has the property of being a geometric integrator. For example, in one dimension, suppose the partial differential equation is
{\displaystyle {\frac {\partial u}{\partial t}}=F\left(u,x,t,{\frac {\partial u}{\partial x}},{\frac {\partial ^{2}u}{\partial x^{2}}}\right).}
Letting $u(i\Delta x,n\Delta t)=u_{i}^{n}$ and $F_{i}^{n}=F$ evaluated for $i,n$ and $u_{i}^{n}$, the equation for the Crank–Nicolson method is a combination of the forward Euler method at $n$ and the backward Euler method at $n+1$ (note, however, that the method itself is not simply the average of those two methods, as the backward Euler equation has an implicit dependence on the solution):
{\displaystyle {\frac {u_{i}^{n+1}-u_{i}^{n}}{\Delta t}}={\frac {1}{2}}\left[F_{i}^{n+1}\left(u,x,t,{\frac {\partial u}{\partial x}},{\frac {\partial ^{2}u}{\partial x^{2}}}\right)+F_{i}^{n}\left(u,x,t,{\frac {\partial u}{\partial x}},{\frac {\partial ^{2}u}{\partial x^{2}}}\right)\right].}
Note that this is an implicit method: to get the "next" value of $u$ in time, a system of algebraic equations must be solved. If the partial differential equation is nonlinear, the discretization will also be nonlinear, so that advancing in time will involve the solution of a system of nonlinear algebraic equations, though linearizations are possible. In many problems, especially linear diffusion, the algebraic problem is tridiagonal and may be efficiently solved with the tridiagonal matrix algorithm, which gives a fast $\mathcal{O}(N)$ direct solution, as opposed to the usual $\mathcal{O}(N^{3})$ for a full matrix, in which $N$ indicates the matrix size.
== Example: 1D diffusion ==
The Crank–Nicolson method is often applied to diffusion problems. As an example, for linear diffusion,
{\displaystyle {\frac {\partial u}{\partial t}}=a{\frac {\partial ^{2}u}{\partial x^{2}}},}
applying a finite difference spatial discretization for the right-hand side, the Crank–Nicolson discretization is then
{\displaystyle {\frac {u_{i}^{n+1}-u_{i}^{n}}{\Delta t}}={\frac {a}{2(\Delta x)^{2}}}\left((u_{i+1}^{n+1}-2u_{i}^{n+1}+u_{i-1}^{n+1})+(u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n})\right)}
or, letting $r={\frac {a\Delta t}{2(\Delta x)^{2}}}$,
{\displaystyle -ru_{i+1}^{n+1}+(1+2r)u_{i}^{n+1}-ru_{i-1}^{n+1}=ru_{i+1}^{n}+(1-2r)u_{i}^{n}+ru_{i-1}^{n}.}
Given that the terms on the right-hand side of the equation are known, this is a tridiagonal problem, so that $u_{i}^{n+1}$ may be efficiently solved by using the tridiagonal matrix algorithm instead of the much more costly matrix inversion.
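A compact sketch of one Crank–Nicolson step for this problem, with a hand-rolled Thomas solver and Dirichlet boundary values held fixed (the function names and the boundary treatment are our own choices):

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(N) (Thomas algorithm)."""
    n = len(diag)
    d, q = diag.astype(float).copy(), rhs.astype(float).copy()
    for i in range(1, n):              # forward elimination
        w = lower[i - 1] / d[i - 1]
        d[i] -= w * upper[i - 1]
        q[i] -= w * q[i - 1]
    xs = np.empty(n)
    xs[-1] = q[-1] / d[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        xs[i] = (q[i] - upper[i] * xs[i + 1]) / d[i]
    return xs

def crank_nicolson_step(u, r):
    """Advance u one step for u_t = a u_xx with r = a dt / (2 dx^2)."""
    n = len(u) - 2
    rhs = r * u[:-2] + (1 - 2 * r) * u[1:-1] + r * u[2:]
    rhs[0] += r * u[0]                 # fold fixed boundary values into RHS
    rhs[-1] += r * u[-1]
    u_new = u.copy()
    u_new[1:-1] = thomas(np.full(n - 1, -r), np.full(n, 1 + 2 * r),
                         np.full(n - 1, -r), rhs)
    return u_new
```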
A quasilinear equation, such as (this is a minimalistic example and not general)
{\displaystyle {\frac {\partial u}{\partial t}}=a(u){\frac {\partial ^{2}u}{\partial x^{2}}},}
would lead to a nonlinear system of algebraic equations, which could not be easily solved as above; however, it is possible in some cases to linearize the problem by using the old value for $a$, that is, $a_{i}^{n}(u)$ instead of $a_{i}^{n+1}(u)$. Other times, it may be possible to estimate $a_{i}^{n+1}(u)$ using an explicit method and maintain stability.
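A sketch of the lagged-coefficient linearization (assumptions: a is a vectorized callable, boundaries are held fixed, and a dense solve is used for brevity although the system is tridiagonal):

```python
import numpy as np

def cn_quasilinear_step(u, dt, dx, a):
    """One linearized Crank-Nicolson step for u_t = a(u) u_xx, freezing the
    coefficient at the old time level, a_i^n(u), so the step stays linear."""
    r = a(u[1:-1]) * dt / (2 * dx ** 2)   # nodewise r_i from the lagged a(u)
    rhs = r * u[:-2] + (1 - 2 * r) * u[1:-1] + r * u[2:]
    rhs[0] += r[0] * u[0]
    rhs[-1] += r[-1] * u[-1]
    M = np.diag(1 + 2 * r) + np.diag(-r[1:], -1) + np.diag(-r[:-1], 1)
    u_new = u.copy()
    u_new[1:-1] = np.linalg.solve(M, rhs)
    return u_new
```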
== Example: 1D diffusion with advection for steady flow, with multiple channel connections ==
This is a solution usually employed for many purposes when there is a contamination problem in streams or rivers under steady flow conditions, but information is given in one dimension only. Often the problem can be simplified into a 1-dimensional problem and still yield useful information.
Here we model the concentration of a solute contaminant in water. This problem is composed of three parts: the known diffusion equation ($D_{x}$ chosen as constant), an advective component (which means that the system is evolving in space due to a velocity field), which we choose to be a constant $U_{x}$, and a lateral interaction between longitudinal channels ($k$):
{\displaystyle {\frac {\partial C}{\partial t}}=D_{x}{\frac {\partial ^{2}C}{\partial x^{2}}}-U_{x}{\frac {\partial C}{\partial x}}-k(C-C_{N})-k(C-C_{M}),}
where $C$ is the concentration of the contaminant, and subscripts $N$ and $M$ correspond to the previous and next channel.
The Crank–Nicolson method (where $i$ represents position and $j$ time) transforms each component of the PDE into the following:
{\displaystyle {\begin{aligned}{\frac {\partial C}{\partial t}}&\to {\frac {C_{i}^{j+1}-C_{i}^{j}}{\Delta t}},\\{\frac {\partial ^{2}C}{\partial x^{2}}}&\to {\frac {C_{i+1}^{j+1}-2C_{i}^{j+1}+C_{i-1}^{j+1}+C_{i+1}^{j}-2C_{i}^{j}+C_{i-1}^{j}}{2\,\Delta x^{2}}},\\{\frac {\partial C}{\partial x}}&\to {\frac {C_{i+1}^{j+1}-C_{i-1}^{j+1}+C_{i+1}^{j}-C_{i-1}^{j}}{4\,\Delta x}},\\C&\to {\frac {C_{i}^{j+1}+C_{i}^{j}}{2}}.\end{aligned}}}
Now we create the following constants to simplify the algebra:
{\displaystyle \lambda ={\frac {D_{x}\,\Delta t}{2\,\Delta x^{2}}},\qquad \alpha ={\frac {U_{x}\,\Delta t}{4\,\Delta x}},\qquad \beta ={\frac {k\,\Delta t}{2}},}
and substitute these discretizations, together with $\alpha$, $\beta$ and $\lambda$, into the governing equation. We then put the new time terms on the left ($j+1$) and the present time terms on the right ($j$) to get
{\displaystyle -\beta C_{Ni}^{j+1}-(\lambda +\alpha )C_{i-1}^{j+1}+(1+2\lambda +2\beta )C_{i}^{j+1}-(\lambda -\alpha )C_{i+1}^{j+1}-\beta C_{Mi}^{j+1}={}}
{\displaystyle \qquad \beta C_{Ni}^{j}+(\lambda +\alpha )C_{i-1}^{j}+(1-2\lambda -2\beta )C_{i}^{j}+(\lambda -\alpha )C_{i+1}^{j}+\beta C_{Mi}^{j}.}
To model the first channel, we realize that it can only be in contact with the following channel ($M$), so the expression is simplified to
{\displaystyle -(\lambda +\alpha )C_{i-1}^{j+1}+(1+2\lambda +\beta )C_{i}^{j+1}-(\lambda -\alpha )C_{i+1}^{j+1}-\beta C_{Mi}^{j+1}={}}
{\displaystyle \qquad {}+(\lambda +\alpha )C_{i-1}^{j}+(1-2\lambda -\beta )C_{i}^{j}+(\lambda -\alpha )C_{i+1}^{j}+\beta C_{Mi}^{j}.}
In the same way, to model the last channel, we realize that it can only be in contact with the previous channel ($N$), so the expression is simplified to
{\displaystyle -\beta C_{Ni}^{j+1}-(\lambda +\alpha )C_{i-1}^{j+1}+(1+2\lambda +\beta )C_{i}^{j+1}-(\lambda -\alpha )C_{i+1}^{j+1}={}}
{\displaystyle \qquad \beta C_{Ni}^{j}+(\lambda +\alpha )C_{i-1}^{j}+(1-2\lambda -\beta )C_{i}^{j}+(\lambda -\alpha )C_{i+1}^{j}.}
To solve this linear system of equations, boundary conditions must first be given at the beginning of the channels:
$C_{0}^{j}$: boundary condition for the channel at the present time step,
$C_{0}^{j+1}$: boundary condition for the channel at the next time step,
$C_{N0}^{j}$: boundary condition for the previous channel to the one analyzed at the present time step,
$C_{M0}^{j}$: boundary condition for the next channel to the one analyzed at the present time step.
For the last cell of the channels ($z$), the most convenient condition becomes an adiabatic one, so
{\displaystyle \left.{\frac {\partial C}{\partial x}}\right|_{x=z}={\frac {C_{i+1}-C_{i-1}}{2\,\Delta x}}=0.}
This condition is satisfied if and only if (regardless of a null value)
{\displaystyle C_{i+1}^{j+1}=C_{i-1}^{j+1}.}
Let us solve this problem (in a matrix form) for the case of 3 channels and 5 nodes (including the initial boundary condition). We express this as a linear system problem:
{\displaystyle \mathbf {AA} \,\mathbf {C^{j+1}} =\mathbf {BB} \,\mathbf {C^{j}} +\mathbf {d} ,}
where
{\displaystyle \mathbf {C^{j+1}} ={\begin{bmatrix}C_{11}^{j+1}\\C_{12}^{j+1}\\C_{13}^{j+1}\\C_{14}^{j+1}\\C_{21}^{j+1}\\C_{22}^{j+1}\\C_{23}^{j+1}\\C_{24}^{j+1}\\C_{31}^{j+1}\\C_{32}^{j+1}\\C_{33}^{j+1}\\C_{34}^{j+1}\end{bmatrix}},\quad \mathbf {C^{j}} ={\begin{bmatrix}C_{11}^{j}\\C_{12}^{j}\\C_{13}^{j}\\C_{14}^{j}\\C_{21}^{j}\\C_{22}^{j}\\C_{23}^{j}\\C_{24}^{j}\\C_{31}^{j}\\C_{32}^{j}\\C_{33}^{j}\\C_{34}^{j}\end{bmatrix}}.}
Now we must realize that AA and BB should be arrays made of four different subarrays (remember that only three channels are considered for this example, but it covers the main part discussed above):
{\displaystyle \mathbf {AA} ={\begin{bmatrix}AA1&AA3&0\\AA3&AA2&AA3\\0&AA3&AA1\end{bmatrix}},\quad \mathbf {BB} ={\begin{bmatrix}BB1&-AA3&0\\-AA3&BB2&-AA3\\0&-AA3&BB1\end{bmatrix}},}
where the elements mentioned above correspond to the next arrays, and an additional 4×4 full of zeros. Please note that the sizes of AA and BB are 12×12:
{\displaystyle \mathbf {AA1} ={\begin{bmatrix}(1+2\lambda +\beta )&-(\lambda -\alpha )&0&0\\-(\lambda +\alpha )&(1+2\lambda +\beta )&-(\lambda -\alpha )&0\\0&-(\lambda +\alpha )&(1+2\lambda +\beta )&-(\lambda -\alpha )\\0&0&-2\lambda &(1+2\lambda +\beta )\end{bmatrix}},}
{\displaystyle \mathbf {AA2} ={\begin{bmatrix}(1+2\lambda +2\beta )&-(\lambda -\alpha )&0&0\\-(\lambda +\alpha )&(1+2\lambda +2\beta )&-(\lambda -\alpha )&0\\0&-(\lambda +\alpha )&(1+2\lambda +2\beta )&-(\lambda -\alpha )\\0&0&-2\lambda &(1+2\lambda +2\beta )\end{bmatrix}},}
{\displaystyle \mathbf {AA3} ={\begin{bmatrix}-\beta &0&0&0\\0&-\beta &0&0\\0&0&-\beta &0\\0&0&0&-\beta \end{bmatrix}},}
{\displaystyle \mathbf {BB1} ={\begin{bmatrix}(1-2\lambda -\beta )&(\lambda -\alpha )&0&0\\(\lambda +\alpha )&(1-2\lambda -\beta )&(\lambda -\alpha )&0\\0&(\lambda +\alpha )&(1-2\lambda -\beta )&(\lambda -\alpha )\\0&0&2\lambda &(1-2\lambda -\beta )\end{bmatrix}},}
B
B
2
=
[
(
1
−
2
λ
−
2
β
)
(
λ
−
α
)
0
0
(
λ
+
α
)
(
1
−
2
λ
−
2
β
)
(
λ
−
α
)
0
0
(
λ
+
α
)
(
1
−
2
λ
−
2
β
)
(
λ
−
α
)
0
0
2
λ
(
1
−
2
λ
−
2
β
)
]
.
{\displaystyle \mathbf {BB2} ={\begin{bmatrix}(1-2\lambda -2\beta )&(\lambda -\alpha )&0&0\\(\lambda +\alpha )&(1-2\lambda -2\beta )&(\lambda -\alpha )&0\\0&(\lambda +\alpha )&(1-2\lambda -2\beta )&(\lambda -\alpha )\\0&0&2\lambda &(1-2\lambda -2\beta )\end{bmatrix}}.}
The d vector here is used to hold the boundary conditions. In this example it is a 12×1 vector:
{\displaystyle \mathbf {d} ={\begin{bmatrix}(\lambda +\alpha )(C_{10}^{j+1}+C_{10}^{j})\\0\\0\\0\\(\lambda +\alpha )(C_{20}^{j+1}+C_{20}^{j})\\0\\0\\0\\(\lambda +\alpha )(C_{30}^{j+1}+C_{30}^{j})\\0\\0\\0\end{bmatrix}}.}
To find the concentration at any time, one must iterate the following equation:
{\displaystyle \mathbf {C^{j+1}} =\mathbf {AA} ^{-1}(\mathbf {BB} \,\mathbf {C^{j}} +\mathbf {d} ).}
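A minimal Python sketch of this time-marching loop (not part of the original text), assuming the 12×12 matrices AA and BB and the boundary vector d have already been assembled from the blocks above. Since AA does not change between steps, it pays to factor it once rather than invert it at each step:

```python
# Hypothetical helper: march the channel concentrations n_steps forward.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def march(AA, BB, d, C0, n_steps):
    """Iterate C^{j+1} = AA^{-1} (BB C^j + d)."""
    lu = lu_factor(AA)            # factor AA once; it is constant in time
    C = np.asarray(C0, dtype=float).copy()
    for _ in range(n_steps):
        C = lu_solve(lu, BB @ C + d)
    return C
```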
== Example: 2D diffusion ==
When extending into two dimensions on a uniform Cartesian grid, the derivation is similar and the results may lead to a system of band-diagonal equations rather than tridiagonal ones. The two-dimensional heat equation
{\displaystyle {\frac {\partial u}{\partial t}}=a\,\nabla ^{2}u,}
{\displaystyle {\frac {\partial u}{\partial t}}=a\left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}\right)}
can be solved with the Crank–Nicolson discretization of
{\displaystyle {\begin{aligned}u_{i,j}^{n+1}={}&u_{i,j}^{n}+{\frac {1}{2}}{\frac {a\Delta t}{(\Delta x)^{2}}}{\big [}(u_{i+1,j}^{n+1}+u_{i-1,j}^{n+1}+u_{i,j+1}^{n+1}+u_{i,j-1}^{n+1}-4u_{i,j}^{n+1})\\&+(u_{i+1,j}^{n}+u_{i-1,j}^{n}+u_{i,j+1}^{n}+u_{i,j-1}^{n}-4u_{i,j}^{n}){\big ]},\end{aligned}}}
assuming that a square grid is used, so that {\displaystyle \Delta x=\Delta y}. This equation can be simplified somewhat by rearranging terms and using the CFL number
{\displaystyle \mu ={\frac {a\,\Delta t}{(\Delta x)^{2}}}.}
For the Crank–Nicolson numerical scheme, a low CFL number is not required for stability; however, it is required for numerical accuracy. We can now write the scheme as
{\displaystyle (1+2\mu )u_{i,j}^{n+1}-{\frac {\mu }{2}}\left(u_{i+1,j}^{n+1}+u_{i-1,j}^{n+1}+u_{i,j+1}^{n+1}+u_{i,j-1}^{n+1}\right)=(1-2\mu )u_{i,j}^{n}+{\frac {\mu }{2}}\left(u_{i+1,j}^{n}+u_{i-1,j}^{n}+u_{i,j+1}^{n}+u_{i,j-1}^{n}\right).}
Solving such a linear system is costly. Hence an alternating-direction implicit (ADI) method can be implemented to solve the numerical PDE, whereby one dimension is treated implicitly and the other dimension explicitly for half of the assigned time step, and conversely for the remaining half of the time step. The benefit of this strategy is that the implicit solver only requires a tridiagonal matrix algorithm. The difference between the true Crank–Nicolson solution and the ADI-approximated solution has an order of accuracy of {\displaystyle O(\Delta t^{4})} and hence can be ignored with a sufficiently small time step.
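The following is a minimal sketch of one ADI step of the Peaceman–Rachford type for this equation, not something specified in the text: it assumes homogeneous Dirichlet boundaries on a square interior grid, and the banded-solver layout is an illustrative implementation choice.

```python
# One ADI (Peaceman-Rachford) step for du/dt = a*laplacian(u) on a square grid.
import numpy as np
from scipy.linalg import solve_banded

def adi_step(u, mu):
    """Advance the interior field u (n x n) by one time step; mu = a*dt/dx^2."""
    n = u.shape[0]
    # Banded form of (I - (mu/2) d^2/ds^2) with the standard 3-point stencil
    ab = np.zeros((3, n))
    ab[0, 1:] = -mu / 2           # superdiagonal
    ab[1, :] = 1 + mu             # main diagonal
    ab[2, :-1] = -mu / 2          # subdiagonal

    def explicit(v):              # (I + (mu/2) d^2/ds^2) applied along axis 0
        w = (1 - mu) * v
        w[1:, :] += (mu / 2) * v[:-1, :]
        w[:-1, :] += (mu / 2) * v[1:, :]
        return w

    # Half step 1: implicit in x (axis 0), explicit in y (axis 1)
    u = solve_banded((1, 1), ab, explicit(u.T).T)
    # Half step 2: implicit in y, explicit in x
    u = solve_banded((1, 1), ab, explicit(u).T).T
    return u
```

Each half step only involves independent tridiagonal solves along one grid direction, which is exactly the cost saving the text describes.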
== Crank–Nicolson for nonlinear problems ==
Because the Crank–Nicolson method is implicit, it is generally impossible to solve exactly. Instead, an iterative technique should be used to converge to the solution. One option is to use Newton's method to converge on the prediction, but this requires the computation of the Jacobian. For a high-dimensional system like those in computational fluid dynamics or numerical relativity, it may be infeasible to compute this Jacobian.
A Jacobian-free alternative is fixed-point iteration. If {\displaystyle f} is the velocity of the system, then the Crank–Nicolson prediction will be a fixed point of the map
{\displaystyle \Phi (x)=x_{0}+{\frac {h}{2}}\left[f(x_{0})+f(x)\right].}
If the map iteration {\displaystyle x^{(i+1)}=\Phi (x^{(i)})} does not converge, the parameterized map {\displaystyle \Theta (x,\alpha )=\alpha x+(1-\alpha )\Phi (x)}, with {\displaystyle \alpha \in (0,1)}, may be better behaved. In expanded form, the update formula is
{\displaystyle x^{i+1}=\alpha x^{i}+(1-\alpha )\left[x_{0}+{\frac {h}{2}}\left(f(x_{0})+f(x^{i})\right)\right],}
where {\displaystyle x^{i}} is the current guess and {\displaystyle x_{0}} is the value from the previous time step.
Even for high-dimensional systems, iteration of this map can converge surprisingly quickly.
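A minimal sketch of this relaxed fixed-point iteration, assuming f is the system's velocity and x0 the state vector at the previous time step; the predictor, tolerance, iteration cap, and relaxation weight alpha are illustrative choices, not values from the text:

```python
import numpy as np

def crank_nicolson_step(f, x0, h, alpha=0.5, tol=1e-10, max_iter=100):
    """Solve x = x0 + (h/2) [f(x0) + f(x)] by relaxed fixed-point iteration."""
    f0 = f(x0)
    x = x0 + h * f0               # explicit Euler predictor as initial guess
    for _ in range(max_iter):
        x_new = alpha * x + (1 - alpha) * (x0 + (h / 2) * (f0 + f(x)))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x                      # may not have converged; caller should check
```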
== Application in financial mathematics ==
Because a number of other phenomena can be modeled with the heat equation (often called the diffusion equation in financial mathematics), the Crank–Nicolson method has been applied to those areas as well. Particularly, the Black–Scholes option pricing model's differential equation can be transformed into the heat equation, and thus numerical solutions for option pricing can be obtained with the Crank–Nicolson method.
The importance of this for finance is that option pricing problems, when extended beyond the standard assumptions (e.g. incorporating changing dividends), cannot be solved in closed form, but can be solved using this method. Note, however, that for non-smooth final conditions (which happen for most financial instruments), the Crank–Nicolson method is not satisfactory, as numerical oscillations are not damped. For vanilla options, this results in oscillation in the gamma value around the strike price. Therefore, special damping initialization steps are necessary (e.g., fully implicit finite difference method).
== See also ==
Financial mathematics
Trapezoidal rule
== References ==
== External links ==
Numerical PDE Techniques for Scientists and Engineers, open access Lectures and Codes for Numerical PDEs
An example of how to apply and implement the Crank–Nicolson method for the Advection equation
In mathematics, an exact differential equation or total differential equation is a certain kind of ordinary differential equation which is widely used in physics and engineering.
== Definition ==
Given a simply connected and open subset D of {\displaystyle \mathbb {R} ^{2}} and two functions I and J which are continuous on D, an implicit first-order ordinary differential equation of the form
{\displaystyle I(x,y)\,dx+J(x,y)\,dy=0,}
is called an exact differential equation if there exists a continuously differentiable function F, called the potential function, so that
{\displaystyle {\frac {\partial F}{\partial x}}=I}
and
{\displaystyle {\frac {\partial F}{\partial y}}=J.}
An exact equation may also be presented in the following form:
{\displaystyle I(x,y)+J(x,y)\,y'(x)=0}
where the same constraints on I and J apply for the differential equation to be exact.
The nomenclature of "exact differential equation" refers to the exact differential of a function. For a function {\displaystyle F(x_{0},x_{1},...,x_{n-1},x_{n})}, the exact or total derivative with respect to {\displaystyle x_{0}} is given by
{\displaystyle {\frac {dF}{dx_{0}}}={\frac {\partial F}{\partial x_{0}}}+\sum _{i=1}^{n}{\frac {\partial F}{\partial x_{i}}}{\frac {dx_{i}}{dx_{0}}}.}
=== Example ===
The function {\displaystyle F:\mathbb {R} ^{2}\to \mathbb {R} } given by
{\displaystyle F(x,y)={\frac {1}{2}}(x^{2}+y^{2})+c}
is a potential function for the differential equation
{\displaystyle x\,dx+y\,dy=0.}
== First-order exact differential equations ==
=== Identifying first-order exact differential equations ===
Let the functions {\textstyle M}, {\textstyle N}, {\textstyle M_{y}}, and {\textstyle N_{x}}, where the subscripts denote the partial derivative with respect to the relative variable, be continuous in the region {\textstyle R:\alpha <x<\beta ,\gamma <y<\delta }. Then the differential equation
{\displaystyle M(x,y)+N(x,y){\frac {dy}{dx}}=0}
is exact if and only if
{\displaystyle M_{y}(x,y)=N_{x}(x,y)}
That is, there exists a function {\displaystyle \psi (x,y)}, called a potential function, such that
{\displaystyle \psi _{x}(x,y)=M(x,y){\text{ and }}\psi _{y}(x,y)=N(x,y)}
So, in general:
{\displaystyle M_{y}(x,y)=N_{x}(x,y)\iff {\begin{cases}\exists \psi (x,y)\\\psi _{x}(x,y)=M(x,y)\\\psi _{y}(x,y)=N(x,y)\end{cases}}}
==== Proof ====
The proof has two parts.
First, suppose there is a function {\displaystyle \psi (x,y)} such that
{\displaystyle \psi _{x}(x,y)=M(x,y){\text{ and }}\psi _{y}(x,y)=N(x,y)}
It then follows that
{\displaystyle M_{y}(x,y)=\psi _{xy}(x,y){\text{ and }}N_{x}(x,y)=\psi _{yx}(x,y)}
Since {\displaystyle M_{y}} and {\displaystyle N_{x}} are continuous, {\displaystyle \psi _{xy}} and {\displaystyle \psi _{yx}} are also continuous, which guarantees their equality.
The second part of the proof involves the construction of {\displaystyle \psi (x,y)} and can also be used as a procedure for solving first-order exact differential equations. Suppose that
{\displaystyle M_{y}(x,y)=N_{x}(x,y)}
and let there be a function {\displaystyle \psi (x,y)} for which
{\displaystyle \psi _{x}(x,y)=M(x,y){\text{ and }}\psi _{y}(x,y)=N(x,y)}
Begin by integrating the first equation with respect to {\displaystyle x}. In practice, it does not matter whether one integrates the first or the second equation, so long as the integration is done with respect to the appropriate variable.
{\displaystyle {\frac {\partial \psi }{\partial x}}(x,y)=M(x,y)}
{\displaystyle \psi (x,y)=\int M(x,y)\,dx+h(y)}
{\displaystyle \psi (x,y)=Q(x,y)+h(y)}
where {\displaystyle Q(x,y)} is any differentiable function such that {\displaystyle Q_{x}=M}. The function {\displaystyle h(y)} plays the role of a constant of integration, but instead of just a constant, it is a function of {\displaystyle y}, since {\displaystyle M} is a function of both {\displaystyle x} and {\displaystyle y} and we are only integrating with respect to {\displaystyle x}.
Now we show that it is always possible to find an {\displaystyle h(y)} such that {\displaystyle \psi _{y}=N}.
{\displaystyle \psi (x,y)=Q(x,y)+h(y)}
Differentiate both sides with respect to {\displaystyle y}:
{\displaystyle {\frac {\partial \psi }{\partial y}}(x,y)={\frac {\partial Q}{\partial y}}(x,y)+h'(y)}
Set the result equal to {\displaystyle N} and solve for {\displaystyle h'(y)}:
{\displaystyle h'(y)=N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)}
In order to determine {\displaystyle h'(y)} from this equation, the right-hand side must depend only on {\displaystyle y}. This can be proven by showing that its derivative with respect to {\displaystyle x} is always zero, so differentiate the right-hand side with respect to {\displaystyle x}:
{\displaystyle {\frac {\partial N}{\partial x}}(x,y)-{\frac {\partial }{\partial x}}{\frac {\partial Q}{\partial y}}(x,y)\iff {\frac {\partial N}{\partial x}}(x,y)-{\frac {\partial }{\partial y}}{\frac {\partial Q}{\partial x}}(x,y)}
Since {\displaystyle Q_{x}=M}, this becomes
{\displaystyle {\frac {\partial N}{\partial x}}(x,y)-{\frac {\partial M}{\partial y}}(x,y)}
which is zero by our initial supposition that
{\displaystyle M_{y}(x,y)=N_{x}(x,y)}
Therefore,
{\displaystyle h'(y)=N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)}
{\displaystyle h(y)=\int {\left(N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)\right)dy}}
{\displaystyle \psi (x,y)=Q(x,y)+\int \left(N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)\right)\,dy+C}
And this completes the proof.
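As an illustration of the construction in the proof, the following sympy sketch applies it to a hypothetical exact equation with M = 2xy + 1 and N = x² + 3y² (this example is not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
M = 2*x*y + 1
N = x**2 + 3*y**2

assert sp.diff(M, y) == sp.diff(N, x)      # exactness test: M_y = N_x

Q = sp.integrate(M, x)                     # Q(x, y) with Q_x = M
h = sp.integrate(N - sp.diff(Q, y), y)     # from h'(y) = N - Q_y
psi = Q + h
print(psi)                                 # x**2*y + x + y**3
assert sp.simplify(sp.diff(psi, x) - M) == 0
assert sp.simplify(sp.diff(psi, y) - N) == 0
```

The implicit solutions of the equation are then the level sets ψ(x, y) = c.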
=== Solutions to first-order exact differential equations ===
First-order exact differential equations of the form
{\displaystyle M(x,y)+N(x,y){\frac {dy}{dx}}=0}
can be written in terms of the potential function {\displaystyle \psi (x,y)} as
{\displaystyle {\frac {\partial \psi }{\partial x}}+{\frac {\partial \psi }{\partial y}}{\frac {dy}{dx}}=0}
where
{\displaystyle {\begin{cases}\psi _{x}(x,y)=M(x,y)\\\psi _{y}(x,y)=N(x,y)\end{cases}}}
This is equivalent to taking the total derivative of {\displaystyle \psi (x,y)}:
{\displaystyle {\frac {\partial \psi }{\partial x}}+{\frac {\partial \psi }{\partial y}}{\frac {dy}{dx}}=0\iff {\frac {d}{dx}}\psi (x,y(x))=0}
The solutions to an exact differential equation are then given by
{\displaystyle \psi (x,y(x))=c}
and the problem reduces to finding {\displaystyle \psi (x,y)}.
This can be done by integrating the two expressions {\displaystyle M(x,y)\,dx} and {\displaystyle N(x,y)\,dy}, then writing down each term in the resulting expressions only once and summing them up, in order to get {\displaystyle \psi (x,y)}.
The reasoning behind this is the following. Since
{\displaystyle {\begin{cases}\psi _{x}(x,y)=M(x,y)\\\psi _{y}(x,y)=N(x,y)\end{cases}}}
it follows, by integrating both sides, that
{\displaystyle {\begin{cases}\psi (x,y)=\int M(x,y)\,dx+h(y)=Q(x,y)+h(y)\\\psi (x,y)=\int N(x,y)\,dy+g(x)=P(x,y)+g(x)\end{cases}}}
Therefore,
{\displaystyle Q(x,y)+h(y)=P(x,y)+g(x)}
where {\displaystyle Q(x,y)} and {\displaystyle P(x,y)} are differentiable functions such that {\displaystyle Q_{x}=M} and {\displaystyle P_{y}=N}.
For this to hold, and for both sides to yield the exact same expression, namely {\displaystyle \psi (x,y)}, {\displaystyle h(y)} must be contained within the expression for {\displaystyle P(x,y)}: it cannot be contained within {\displaystyle g(x)}, since it is entirely a function of {\displaystyle y} and not of {\displaystyle x}. By analogy, {\displaystyle g(x)} must be contained within the expression for {\displaystyle Q(x,y)}.
Ergo,
{\displaystyle Q(x,y)=g(x)+f(x,y){\text{ and }}P(x,y)=h(y)+d(x,y)}
for some expressions {\displaystyle f(x,y)} and {\displaystyle d(x,y)}.
Plugging these into the above equation, we find that
{\displaystyle g(x)+f(x,y)+h(y)=h(y)+d(x,y)+g(x)\Rightarrow f(x,y)=d(x,y)}
and so {\displaystyle f(x,y)} and {\displaystyle d(x,y)} turn out to be the same function. Therefore,
{\displaystyle Q(x,y)=g(x)+f(x,y){\text{ and }}P(x,y)=h(y)+f(x,y)}
Since we already showed that
{\displaystyle {\begin{cases}\psi (x,y)=Q(x,y)+h(y)\\\psi (x,y)=P(x,y)+g(x)\end{cases}}}
it follows that
{\displaystyle \psi (x,y)=g(x)+f(x,y)+h(y)}
So, we can construct {\displaystyle \psi (x,y)} by computing {\displaystyle \int M(x,y)\,dx} and {\displaystyle \int N(x,y)\,dy}, taking the common terms found in the two resulting expressions (that would be {\displaystyle f(x,y)}), and then adding the terms which are uniquely found in either one of them: {\displaystyle g(x)} and {\displaystyle h(y)}.
== Second-order exact differential equations ==
The concept of exact differential equations can be extended to second-order equations. Consider starting with the first-order exact equation:
{\displaystyle I(x,y)+J(x,y){dy \over dx}=0}
Since both functions {\displaystyle I(x,y)} and {\displaystyle J(x,y)} are functions of two variables, implicitly differentiating the multivariate function yields
{\displaystyle {dI \over dx}+\left({dJ \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}(J(x,y))=0}
Expanding the total derivatives gives that
{\displaystyle {dI \over dx}={\partial I \over \partial x}+{\partial I \over \partial y}{dy \over dx}}
and that
{\displaystyle {dJ \over dx}={\partial J \over \partial x}+{\partial J \over \partial y}{dy \over dx}}
Combining the {\textstyle {dy \over dx}} terms gives
{\displaystyle {\partial I \over \partial x}+{dy \over dx}\left({\partial I \over \partial y}+{\partial J \over \partial x}+{\partial J \over \partial y}{dy \over dx}\right)+{d^{2}y \over dx^{2}}(J(x,y))=0}
If the equation is exact, then {\textstyle {\partial J \over \partial x}={\partial I \over \partial y}}. Additionally, the total derivative of {\displaystyle J(x,y)} is equal to its implicit ordinary derivative {\textstyle {dJ \over dx}}. This leads to the rewritten equation
{\displaystyle {\partial I \over \partial x}+{dy \over dx}\left({\partial J \over \partial x}+{dJ \over dx}\right)+{d^{2}y \over dx^{2}}(J(x,y))=0}
Now, let there be some second-order differential equation
{\displaystyle f(x,y)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}(J(x,y))=0}
If {\displaystyle {\partial J \over \partial x}={\partial I \over \partial y}} for exact differential equations, then
{\displaystyle \int \left({\partial I \over \partial y}\right)\,dy=\int \left({\partial J \over \partial x}\right)\,dy}
and
{\displaystyle \int \left({\partial I \over \partial y}\right)\,dy=\int \left({\partial J \over \partial x}\right)\,dy=I(x,y)-h(x)}
where {\displaystyle h(x)} is some arbitrary function only of {\displaystyle x} that was differentiated away to zero upon taking the partial derivative of {\displaystyle I(x,y)} with respect to {\displaystyle y}. Although the sign on {\displaystyle h(x)} could be positive, it is more intuitive to think of the integral's result as {\displaystyle I(x,y)} missing some original extra function {\displaystyle h(x)} that was partially differentiated to zero.
Next, if
{\displaystyle {dI \over dx}={\partial I \over \partial x}+{\partial I \over \partial y}{dy \over dx}}
then the term {\displaystyle {\partial I \over \partial x}} should be a function only of {\displaystyle x} and {\displaystyle y}, since partial differentiation with respect to {\displaystyle x} will hold {\displaystyle y} constant and not produce any derivatives of {\displaystyle y}. In the second-order equation
{\displaystyle f(x,y)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}(J(x,y))=0}
only the term {\displaystyle f(x,y)} is a term purely of {\displaystyle x} and {\displaystyle y}. Let {\displaystyle {\partial I \over \partial x}=f(x,y)}. Then
{\displaystyle f(x,y)={dI \over dx}-{\partial I \over \partial y}{dy \over dx}}
Since the total derivative of {\displaystyle I(x,y)} with respect to {\displaystyle x} is equivalent to the implicit ordinary derivative {\displaystyle {dI \over dx}}, then
{\displaystyle f(x,y)+{\partial I \over \partial y}{dy \over dx}={dI \over dx}={d \over dx}(I(x,y)-h(x))+{dh(x) \over dx}}
So,
{\displaystyle {dh(x) \over dx}=f(x,y)+{\partial I \over \partial y}{dy \over dx}-{d \over dx}(I(x,y)-h(x))}
and
{\displaystyle h(x)=\int \left(f(x,y)+{\partial I \over \partial y}{dy \over dx}-{d \over dx}(I(x,y)-h(x))\right)\,dx}
Thus, the second-order differential equation
{\displaystyle f(x,y)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}(J(x,y))=0}
is exact only if
{\displaystyle g\left(x,y,{dy \over dx}\right)={dJ \over dx}+{\partial J \over \partial x}}
and only if the expression below,
{\displaystyle \int \left(f(x,y)+{\partial I \over \partial y}{dy \over dx}-{d \over dx}(I(x,y)-h(x))\right)\,dx=\int \left(f(x,y)-{\partial \left(I(x,y)-h(x)\right) \over \partial x}\right)\,dx}
is a function solely of {\displaystyle x}. Once {\displaystyle h(x)} is calculated with its arbitrary constant, it is added to {\displaystyle I(x,y)-h(x)} to make {\displaystyle I(x,y)}. If the equation is exact, then we can reduce it to the first-order exact form, which is solvable by the usual method for first-order exact equations:
{\displaystyle I(x,y)+J(x,y){dy \over dx}=0}
Now, however, the final implicit solution will contain a {\displaystyle C_{1}x} term from integrating {\displaystyle h(x)} with respect to {\displaystyle x} twice, as well as a {\displaystyle C_{2}}: two arbitrary constants, as expected from a second-order equation.
=== Example ===
Given the differential equation
{\displaystyle (1-x^{2})y''-4xy'-2y=0}
one can always easily check for exactness by examining the {\displaystyle y''} term. In this case, both the partial and total derivative of {\displaystyle 1-x^{2}} with respect to {\displaystyle x} are {\displaystyle -2x}, so their sum is {\displaystyle -4x}, which is exactly the term in front of {\displaystyle y'}. With one of the conditions for exactness met, one can calculate that
{\displaystyle \int (-2x)\,dy=I(x,y)-h(x)=-2xy}
Letting {\displaystyle f(x,y)=-2y}, then
{\displaystyle \int \left(-2y-2xy'-{d \over dx}(-2xy)\right)\,dx=\int (-2y-2xy'+2xy'+2y)\,dx=\int (0)\,dx=h(x)}
So, {\displaystyle h(x)} is indeed a function only of {\displaystyle x}, and the second-order differential equation is exact. Therefore, {\displaystyle h(x)=C_{1}} and {\displaystyle I(x,y)=-2xy+C_{1}}. Reduction to a first-order exact equation yields
{\displaystyle -2xy+C_{1}+(1-x^{2})y'=0}
Integrating {\displaystyle I(x,y)} with respect to {\displaystyle x} yields
{\displaystyle -x^{2}y+C_{1}x+i(y)=0}
where {\displaystyle i(y)} is some arbitrary function of {\displaystyle y}. Differentiating with respect to {\displaystyle y} gives an equation correlating the derivative and the {\displaystyle y'} term:
{\displaystyle -x^{2}+i'(y)=1-x^{2}}
So, {\displaystyle i(y)=y+C_{2}} and the full implicit solution becomes
{\displaystyle C_{1}x+C_{2}+y-x^{2}y=0}
Solving explicitly for {\displaystyle y} yields
{\displaystyle y={\frac {C_{1}x+C_{2}}{1-x^{2}}}}
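As a quick consistency check (not part of the original derivation), one can verify with sympy that this explicit solution satisfies the starting equation for arbitrary constants:

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y = (C1*x + C2) / (1 - x**2)
residual = (1 - x**2)*sp.diff(y, x, 2) - 4*x*sp.diff(y, x) - 2*y
assert sp.simplify(residual) == 0
```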
== Higher-order exact differential equations ==
The concepts of exact differential equations can be extended to any order. Starting with the exact second-order equation
{\displaystyle {d^{2}y \over dx^{2}}(J(x,y))+{dy \over dx}\left({dJ \over dx}+{\partial J \over \partial x}\right)+f(x,y)=0}
it was previously shown that the equation is defined such that
{\displaystyle f(x,y)={dh(x) \over dx}+{d \over dx}(I(x,y)-h(x))-{\partial J \over \partial x}{dy \over dx}}
Implicit differentiation of the exact second-order equation {\displaystyle n} times will yield an {\displaystyle (n+2)}th-order differential equation, with new conditions for exactness that can be readily deduced from the form of the equation produced. For example, differentiating the above second-order differential equation once to yield a third-order exact equation gives the following form:
{\displaystyle {d^{3}y \over dx^{3}}(J(x,y))+{d^{2}y \over dx^{2}}{dJ \over dx}+{d^{2}y \over dx^{2}}\left({dJ \over dx}+{\partial J \over \partial x}\right)+{dy \over dx}\left({d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)\right)+{df(x,y) \over dx}=0}
where
{\displaystyle {df(x,y) \over dx}={d^{2}h(x) \over dx^{2}}+{d^{2} \over dx^{2}}(I(x,y)-h(x))-{d^{2}y \over dx^{2}}{\partial J \over \partial x}-{dy \over dx}{d \over dx}\left({\partial J \over \partial x}\right)=F\left(x,y,{dy \over dx}\right)}
and where {\displaystyle F\left(x,y,{dy \over dx}\right)} is a function only of {\displaystyle x,y} and {\displaystyle {dy \over dx}}. Combining all {\displaystyle {dy \over dx}} and {\displaystyle {d^{2}y \over dx^{2}}} terms not coming from {\displaystyle F\left(x,y,{dy \over dx}\right)} gives
{\displaystyle {d^{3}y \over dx^{3}}(J(x,y))+{d^{2}y \over dx^{2}}\left(2{dJ \over dx}+{\partial J \over \partial x}\right)+{dy \over dx}\left({d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)\right)+F\left(x,y,{dy \over dx}\right)=0}
Thus, the three conditions for exactness for a third-order differential equation are: the {\displaystyle {d^{2}y \over dx^{2}}} term must be {\displaystyle 2{dJ \over dx}+{\partial J \over \partial x}}, the {\displaystyle {dy \over dx}} term must be {\displaystyle {d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)}, and
{\displaystyle F\left(x,y,{dy \over dx}\right)-{d^{2} \over dx^{2}}(I(x,y)-h(x))+{d^{2}y \over dx^{2}}{\partial J \over \partial x}+{dy \over dx}{d \over dx}\left({\partial J \over \partial x}\right)}
must be a function solely of {\displaystyle x}.
=== Example ===
Consider the nonlinear third-order differential equation
{\displaystyle yy'''+3y'y''+12x^{2}=0}
If {\displaystyle J(x,y)=y}, then {\displaystyle y''\left(2{dJ \over dx}+{\partial J \over \partial x}\right)} is {\displaystyle 2y'y''} and {\displaystyle y'\left({d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)\right)=y'y''}, which together sum to {\displaystyle 3y'y''}. Fortunately, this appears in our equation. For the last condition of exactness,
{\displaystyle F\left(x,y,{dy \over dx}\right)-{d^{2} \over dx^{2}}\left(I(x,y)-h(x)\right)+{d^{2}y \over dx^{2}}{\partial J \over \partial x}+{dy \over dx}{d \over dx}\left({\partial J \over \partial x}\right)=12x^{2}-0+0+0=12x^{2}}
which is indeed a function only of {\displaystyle x}. So, the differential equation is exact. Integrating twice yields that {\displaystyle h(x)=x^{4}+C_{1}x+C_{2}=I(x,y)}. Rewriting the equation as a first-order exact differential equation yields
{\displaystyle x^{4}+C_{1}x+C_{2}+yy'=0}
Integrating {\displaystyle I(x,y)} with respect to {\displaystyle x} gives
{\displaystyle {x^{5} \over 5}+C_{1}x^{2}+C_{2}x+i(y)=0}
Differentiating with respect to {\displaystyle y} and equating that to the term in front of {\displaystyle y'} in the first-order equation gives {\displaystyle i'(y)=y}, so that {\displaystyle i(y)={y^{2} \over 2}+C_{3}}. The full implicit solution becomes
{\displaystyle {x^{5} \over 5}+C_{1}x^{2}+C_{2}x+C_{3}+{y^{2} \over 2}=0}
The explicit solution, then, is
{\displaystyle y=\pm {\sqrt {C_{1}x^{2}+C_{2}x+C_{3}-{\frac {2x^{5}}{5}}}}}
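As with the second-order example, this can be checked mechanically; the following sympy snippet (not part of the original text) verifies the explicit solution against the starting equation:

```python
import sympy as sp

x, C1, C2, C3 = sp.symbols('x C1 C2 C3')
y = sp.sqrt(C1*x**2 + C2*x + C3 - sp.Rational(2, 5)*x**5)
residual = y*sp.diff(y, x, 3) + 3*sp.diff(y, x)*sp.diff(y, x, 2) + 12*x**2
assert sp.simplify(residual) == 0
```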
== See also ==
Exact differential
Inexact differential equation
== References ==
== Further reading ==
Boyce, William E.; DiPrima, Richard C. (1986). Elementary Differential Equations (4th ed.). New York: John Wiley & Sons, Inc. ISBN 0-471-07894-8
In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).
This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.
== Definition ==
Given a nonnegative integer m, an order-{\displaystyle m} linear differential operator is a map {\displaystyle P} from a function space {\displaystyle {\mathcal {F}}_{1}} on {\displaystyle \mathbb {R} ^{n}} to another function space {\displaystyle {\mathcal {F}}_{2}} that can be written as
{\displaystyle P=\sum _{|\alpha |\leq m}a_{\alpha }(x)D^{\alpha }\ ,}
where {\displaystyle \alpha =(\alpha _{1},\alpha _{2},\cdots ,\alpha _{n})} is a multi-index of non-negative integers, {\displaystyle |\alpha |=\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}}, and for each {\displaystyle \alpha }, {\displaystyle a_{\alpha }(x)} is a function on some open domain in n-dimensional space. The operator {\displaystyle D^{\alpha }} is interpreted as
{\displaystyle D^{\alpha }={\frac {\partial ^{|\alpha |}}{\partial x_{1}^{\alpha _{1}}\partial x_{2}^{\alpha _{2}}\cdots \partial x_{n}^{\alpha _{n}}}}}
Thus for a function {\displaystyle f\in {\mathcal {F}}_{1}}:
{\displaystyle Pf=\sum _{|\alpha |\leq m}a_{\alpha }(x){\frac {\partial ^{|\alpha |}f}{\partial x_{1}^{\alpha _{1}}\partial x_{2}^{\alpha _{2}}\cdots \partial x_{n}^{\alpha _{n}}}}}
The notation {\displaystyle D^{\alpha }} is justified (i.e., independent of the order of differentiation) because of the symmetry of second derivatives.
The polynomial p obtained by replacing partials {\displaystyle {\frac {\partial }{\partial x_{i}}}} by variables {\displaystyle \xi _{i}} in P is called the total symbol of P; i.e., the total symbol of P above is
{\displaystyle p(x,\xi )=\sum _{|\alpha |\leq m}a_{\alpha }(x)\xi ^{\alpha }}
where {\displaystyle \xi ^{\alpha }=\xi _{1}^{\alpha _{1}}\cdots \xi _{n}^{\alpha _{n}}.}
The highest homogeneous component of the symbol, namely
{\displaystyle \sigma (x,\xi )=\sum _{|\alpha |=m}a_{\alpha }(x)\xi ^{\alpha }}
is called the principal symbol of P. While the total symbol is not intrinsically defined, the principal symbol is intrinsically defined (i.e., it is a function on the cotangent bundle).
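As a concrete illustration (the operator here is a hypothetical example, not one from the text), the total and principal symbols of a second-order operator can be read off by replacing each partial with a variable ξᵢ:

```python
import sympy as sp

x, y, xi1, xi2 = sp.symbols('x y xi1 xi2')
f = sp.Function('f')(x, y)

# Hypothetical operator P = x * d^2/dx^2 + d/dy + 1 applied to f
Pf = x*sp.diff(f, x, 2) + sp.diff(f, y) + f

total_symbol = x*xi1**2 + xi2 + 1     # replace d/dx -> xi1, d/dy -> xi2
principal_symbol = x*xi1**2           # highest-order part (|alpha| = m = 2)
print(Pf, total_symbol, principal_symbol)
```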
More generally, let E and F be vector bundles over a manifold X. Then the linear operator
{\displaystyle P:C^{\infty }(E)\to C^{\infty }(F)}
is a differential operator of order {\displaystyle k} if, in local coordinates on X, we have
{\displaystyle Pu(x)=\sum _{|\alpha |=k}P^{\alpha }(x){\frac {\partial ^{\alpha }u}{\partial x^{\alpha }}}+{\text{lower-order terms}}}
where, for each multi-index α, {\displaystyle P^{\alpha }(x):E\to F} is a bundle map, symmetric on the indices α.
The kth order coefficients of P transform as a symmetric tensor
{\displaystyle \sigma _{P}:S^{k}(T^{*}X)\otimes E\to F}
whose domain is the tensor product of the kth symmetric power of the cotangent bundle of X with E, and whose codomain is F. This symmetric tensor is known as the principal symbol (or just the symbol) of P.
The coordinate system xi permits a local trivialization of the cotangent bundle by the coordinate differentials dxi, which determine fiber coordinates ξi. In terms of a basis of frames eμ, fν of E and F, respectively, the differential operator P decomposes into components
{\displaystyle (Pu)_{\nu }=\sum _{\mu }P_{\nu \mu }u_{\mu }}
on each section u of E. Here Pνμ is the scalar differential operator defined by
{\displaystyle P_{\nu \mu }=\sum _{\alpha }P_{\nu \mu }^{\alpha }{\frac {\partial }{\partial x^{\alpha }}}.}
With this trivialization, the principal symbol can now be written
{\displaystyle (\sigma _{P}(\xi )u)_{\nu }=\sum _{|\alpha |=k}\sum _{\mu }P_{\nu \mu }^{\alpha }(x)\xi _{\alpha }u_{\mu }.}
In the cotangent space over a fixed point x of X, the symbol {\displaystyle \sigma _{P}} defines a homogeneous polynomial of degree k in {\displaystyle T_{x}^{*}X} with values in {\displaystyle \operatorname {Hom} (E_{x},F_{x})}.
== Fourier interpretation ==
A differential operator P and its symbol appear naturally in connection with the Fourier transform as follows. Let ƒ be a Schwartz function. Then by the inverse Fourier transform,
{\displaystyle Pf(x)={\frac {1}{(2\pi )^{\frac {d}{2}}}}\int \limits _{\mathbf {R} ^{d}}e^{ix\cdot \xi }p(x,i\xi ){\hat {f}}(\xi )\,d\xi .}
This exhibits P as a Fourier multiplier. The pseudo-differential operators are obtained from a more general class of functions p(x,ξ), satisfying at most polynomial growth conditions in ξ, for which this integral remains well-behaved.
== Examples ==
The differential operator {\displaystyle P} is elliptic if its symbol is invertible; that is, for each nonzero {\displaystyle \theta \in T^{*}X} the bundle map {\displaystyle \sigma _{P}(\theta ,\dots ,\theta )} is invertible. On a compact manifold, it follows from the elliptic theory that P is a Fredholm operator: it has finite-dimensional kernel and cokernel.
In the study of hyperbolic and parabolic partial differential equations, zeros of the principal symbol correspond to the characteristics of the partial differential equation.
In applications to the physical sciences, operators such as the Laplace operator play a major role in setting up and solving partial differential equations.
In differential topology, the exterior derivative and Lie derivative operators have intrinsic meaning.
In abstract algebra, the concept of a derivation allows for generalizations of differential operators, which do not require the use of calculus. Frequently such generalizations are employed in algebraic geometry and commutative algebra. See also Jet (mathematics).
In the development of holomorphic functions of a complex variable z = x + iy, sometimes a complex function is considered to be a function of two real variables x and y. Use is made of the Wirtinger derivatives, which are the partial differential operators
{\displaystyle {\frac {\partial }{\partial z}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}-i{\frac {\partial }{\partial y}}\right)\ ,\quad {\frac {\partial }{\partial {\bar {z}}}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}+i{\frac {\partial }{\partial y}}\right)\ .}
This approach is also used to study functions of several complex variables and functions of a motor variable.
The differential operator del, also called nabla, is an important vector differential operator. It appears frequently in physics in places like the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as
{\displaystyle \nabla =\mathbf {\hat {x}} {\partial \over \partial x}+\mathbf {\hat {y}} {\partial \over \partial y}+\mathbf {\hat {z}} {\partial \over \partial z}.}
Del defines the gradient, and is used to calculate the curl, divergence, and Laplacian of various objects.
A chiral differential operator.
== History ==
The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800.
== Notations ==
The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable x include:
{\displaystyle {d \over dx}}, {\displaystyle D}, {\displaystyle D_{x},} and {\displaystyle \partial _{x}}.
When taking higher, nth-order derivatives, the operator may be written:
{\displaystyle {d^{n} \over dx^{n}}}, {\displaystyle D^{n}}, {\displaystyle D_{x}^{n}}, or {\displaystyle \partial _{x}^{n}}.
The derivative of a function f of an argument x is sometimes given as either of the following:
{\displaystyle [f(x)]'}
{\displaystyle f'(x).}
The D notation's use and creation is credited to Oliver Heaviside, who considered differential operators of the form
{\displaystyle \sum _{k=0}^{n}c_{k}D^{k}}
in his study of differential equations.
One of the most frequently seen differential operators is the Laplacian operator, defined by
{\displaystyle \Delta =\nabla ^{2}=\sum _{k=1}^{n}{\frac {\partial ^{2}}{\partial x_{k}^{2}}}.}
Another differential operator is the Θ operator, or theta operator, defined by
{\displaystyle \Theta =z{d \over dz}.}
This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z:
{\displaystyle \Theta (z^{k})=kz^{k},\quad k=0,1,2,\dots }
In n variables the homogeneity operator is given by
{\displaystyle \Theta =\sum _{k=1}^{n}x_{k}{\frac {\partial }{\partial x_{k}}}.}
As in one variable, the eigenspaces of Θ are the spaces of homogeneous functions. (Euler's homogeneous function theorem)
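A quick sympy check of the eigenfunction property above (the loop bound is an arbitrary choice):

```python
import sympy as sp

z = sp.symbols('z')
theta = lambda g: z*sp.diff(g, z)      # Theta = z d/dz
for k in range(5):
    assert sp.simplify(theta(z**k) - k*z**k) == 0
```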
In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative notation is used: the result of applying the operator to the function on the left side of the operator, the result of applying it to the function on the right side, and the difference obtained when applying the differential operator to the functions on both sides are denoted by arrows as follows:
{\displaystyle f{\overleftarrow {\partial _{x}}}g=g\cdot \partial _{x}f}
{\displaystyle f{\overrightarrow {\partial _{x}}}g=f\cdot \partial _{x}g}
{\displaystyle f{\overleftrightarrow {\partial _{x}}}g=f\cdot \partial _{x}g-g\cdot \partial _{x}f.}
Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.
== Adjoint of an operator ==
Given a linear differential operator {\displaystyle T}
{\displaystyle Tu=\sum _{k=0}^{n}a_{k}(x)D^{k}u}
the adjoint of this operator is defined as the operator {\displaystyle T^{*}} such that
{\displaystyle \langle Tu,v\rangle =\langle u,T^{*}v\rangle }
where the notation {\displaystyle \langle \cdot ,\cdot \rangle } is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product (or inner product).
=== Formal adjoint in one variable ===
In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by
{\displaystyle \langle f,g\rangle =\int _{a}^{b}{\overline {f(x)}}\,g(x)\,dx,}
where the line over f(x) denotes the complex conjugate of f(x). If one moreover adds the condition that f or g vanishes as {\displaystyle x\to a} and {\displaystyle x\to b}, one can also define the adjoint of T by
{\displaystyle T^{*}u=\sum _{k=0}^{n}(-1)^{k}D^{k}\left[{\overline {a_{k}(x)}}u\right].}
This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When
T
∗
{\displaystyle T^{*}}
is defined according to this formula, it is called the formal adjoint of T.
A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.
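As a small sanity check of the formal adjoint formula (the coefficient a(x) and the test functions here are illustrative choices, not from the text), one can verify ⟨Tu, v⟩ = ⟨u, T*v⟩ for the first-order operator T = a(x)D on (0, 1) with functions vanishing at the endpoints:

```python
import sympy as sp

x = sp.symbols('x')
a = 1 + x**2                      # illustrative real coefficient a_1(x)
u = x*(1 - x)                     # vanishes at 0 and 1
v = sp.sin(sp.pi*x)               # vanishes at 0 and 1

Tu = a*sp.diff(u, x)              # T u = a(x) u'
Tstar_v = -sp.diff(a*v, x)        # formal adjoint: T* v = -D[a(x) v]
lhs = sp.integrate(Tu*v, (x, 0, 1))
rhs = sp.integrate(u*Tstar_v, (x, 0, 1))
assert sp.simplify(lhs - rhs) == 0
```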
=== Several variables ===
If Ω is a domain in Rn, and P a differential operator on Ω, then the adjoint of P is defined in L2(Ω) by duality in the analogous manner:
{\displaystyle \langle f,P^{*}g\rangle _{L^{2}(\Omega )}=\langle Pf,g\rangle _{L^{2}(\Omega )}}
for all smooth L2 functions f, g. Since smooth functions are dense in L2, this defines the adjoint on a dense subset of L2: P* is a densely defined operator.
=== Example ===
The Sturm–Liouville operator is a well-known example of a formal self-adjoint operator. This second-order linear differential operator L can be written in the form
{\displaystyle Lu=-(pu')'+qu=-(pu''+p'u')+qu=-pu''-p'u'+qu=(-p)D^{2}u+(-p')Du+(q)u.}
This property can be proven using the formal adjoint definition above.
This operator is central to Sturm–Liouville theory where the eigenfunctions (analogues to eigenvectors) of this operator are considered.
== Properties ==
Differentiation is linear, i.e.
{\displaystyle D(f+g)=(Df)+(Dg),}
{\displaystyle D(af)=a(Df),}
where f and g are functions, and a is a constant.
Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule
{\displaystyle (D_{1}\circ D_{2})(f)=D_{1}(D_{2}(f)).}
Some care is then required: firstly, any function coefficients in the operator D2 must be differentiable as many times as the application of D1 requires. To get a ring of such operators, we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be commutative: an operator gD is not in general the same as Dg. For example, we have the relation, basic in quantum mechanics:
{\displaystyle Dx-xD=1.}
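This relation can be checked mechanically; the following sympy snippet verifies (Dx − xD)f = f for a generic function f:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

Dxf = sp.diff(x*f, x)        # apply multiplication by x, then D
xDf = x*sp.diff(f, x)        # apply D, then multiplication by x
assert sp.simplify(Dxf - xDf - f) == 0
```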
The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.
The differential operators also obey the shift theorem.
== Ring of polynomial differential operators ==
=== Ring of univariate polynomial differential operators ===
If R is a ring, let {\displaystyle R\langle D,X\rangle } be the non-commutative polynomial ring over R in the variables D and X, and I the two-sided ideal generated by DX − XD − 1. Then the ring of univariate polynomial differential operators over R is the quotient ring {\displaystyle R\langle D,X\rangle /I}. This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form {\displaystyle X^{a}D^{b}{\text{ mod }}I}. It supports an analogue of Euclidean division of polynomials.
Differential modules over {\displaystyle R[X]} (for the standard derivation) can be identified with modules over {\displaystyle R\langle D,X\rangle /I}.
=== Ring of multivariate polynomial differential operators ===
If R is a ring, let {\displaystyle R\langle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}\rangle } be the non-commutative polynomial ring over R in the variables {\displaystyle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}}, and I the two-sided ideal generated by the elements
{\displaystyle (D_{i}X_{j}-X_{j}D_{i})-\delta _{i,j},\ \ \ D_{i}D_{j}-D_{j}D_{i},\ \ \ X_{i}X_{j}-X_{j}X_{i}}
for all {\displaystyle 1\leq i,j\leq n,} where {\displaystyle \delta } is the Kronecker delta. Then the ring of multivariate polynomial differential operators over R is the quotient ring {\displaystyle R\langle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}\rangle /I}.
This is a non-commutative simple ring.
Every element can be written in a unique way as an R-linear combination of monomials of the form {\displaystyle X_{1}^{a_{1}}\ldots X_{n}^{a_{n}}D_{1}^{b_{1}}\ldots D_{n}^{b_{n}}}.
== Coordinate-independent description ==
In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle Jk(E).
In other words, there exists a linear mapping of vector bundles
{\displaystyle i_{P}:J^{k}(E)\to F}
such that
{\displaystyle P=i_{P}\circ j^{k}}
where jk: Γ(E) → Γ(Jk(E)) is the prolongation that associates to any section of E its k-jet.
This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s in x. In particular this implies that P(s)(x) is determined by the germ of s in x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.
=== Relation to commutative algebra ===
An equivalent, but purely algebraic, description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator if, for any k + 1 smooth functions {\displaystyle f_{0},\ldots ,f_{k}\in C^{\infty }(M)}, we have
{\displaystyle [f_{k},[f_{k-1},[\cdots [f_{0},P]\cdots ]]]=0.}
Here the bracket {\displaystyle [f,P]:\Gamma (E)\to \Gamma (F)} is defined as the commutator
{\displaystyle [f,P](s)=P(f\cdot s)-f\cdot P(s).}
This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
== Variants ==
=== A differential operator of infinite order ===
A differential operator of infinite order is (roughly) a differential operator whose total symbol is a power series instead of a polynomial.
=== Bidifferential operator ===
A differential operator acting on two functions, written {\displaystyle D(g,f)}, is called a bidifferential operator. The notion appears, for instance, in an associative algebra structure on a deformation quantization of a Poisson algebra.
=== Microdifferential operator ===
A microdifferential operator is a type of operator on an open subset of a cotangent bundle, as opposed to an open subset of a manifold. It is obtained by extending the notion of a differential operator to the cotangent bundle.
== See also ==
== Notes ==
== References ==
Freed, Daniel S. (1987), Geometry of Dirac operators, p. 8, CiteSeerX 10.1.1.186.8445
Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 3-540-12104-8, MR 0717035.
Schapira, Pierre (1985). Microdifferential Systems in the Complex Domain. Grundlehren der mathematischen Wissenschaften. Vol. 269. Springer. doi:10.1007/978-3-642-61665-5. ISBN 978-3-642-64904-2.
Wells, R.O. (1973), Differential analysis on complex manifolds, Springer-Verlag, ISBN 0-387-90419-0.
== Further reading ==
Fedosov, Boris; Schulze, Bert-Wolfgang; Tarkhanov, Nikolai (2002). "Analytic index formulas for elliptic corner operators". Annales de l'Institut Fourier. 52 (3): 899–982. doi:10.5802/aif.1906. ISSN 1777-5310.
https://mathoverflow.net/questions/451110/reference-request-inverse-of-differential-operators
== External links ==
Media related to Differential operators at Wikimedia Commons
"Differential operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Differential_operators |
Computer simulation is the running of a mathematical model on a computer, the model being designed to represent the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.
Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program.
Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005;
a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.
Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification.
== Simulation versus model ==
A model consists of the equations used to capture the behavior of a system. By contrast, computer simulation is the actual running of the program that performs the algorithms which solve those equations, often in an approximate manner. Simulation, therefore, is the process of running a model. Thus one would not "build a simulation"; instead, one would "build a model (or a simulator)", and then either "run the model" or equivalently "run a simulation".
== History ==
Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.
== Data preparation ==
The external data requirements of simulations and models vary widely. For some, the input might be just a few numbers (for example, simulation of a waveform of AC electricity on a wire), while others might require terabytes of information (such as weather and climate models).
Input sources also vary widely:
Sensors and other physical devices connected to the model;
Control surfaces used to direct the progress of the simulation in some way;
Current or historical data entered by hand;
Values extracted as a by-product from other processes;
Values output for the purpose by other simulations, models, or processes.
Lastly, the time at which data is available varies:
"invariant" data is often built into the model code, either because the value is truly invariant (e.g., the value of π) or because the designers consider the value to be invariant for all cases of interest;
data can be entered into the simulation when it starts up, for example by reading one or more files, or by reading data from a preprocessor;
data can be provided during the simulation run, for example by a sensor network.
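A minimal sketch of these three tiers (the file name, its format and the sensor stub are hypothetical stand-ins):

```python
import math
import random

# 1. "Invariant" data built into the model code.
PI = math.pi           # truly invariant
DRAG_COEFF = 0.47      # assumed invariant for all cases of interest

# 2. Data read when the simulation starts up, e.g. from a config file
# containing lines such as "wind_speed = 4.2".
def read_startup_config(path="sim_config.txt"):
    params = {}
    with open(path) as f:
        for line in f:
            key, value = line.split("=")
            params[key.strip()] = float(value)
    return params

# 3. Data provided during the run, e.g. from a sensor network
# (stubbed here with a random reading).
def poll_sensor():
    return 20.0 + random.uniform(-0.5, 0.5)
```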
Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula. There are now many others.
Systems that accept data from external sources must be very careful in knowing what they are receiving. While it is easy for computers to read in values from text or binary files, it is much harder to know the accuracy (compared to measurement resolution and precision) of those values. Often the accuracy is expressed as "error bars", a minimum and maximum deviation from the value within which the true value is expected to lie. Because digital computer arithmetic is not exact, rounding and truncation errors compound this error, so it is useful to perform an "error analysis" to confirm that values output by the simulation will still be usefully accurate.
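One crude but serviceable way to carry error bars through a computation is interval arithmetic: propagate the minimum and maximum of each value and inspect the spread of the result. A sketch, assuming a simple product of two measured quantities:

```python
def interval_mul(a, b):
    """Multiply two values given as (min, max) intervals."""
    c = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(c), max(c))

# Measured inputs with error bars: the true value lies somewhere in the range.
length = (9.95, 10.05)   # e.g. metres
width = (4.90, 5.10)

area = interval_mul(length, width)
# The width of the output interval shows how accurate the result really is.
print(f"area lies in [{area[0]:.2f}, {area[1]:.2f}]")
```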
== Types ==
Models used for computer simulations can be classified according to several independent pairs of attributes, including:
Stochastic or deterministic (and as a special case of deterministic, chaotic) – see external links below for examples of stochastic vs. deterministic simulations
Steady-state or dynamic
Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-body mechanical systems (described primarily by DAEs), or dynamics simulation of field problems, e.g. CFD or FEM simulations (described by PDEs).
Local or distributed.
Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes:
Simulations which store their data in regular grids and require only next-neighbor access are called stencil codes. Many CFD applications belong to this category.
If the underlying graph is not a regular grid, the model may belong to the meshfree method class.
For steady-state simulations, equations define the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.
Dynamic simulations attempt to capture changes in a system in response to (usually changing) input signals.
Stochastic models use random number generators to model chance or random events.
A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time they should occur. The simulator reads the queue and triggers new events as each event is processed. It is not important to execute the simulation in real time. It is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or the sequence of events.
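The event queue at the heart of such a simulator is typically a priority queue keyed on simulated time. A minimal sketch (the event names and times are invented):

```python
import heapq

events = []  # priority queue of (simulated_time, description)
heapq.heappush(events, (0.0, "job arrives"))
heapq.heappush(events, (5.0, "job finishes"))

while events:
    clock, what = heapq.heappop(events)  # jump straight to the next event
    print(f"t={clock:4.1f}: {what}")
    if what == "job arrives":
        # processing one event may schedule new events in the future
        heapq.heappush(events, (clock + 2.0, "job starts service"))
```

Note that the clock jumps from event to event; no time is spent simulating the idle intervals in between.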
A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer.
A special type of discrete simulation that does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules that determine how the agent's state is updated from one time-step to the next.
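A toy agent-based sketch, with each agent represented directly and carrying its own internal state and update rule (the foraging rule and all numbers are invented for illustration):

```python
import random

class Agent:
    def __init__(self):
        self.energy = 10  # internal state

    def step(self):
        # Behavioral rule: forage with 50% success, then pay a living cost.
        if random.random() < 0.5:
            self.energy += 3
        self.energy -= 1

agents = [Agent() for _ in range(100)]
for t in range(50):                               # time-stepped update
    for agent in agents:
        agent.step()
    agents = [a for a in agents if a.energy > 0]  # remove dead agents
print(f"{len(agents)} agents survive after 50 steps")
```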
Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (simulation) (HLA) and the Test and Training Enabling Architecture (TENA).
== Visualization ==
Formerly, the output data from a computer simulation was sometimes presented in a table or a matrix showing how data were affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models. However, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving images or motion pictures generated from the data, as displayed by computer-generated imagery (CGI) animation. Although observers could not necessarily read out numbers or quote math formulas, from observing a moving weather chart they might be able to predict events (and "see that rain was headed their way") much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.
Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.
Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.
== In science ==
Generic examples of types of computer simulations in science, which are derived from an underlying mathematical description:
a numerical simulation of differential equations that cannot be solved analytically, theories that involve continuous systems such as phenomena in physical cosmology, fluid dynamics (e.g., climate models, roadway noise models, roadway air dispersion models), continuum mechanics and chemical kinetics fall into this category.
a stochastic simulation, typically used for discrete systems where events occur probabilistically and which cannot be described directly with differential equations (this is a discrete simulation in the above sense). Phenomena in this category include genetic drift, biochemical or gene regulatory networks with small numbers of molecules. (see also: Monte Carlo method).
multiparticle simulation of the response of nanomaterials at multiple scales to an applied force for the purpose of modeling their thermoelastic and thermodynamic properties. Techniques used for such simulations are Molecular dynamics, Molecular mechanics, Monte Carlo method, and Multiscale Green's function.
Specific examples of computer simulations include:
statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
agent based simulation has been used effectively in ecology, where it is often called "individual based modeling" and is used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
time stepped dynamic model. In hydrology there are several such hydrology transport models such as the SWMM and DSSAM Models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
computer simulations have also been used to formally model theories of human cognition and performance, e.g., ACT-R.
computer simulation using molecular modeling for drug discovery.
computer simulation to model viral infection in mammalian cells.
computer simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules.
Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows this complex subject to be simplified to down-to-earth presentations of molecular theory.
Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's Tierra.
In social sciences, computer simulation is an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative and quantitative methods, reviews of the literature (including scholarly), and interviews with experts, and which forms an extension of data triangulation. Of course, similar to any other scientific method, replication is an important part of computational modeling.
== In practical contexts ==
Computer simulations are used in a wide variety of practical contexts, such as:
analysis of air pollutant dispersion using atmospheric dispersion modeling
as a possible humane alternative to live animal testing, in consideration of animal rights.
design of complex systems such as aircraft and also logistics systems.
design of noise barriers to effect roadway noise mitigation
modeling of application performance
flight simulators to train pilots
weather forecasting
forecasting of risk
simulation of electrical circuits
power system simulation
simulation of other computers (emulation)
forecasting of prices on financial markets (for example Adaptive Modeler)
behavior of structures (such as buildings and industrial parts) under stress and other conditions
design of industrial processes, such as chemical processing plants
strategic management and organizational studies
reservoir simulation in petroleum engineering to model the subsurface reservoir
process engineering simulation tools.
robot simulators for the design of robots and robot control algorithms
urban simulation models that simulate dynamic patterns of urban development and responses to urban land use and transportation policies.
traffic engineering to plan or redesign parts of the street network, from single junctions through cities to a national highway network, and for transportation system planning, design and operations. See the more detailed article on Simulation in Transportation.
modeling car crashes to test safety mechanisms in new vehicle models.
crop-soil systems in agriculture, via dedicated software frameworks (e.g. BioMA, OMS3, APSIM)
The reliability and the trust people put in computer simulations depend on the validity of the simulation model; therefore, verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulations is that of reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, this is a special point of attention in stochastic simulations, where the random numbers should in fact come from a seeded pseudo-random number generator. An exception to reproducibility is human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly.
Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build and test a unique prototype. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.
Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time, e.g., in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.
In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.
== Pitfalls ==
Although sometimes ignored in computer simulations, it is very important to perform a sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g., the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures.
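A minimal Monte Carlo sketch of this point: when one input is known only roughly, the spread of the output, not the number of digits printed, reflects the real precision. The distributions and quantities below are invented for illustration:

```python
import random
import statistics

def sample_recoverable_volume():
    area = random.uniform(9.0, 11.0)       # fairly well constrained
    thickness = random.uniform(45.0, 55.0)
    net_ratio = random.uniform(0.1, 0.3)   # known to only ~1 significant figure
    return area * thickness * net_ratio

samples = [sample_recoverable_volume() for _ in range(10_000)]
mean = statistics.mean(samples)
spread = statistics.stdev(samples)
# Reporting the mean to four figures would be misleading given the spread:
print(f"estimate: {mean:.4g} +/- {spread:.2g}")
```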
== See also ==
== References ==
== Further reading ==
Young, Joseph and Findley, Michael. 2014. "Computational Modeling to Study Conflicts and Terrorism." Routledge Handbook of Research Methods in Military Studies edited by Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. pp. 249–260. New York: Routledge.
R. Frigg and S. Hartmann, Models in Science. Entry in the Stanford Encyclopedia of Philosophy.
E. Winsberg Simulation in Science. Entry in the Stanford Encyclopedia of Philosophy.
S. Hartmann, The World as a Process: Simulations in the Natural and Social Sciences, in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer 1996, 77–100.
E. Winsberg, Science in the Age of Computer Simulation. Chicago: University of Chicago Press, 2010.
P. Humphreys, Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press, 2004.
James J. Nutaro (2011). Building Software for Simulation: Theory and Algorithms, with Applications in C++. John Wiley & Sons. ISBN 978-1-118-09945-2.
Desa, W. L. H. M., Kamaruddin, S., & Nawawi, M. K. M. (2012). Modeling of Aircraft Composite Parts Using Simulation. Advanced Materials Research, 591–593, 557–560.
== External links ==
Guide to the Computer Simulation Oral History Archive 2003-2018
In fluid mechanics, the thin-film equation is a partial differential equation that approximately predicts the time evolution of the thickness h of a liquid film that lies on a surface. The equation is derived via lubrication theory, which is based on the assumption that the length-scales in the surface directions are significantly larger than in the direction normal to the surface. In the non-dimensional form of the Navier-Stokes equation the requirement is that terms of order ε² and ε²Re are negligible, where ε ≪ 1 is the aspect ratio and Re is the Reynolds number. This significantly simplifies the governing equations. However, lubrication theory, as the name suggests, is typically derived for flow between two solid surfaces, hence the liquid forms a lubricating layer. The thin-film equation holds when there is a single free surface. With two free surfaces, the flow must be treated as a viscous sheet.
== Definition ==
The basic form of a 2-dimensional thin film equation is
{\displaystyle {\frac {\partial h}{\partial t}}=-\nabla \cdot \mathbf {Q} }
where the fluid flux Q is
{\displaystyle \mathbf {Q} ={\frac {h^{3}}{3\mu }}\left[\nabla \left(\gamma \,\nabla ^{2}h+\rho \,\mathbf {g} \cdot \mathbf {\hat {e}} _{n}\right)+\rho \,\mathbf {g} \cdot \mathbf {\hat {e}} _{i}\right]+{\frac {h^{2}}{2\mu }}\mathbf {A} \,,}
and μ is the viscosity (or dynamic viscosity) of the liquid, h(x,y,t) is the film thickness, γ is the interfacial tension between the liquid and the gas phase above it, ρ is the liquid density and A the surface shear. The surface shear could be caused by flow of the overlying gas or by surface tension gradients. The vectors ê_i represent the unit vectors in the surface co-ordinate directions, the dot product serving to identify the gravity component in each direction; the vector ê_n is the unit vector perpendicular to the surface.
A generalised thin film equation is discussed in SIAM (Society for Industrial and Applied Mathematics)
{\displaystyle {\frac {\partial h}{\partial t}}=-{\frac {1}{3\mu }}\nabla \cdot \left(h^{n}\,\nabla \left(\gamma \,\nabla ^{2}h\right)\right)\,.}
When n < 3 this may represent flow with slip at the solid surface, while n = 1 describes the thickness of a thin bridge between two masses of fluid in a Hele-Shaw cell. The value n = 3 represents surface tension driven flow.
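The surface-tension-driven case n = 3 can be integrated numerically with an explicit finite-difference scheme; because the equation is fourth order in space, explicit stability requires a time step of order Δx⁴, so the steps must be very small. A rough one-dimensional sketch with periodic boundaries (grid size, time step and parameter values are illustrative only):

```python
import numpy as np

N, dx = 128, 0.1
gamma, mu = 1.0, 1.0
dt = 1e-6                 # explicit schemes need dt ~ O(dx**4) for stability
x = np.arange(N) * dx
h = 1.0 + 0.1 * np.cos(2 * np.pi * x / (N * dx))  # perturbed flat film

def ddx(f):
    """Centred first derivative with periodic boundaries."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

for step in range(20_000):
    h_xx = (np.roll(h, -1) - 2 * h + np.roll(h, 1)) / dx**2
    flux = h**3 / (3 * mu) * ddx(gamma * h_xx)  # Q = h^3/(3 mu) d/dx(gamma h_xx)
    h -= dt * ddx(flux)                         # h_t = -dQ/dx

print(f"film thickness range: {h.min():.4f} .. {h.max():.4f}")
```

The surface-tension term smooths the perturbation, so the printed range slowly contracts toward the mean thickness.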
A form frequently investigated with regard to the rupture of thin liquid films involves the addition of a disjoining pressure Π(h) in the equation, as in
{\displaystyle {\frac {\partial h}{\partial t}}=-{\frac {1}{3\mu }}\nabla \cdot \left(h^{3}\,\nabla \left(\gamma \,\nabla ^{2}h-\Pi (h)\right)\right)}
where the function Π(h) is usually very small in value for moderate-to-large film thicknesses h and grows very rapidly as h approaches zero.
== Properties ==
Physical applications, properties and solution behaviour of the thin-film equation are reviewed in Reviews of Modern Physics and SIAM. With the inclusion of phase change at the substrate, a form of the thin film equation for an arbitrary surface is derived in Physics of Fluids. A detailed study of the steady flow of a thin film near a moving contact line is given in another SIAM paper. For a yield-stress fluid, flow driven by gravity and surface tension is investigated in the Journal of Non-Newtonian Fluid Mechanics.
For purely surface tension driven flow it is easy to see that one static (time-independent) solution is a paraboloid of revolution
{\displaystyle h(x,y)=A-B\left(x^{2}+y^{2}\right)\,}
and this is consistent with the experimentally observed spherical-cap shape of a static sessile drop, as a "flat" spherical cap of small height is accurately approximated to second order by a paraboloid. This, however, does not correctly handle the circumference of the droplet, where the value of the function h(x,y) drops to zero and below; a real physical liquid film cannot have a negative thickness. This is one reason why the disjoining pressure term Π(h) is important in the theory.
One possible realistic form of the disjoining pressure term is
{\displaystyle \Pi (h)=B\left[\left({\frac {h_{*}}{h}}\right)^{n}-\left({\frac {h_{*}}{h}}\right)^{m}\right]}
where B, h*, m and n are some parameters. These constants and the surface tension γ can be approximately related to the equilibrium liquid-solid contact angle θ_e through the equation
{\displaystyle B\approx {\frac {(m-1)(n-1)}{h_{*}(n-m)}}\,\gamma \,(1-\cos \theta _{e})\,.}
The thin film equation can be used to simulate several behaviors of liquids, such as the fingering instability in gravity driven flow.
The lack of a second-order time derivative in the thin-film equation is a result of the assumption of small Reynolds number in its derivation, which allows the inertial terms dependent on the fluid density ρ to be neglected. This is somewhat similar to the situation with Washburn's equation, which describes the capillarity-driven flow of a liquid in a thin tube.
== See also ==
Partial differential equation
Lubrication theory
Disjoining pressure
== References ==
== External links ==
Viscous Thin Films - Max Planck Institute
A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in non-physical systems such as the social sciences (such as economics, psychology, sociology, political science). It can also be taught as a subject in its own right.
The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, linguistics, and
philosophy (for example, intensively in analytic philosophy). A model may help to explain a system and to study the effects of different components, and to make predictions about behavior.
== Elements of a mathematical model ==
Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, a traditional mathematical model contains most of the following elements:
Governing equations
Supplementary sub-models
Defining equations
Constitutive equations
Assumptions and constraints
Initial and boundary conditions
Classical constraints and kinematic equations
== Classifications ==
Mathematical models are of different types:
Linear vs. nonlinear. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale, and the results obtained will remain valid for the initial problem when recomposed and rescaled. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
Static vs. dynamic. A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations.
Explicit vs. implicit. If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method. In such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties.
Discrete vs. continuous. A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model; while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, temperatures and stresses in a solid, and electric field that applies continuously over the entire model due to a point charge.
Deterministic vs. probabilistic (stochastic). A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions.
Deductive, inductive, or floating. A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model.
Strategic vs. non-strategic. Models used in game theory are different in a sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as Nash equilibrium. An interesting property of strategic models is that they separate reasoning about rules of the game from reasoning about behavior of the players.
== Construction ==
In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example, economists often apply linear algebra when using input–output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables.
=== A priori information ===
Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take.
Usually, it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, the white-box models are usually considered easier, because if you have used the information correctly, then the model will behave correctly. Often the a priori information comes in forms of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function, but we are still left with several unknown parameters; how rapidly does the medicine amount decay, and what is the initial amount of medicine in blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.
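For the medicine example, the a priori knowledge fixes the functional form (an exponential decay) and only its parameters remain to be estimated from data. A sketch using least-squares fitting; the measurements here are synthetic stand-ins for real blood samples:

```python
import numpy as np
from scipy.optimize import curve_fit

# A priori knowledge: concentration decays exponentially, c(t) = c0 * exp(-k t).
def model(t, c0, k):
    return c0 * np.exp(-k * t)

# Synthetic noisy measurements (time in hours, concentration in arbitrary units).
t_data = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
c_data = np.array([9.1, 8.2, 6.9, 4.5, 2.1])

# Estimate the unknown parameters c0 (initial amount) and k (decay rate).
(c0_hat, k_hat), _ = curve_fit(model, t_data, c_data, p0=[10.0, 0.2])
print(f"estimated initial amount {c0_hat:.2f}, decay rate {k_hat:.3f}")
```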
In black-box models, one tries to estimate both the functional form of relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information we would try to use functions as general as possible to cover all different models. An often-used approach for black-box models is neural networks, which usually do not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification, can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque.
==== Subjective information ====
Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data.
An example of when such approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.
=== Complexity ===
In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.
For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting which means that a model is fitted to data too much and it has lost its ability to generalize to new events that were not observed before.
=== Training, tuning, and fitting ===
Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting.
=== Evaluation and assessment ===
A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation.
==== Prediction of empirical data ====
Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics.
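A minimal sketch of this split, fitting a model on the training subset and checking it against held-out verification data (all numbers are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)  # synthetic observations

# Two disjoint subsets: even indices for training, odd for verification.
x_train, y_train = x[::2], y[::2]
x_verify, y_verify = x[1::2], y[1::2]

coeffs = np.polyfit(x_train, y_train, deg=1)       # fit parameters on training data
y_pred = np.polyval(coeffs, x_verify)              # predict the held-out points
rmse = np.sqrt(np.mean((y_pred - y_verify) ** 2))  # verification error
print(f"fitted slope {coeffs[0]:.2f}, verification RMSE {rmse:.2f}")
```

An accurate model yields a verification error comparable to the noise level even though those points never influenced the fit.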
Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form.
==== Scope of the model ====
Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation.
As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics.
==== Philosophical considerations ====
Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied.
An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. It should also be noted that while mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to any mathematical logic, but is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation.
== Significance in the natural sciences ==
Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits theory of relativity and quantum mechanics must be used.
It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and thus modeled approximately on a computer: a model that is computationally feasible to compute is made from the basic laws or from approximate models made from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.
Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean.
== Some applications ==
Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.
A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types; real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables.
== Examples ==
One of the popular examples in computer science is the mathematical modeling of various machines; an example is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but, due to the deterministic nature of a DFA, is implementable in hardware and software for solving various specific problems. For example, the following is a DFA M with a binary alphabet, which requires that the input contains an even number of 0s:
{\displaystyle M=(Q,\Sigma ,\delta ,q_{0},F)}
where
{\displaystyle Q=\{S_{1},S_{2}\},\quad \Sigma =\{0,1\},\quad q_{0}=S_{1},\quad F=\{S_{1}\},}
and δ is defined by the following state-transition table:
        0     1
 S1    S2    S1
 S2    S1    S2
The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted.
The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star; e.g., 1* denotes any non-negative number (possibly zero) of symbols "1".
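Because the model is fully explicit, it translates directly into code. A minimal sketch of M:

```python
# Transition function delta of the DFA M: delta[state][symbol] -> next state
delta = {
    "S1": {"0": "S2", "1": "S1"},
    "S2": {"0": "S1", "1": "S2"},
}

def accepts(input_string, start="S1", accepting=("S1",)):
    state = start
    for symbol in input_string:
        state = delta[state][symbol]
    return state in accepting

print(accepts("1001"))  # True: two 0s, an even number
print(accepts("10"))    # False: one 0, an odd number
```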
Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel.
Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning.
Population growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and widely used population growth model is the logistic function, and its extensions.
Model of a particle in a potential-field. In this model we consider a particle as being a point of mass which describes a trajectory in space which is modeled by a function giving its coordinates in space as a function of time. The potential field is given by a function
V : ℝ³ → ℝ and the trajectory, that is a function r : ℝ → ℝ³, is the solution of the differential equation:
{\displaystyle -{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}m={\frac {\partial V[\mathbf {r} (t)]}{\partial x}}\mathbf {\hat {x}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial y}}\mathbf {\hat {y}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial z}}\mathbf {\hat {z}} \,,}
that can be written also as
{\displaystyle m{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}=-\nabla V[\mathbf {r} (t)]\,.}
Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion.
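A numerical sketch of this model for one concrete choice of potential, the harmonic potential V(r) = ½ k |r|², integrated with the velocity-Verlet scheme (the potential and all parameter values are illustrative assumptions):

```python
import numpy as np

k, m, dt = 1.0, 1.0, 0.01

def grad_V(r):
    return k * r  # gradient of V(r) = 0.5 * k * |r|**2

r = np.array([1.0, 0.0, 0.0])  # initial position
v = np.array([0.0, 1.0, 0.0])  # initial velocity

# Velocity-Verlet integration of m * d2r/dt2 = -grad V(r).
for step in range(1000):
    a = -grad_V(r) / m
    r = r + v * dt + 0.5 * a * dt**2
    a_new = -grad_V(r) / m
    v = v + 0.5 * (a + a_new) * dt

print(f"position at t = 10: {np.round(r, 3)}")  # orbits the origin, as expected
```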
Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of
n commodities labeled 1, 2, …, n, each with a market price p1, p2, …, pn.
The consumer is assumed to have an ordinal utility function U (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x1, x2, …, xn consumed. The model further assumes that the consumer has a budget M which is used to purchase a vector x1, x2, …, xn in such a way as to maximize U(x1, x2, …, xn).
The problem of rational behavior in this model then becomes a mathematical optimization problem, that is:
{\displaystyle \max \,U(x_{1},x_{2},\ldots ,x_{n})}
subject to:
{\displaystyle \sum _{i=1}^{n}p_{i}x_{i}\leq M\,,\qquad x_{i}\geq 0\ \ {\text{for all }}i=1,2,\dots ,n\,.}
This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria.
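For a concrete utility function the problem becomes a standard constrained optimization. A sketch using a Cobb-Douglas utility U = x1^0.6 · x2^0.4 with two goods (the utility function, prices and budget are illustrative assumptions, not part of the general model):

```python
import numpy as np
from scipy.optimize import minimize

p = np.array([2.0, 3.0])  # market prices
M = 100.0                 # budget

def neg_utility(x):
    return -(x[0] ** 0.6) * (x[1] ** 0.4)  # minimize -U to maximize U

constraints = [{"type": "ineq", "fun": lambda x: M - p @ x}]  # sum(p_i x_i) <= M
bounds = [(0, None), (0, None)]                               # x_i >= 0

result = minimize(neg_utility, x0=[1.0, 1.0], bounds=bounds,
                  constraints=constraints)
print(f"optimal bundle: {np.round(result.x, 2)}")  # analytic answer: (30, 13.33)
```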
Neighbour-sensing model is a model that explains the mushroom formation from the initially chaotic fungal network.
In computer science, mathematical models may be used to simulate computer networks.
In mechanics, mathematical models may be used to analyze the movement of a rocket model.
== See also ==
== References ==
== Further reading ==
=== Books ===
Aris, Rutherford [1978] (1994). Mathematical Modelling Techniques, New York: Dover. ISBN 0-486-68131-9
Bender, E.A. [1978] (2000). An Introduction to Mathematical Modeling, New York: Dover. ISBN 0-486-41180-X
Gary Chartrand (1977) Graphs as Mathematical Models, Prindle, Webber & Schmidt ISBN 0871502364
Dubois, G. (2018) "Modeling and Simulation", Taylor & Francis, CRC Press.
Gershenfeld, N. (1998) The Nature of Mathematical Modeling, Cambridge University Press ISBN 0-521-57095-6 .
Lin, C.C. & Segel, L.A. (1988). Mathematics Applied to Deterministic Problems in the Natural Sciences, Philadelphia: SIAM. ISBN 0-89871-229-7
Models as Mediators: Perspectives on Natural and Social Science edited by Mary S. Morgan and Margaret Morrison, 1999.
Mary S. Morgan The World in the Model: How Economists Work and Think, 2012.
=== Specific applications ===
Papadimitriou, Fivos. (2010). Mathematical Modelling of Spatial-Ecological Complex Systems: an Evaluation. Geography, Environment, Sustainability 1(3), 67–80. doi:10.24057/2071-9388-2010-3-1-67-80
Peierls, R. (1980). "Model-making in physics". Contemporary Physics. 21: 3–17. Bibcode:1980ConPh..21....3P. doi:10.1080/00107518008210938.
An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White.
== External links ==
General reference
Patrone, F. Introduction to modeling via differential equations, with critical remarks.
Plus teacher and student package: Mathematical Modelling. Brings together all articles on mathematical modeling from Plus Magazine, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge.
Philosophical
Frigg, R. and S. Hartmann, Models in Science, in: The Stanford Encyclopedia of Philosophy, (Spring 2006 Edition)
Griffiths, E. C. (2010) What is a model?
In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behavior of a physical system as a set of mathematical functions in terms of dynamic variables. These variables are usually spatial coordinates and time, but may include momentum components. The most general choice is generalized coordinates, which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations of motion are the solutions of the differential equations describing that dynamics.
== Types ==
There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since the momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term dynamics refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations.
However, kinematics is simpler. It concerns only variables derived from the positions of objects and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the SUVAT equations, arising from the definitions of kinematic quantities: displacement (s), initial velocity (u), final velocity (v), acceleration (a), and time (t).
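For reference, with displacement s, initial velocity u, final velocity v, constant acceleration a and time t, these take the familiar forms:
{\displaystyle v=u+at\,,\qquad s=ut+{\tfrac {1}{2}}at^{2}\,,\qquad v^{2}=u^{2}+2as\,,\qquad s={\tfrac {1}{2}}(u+v)t\,.}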
A differential equation of motion, usually identified as some physical law (for example, F = ma), and applying definitions of physical quantities, is used to set up an equation to solve a kinematics problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a set of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants.
Stated formally, in general, an equation of motion M is a function of the position r of the object, its velocity (the first time derivative of r, v = dr/dt), and its acceleration (the second derivative of r, a = d²r/dt²), and time t. Euclidean vectors in 3D are denoted throughout in bold. This is equivalent to saying an equation of motion in r is a second-order ordinary differential equation (ODE) in r,
{\displaystyle M\left[\mathbf {r} (t),\mathbf {\dot {r}} (t),\mathbf {\ddot {r}} (t),t\right]=0\,,}
where t is time, and each overdot denotes one time derivative. The initial conditions are given by the constant values at t = 0,
{\displaystyle \mathbf {r} (0)\,,\quad \mathbf {\dot {r}} (0)\,.}
The solution r(t) to the equation of motion, with specified initial values, describes the system for all times t after t = 0. Other dynamical variables like the momentum p of the object, or quantities derived from r and p like angular momentum, can be used in place of r as the quantity to solve for from some equation of motion, although the position of the object at time t is by far the most sought-after quantity.
Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly so a variety of approximations must be used. The solutions to nonlinear equations may show chaotic behavior depending on how sensitive the system is to the initial conditions.
== History ==
Kinematics, dynamics and the mathematical models of the universe developed incrementally over three millennia, thanks to many thinkers, only some of whose names we know. In antiquity, priests, astrologers and astronomers predicted solar and lunar eclipses, the solstices and the equinoxes of the Sun and the period of the Moon. But they had nothing other than a set of algorithms to guide them. Equations of motion were not written down for another thousand years.
Medieval scholars in the thirteenth century — for example at the relatively new universities in Oxford and Paris — drew on ancient mathematicians (Euclid and Archimedes) and philosophers (Aristotle) to develop a new body of knowledge, now called physics.
At Oxford, Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, who were of similar stature to the intellectuals at the University of Paris. Thomas Bradwardine extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved that the quantity of motion of a body undergoing a uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion.
For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation. Only Domingo de Soto, a Spanish theologian, in his commentary on Aristotle's Physics published in 1545, after defining "uniform difform" motion (which is uniformly accelerated motion; the word velocity was not used) as proportional to time, declared correctly that this kind of motion was identifiable with freely falling bodies and projectiles, without his proving these propositions or suggesting a formula relating time, velocity and distance. De Soto's comments are remarkably correct regarding the definitions of acceleration (acceleration is a rate of change of velocity in time) and the observation that acceleration would be negative during ascent.
Discourses such as these spread throughout Europe, shaping the work of Galileo Galilei and others, and helped in laying the foundation of kinematics. Galileo deduced the equation s = 1/2 gt² in his work geometrically, using the Merton rule, now known as a special case of one of the equations of kinematics.
Galileo was the first to show that the path of a projectile is a parabola. Galileo had an understanding of centrifugal force and gave a correct definition of momentum. This emphasis of momentum as a fundamental quantity in dynamics is of prime importance. He measured momentum by the product of velocity and weight; mass is a later concept, developed by Huygens and Newton. In the swinging of a simple pendulum, Galileo says in Discourses that "every momentum acquired in the descent along an arc is equal to that which causes the same moving body to ascend through the same arc." His analysis on projectiles indicates that Galileo had grasped the first law and the second law of motion. He did not generalize and make them applicable to bodies not subject to the earth's gravitation. That step was Newton's contribution.
The term "inertia" was used by Kepler who applied it to bodies at rest. (The first law of motion is now often called the law of inertia.)
Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope.
Galileo was also interested in the laws of the pendulum, which he first observed as a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was caught by the motion of the great lamp, lighted and left swinging, and he timed it against his own pulse. The period appeared to be the same even after the motion had greatly diminished; Galileo had discovered the isochronism of the pendulum.
More careful experiments carried out by him later, and described in his Discourses, revealed that the period of oscillation varies with the square root of the length of the pendulum but is independent of its mass.
Thus we arrive at René Descartes, Isaac Newton, Gottfried Leibniz, et al.; and the evolved forms of the equations of motion that begin to be recognized as the modern ones.
The equations of motion later also appeared in electrodynamics: when describing the motion of charged particles in electric and magnetic fields, the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and a magnetic field. With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light and the curvature of spacetime. In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations.
However, the equations of quantum mechanics can also be considered "equations of motion", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields.
== Kinematic equations for one particle ==
=== Kinematic quantities ===
From the instantaneous position r = r(t), instantaneous meaning at an instant value of time t, the instantaneous velocity v = v(t) and acceleration a = a(t) have the general, coordinate-independent definitions;
{\displaystyle \mathbf {v} ={\frac {d\mathbf {r} }{dt}}\,,\quad \mathbf {a} ={\frac {d\mathbf {v} }{dt}}={\frac {d^{2}\mathbf {r} }{dt^{2}}}}
Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature.
The rotational analogues are the "angular vector" (angle the particle rotates about some axis) θ = θ(t), angular velocity ω = ω(t), and angular acceleration α = α(t):
{\displaystyle {\boldsymbol {\theta }}=\theta {\hat {\mathbf {n} }}\,,\quad {\boldsymbol {\omega }}={\frac {d{\boldsymbol {\theta }}}{dt}}\,,\quad {\boldsymbol {\alpha }}={\frac {d{\boldsymbol {\omega }}}{dt}}\,,}
where n̂ is a unit vector in the direction of the axis of rotation, and θ is the angle the object turns through about the axis.
The following relation holds for a point-like particle, orbiting about some axis with angular velocity ω:
{\displaystyle \mathbf {v} ={\boldsymbol {\omega }}\times \mathbf {r} }
where r is the position vector of the particle (radial from the rotation axis) and v the tangential velocity of the particle. For a rotating continuum rigid body, these relations hold for each point in the rigid body.
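As a quick numeric illustration of this relation (a minimal sketch of our own; the values are arbitrary), the cross product can be checked directly:

```python
# A quick numeric check of v = ω × r for rotation about the z-axis.
import numpy as np

omega = np.array([0.0, 0.0, 2.0])   # angular velocity: 2 rad/s about z
r = np.array([1.0, 0.0, 0.0])       # particle 1 m from the rotation axis
v = np.cross(omega, r)
print(v)                            # [0. 2. 0.] : tangential, with speed ω r
```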
=== Uniform acceleration ===
The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below.
==== Constant translational acceleration in a straight line ====
These equations apply to a particle moving in a straight line, in three dimensions, with constant acceleration. Since the position, velocity, and acceleration are collinear (parallel, and lie on the same line), only the magnitudes of these vectors are necessary; because the motion is along a straight line, the problem effectively reduces from three dimensions to one.
{\displaystyle {\begin{aligned}v&=v_{0}+at&[1]\\r&=r_{0}+v_{0}t+{\tfrac {1}{2}}{a}t^{2}&[2]\\r&=r_{0}+{\tfrac {1}{2}}\left(v+v_{0}\right)t&[3]\\v^{2}&=v_{0}^{2}+2a\left(r-r_{0}\right)&[4]\\r&=r_{0}+vt-{\tfrac {1}{2}}{a}t^{2}&[5]\\\end{aligned}}}
where:
r0 is the particle's initial position
r is the particle's final position
v0 is the particle's initial velocity
v is the particle's final velocity
a is the particle's acceleration
t is the time interval
Here a is constant acceleration, or in the case of bodies moving under the influence of gravity, the standard gravity g is used. Note that each of the equations contains four of the five variables, so in this situation it is sufficient to know three out of the five variables to calculate the remaining two.
In some programs, such as the IGCSE Physics and IB DP Physics programs (international programs but especially popular in the UK and Europe), the same formulae would be written with a different set of preferred variables. There u replaces v0 and s replaces r - r0. They are often referred to as the SUVAT equations, where "SUVAT" is an acronym from the variables: s = displacement, u = initial velocity, v = final velocity, a = acceleration, t = time. In these variables, the equations of motion would be written
{\displaystyle {\begin{aligned}v&=u+at&[1]\\s&=ut+{\tfrac {1}{2}}at^{2}&[2]\\s&={\tfrac {1}{2}}(u+v)t&[3]\\v^{2}&=u^{2}+2as&[4]\\s&=vt-{\tfrac {1}{2}}at^{2}&[5]\\\end{aligned}}}
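Since each equation omits one of the five variables, a small routine can recover the remaining quantities from any suitable triple. The following Python sketch (the function name and choice of inputs are ours, purely illustrative) applies equations [1] and [2] and cross-checks the result against [4]:

```python
# A minimal sketch of the SUVAT relations for constant acceleration.
# Given any three of (s, u, v, a, t), the rest follow; here we take u, a, t.

def suvat_from_uat(u: float, a: float, t: float) -> dict:
    """Return displacement s and final velocity v from u, a, t ([1] and [2])."""
    v = u + a * t                      # [1] v = u + at
    s = u * t + 0.5 * a * t ** 2       # [2] s = ut + at^2/2
    return {"s": s, "u": u, "v": v, "a": a, "t": t}

if __name__ == "__main__":
    state = suvat_from_uat(u=0.0, a=9.81, t=2.0)  # free fall from rest for 2 s
    # Cross-check with [4]: v^2 should equal u^2 + 2as
    assert abs(state["v"] ** 2 - (0.0 + 2 * 9.81 * state["s"])) < 1e-9
    print(state)   # s ≈ 19.62 m, v ≈ 19.62 m/s
```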
==== Constant linear acceleration in any direction ====
The initial position, initial velocity, and acceleration vectors need not be collinear, and the equations of motion take an almost identical form. The only difference is that the square magnitudes of the velocities require the dot product. The derivations are essentially the same as in the collinear case,
{\displaystyle {\begin{aligned}\mathbf {v} &=\mathbf {a} t+\mathbf {v} _{0}&[1]\\\mathbf {r} &=\mathbf {r} _{0}+\mathbf {v} _{0}t+{\tfrac {1}{2}}\mathbf {a} t^{2}&[2]\\\mathbf {r} &=\mathbf {r} _{0}+{\tfrac {1}{2}}\left(\mathbf {v} +\mathbf {v} _{0}\right)t&[3]\\\mathbf {v} ^{2}&=\mathbf {v} _{0}^{2}+2\mathbf {a} \cdot \left(\mathbf {r} -\mathbf {r} _{0}\right)&[4]\\\mathbf {r} &=\mathbf {r} _{0}+\mathbf {v} t-{\tfrac {1}{2}}\mathbf {a} t^{2}&[5]\\\end{aligned}}}
although the Torricelli equation [4] can be derived using the distributive property of the dot product as follows:
{\displaystyle v^{2}=\mathbf {v} \cdot \mathbf {v} =(\mathbf {v} _{0}+\mathbf {a} t)\cdot (\mathbf {v} _{0}+\mathbf {a} t)=v_{0}^{2}+2t(\mathbf {a} \cdot \mathbf {v} _{0})+a^{2}t^{2}}
{\displaystyle (2\mathbf {a} )\cdot (\mathbf {r} -\mathbf {r} _{0})=(2\mathbf {a} )\cdot \left(\mathbf {v} _{0}t+{\tfrac {1}{2}}\mathbf {a} t^{2}\right)=2t(\mathbf {a} \cdot \mathbf {v} _{0})+a^{2}t^{2}=v^{2}-v_{0}^{2}}
{\displaystyle \therefore v^{2}=v_{0}^{2}+2(\mathbf {a} \cdot (\mathbf {r} -\mathbf {r} _{0}))}
==== Applications ====
Elementary and frequent examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given initial velocity u, one can calculate how high the ball will travel before it begins to fall. The acceleration is the local acceleration of gravity g. While these quantities appear to be scalars, the direction of displacement, speed and acceleration is important: they can be treated as unidirectional vectors. Choosing s to measure up from the ground, the acceleration a must in fact be −g, since the force of gravity, and therefore also the acceleration of the ball due to it, acts downwards.
At the highest point, the ball will be at rest: therefore v = 0. Using equation [4] in the set above, we have:
{\displaystyle s={\frac {v^{2}-u^{2}}{-2g}}.}
Substituting and cancelling minus signs gives:
{\displaystyle s={\frac {u^{2}}{2g}}.}
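The same arithmetic in code form; the function name and the sample throw speed are illustrative only:

```python
# Worked example from the text: maximum height of a ball thrown straight up.
# At the top v = 0, so equation [4] gives s = u^2 / (2g).

def max_height(u: float, g: float = 9.81) -> float:
    """Height gained before the ball momentarily comes to rest."""
    return u ** 2 / (2.0 * g)

print(max_height(10.0))  # a 10 m/s throw rises about 5.1 m
```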
==== Constant circular acceleration ====
The analogues of the above equations can be written for rotation. Again these axial vectors must all be parallel to the axis of rotation, so only the magnitudes of the vectors are necessary,
{\displaystyle {\begin{aligned}\omega &=\omega _{0}+\alpha t\\\theta &=\theta _{0}+\omega _{0}t+{\tfrac {1}{2}}\alpha t^{2}\\\theta &=\theta _{0}+{\tfrac {1}{2}}(\omega _{0}+\omega )t\\\omega ^{2}&=\omega _{0}^{2}+2\alpha (\theta -\theta _{0})\\\theta &=\theta _{0}+\omega t-{\tfrac {1}{2}}\alpha t^{2}\\\end{aligned}}}
where α is the constant angular acceleration, ω is the angular velocity, ω0 is the initial angular velocity, θ is the angle turned through (angular displacement), θ0 is the initial angle, and t is the time taken to rotate from the initial state to the final state.
=== General planar motion ===
These are the kinematic equations for a particle traversing a path in a plane, described by position r = r(t). They are simply the time derivatives of the position vector in plane polar coordinates using the definitions of physical quantities above for angular velocity ω and angular acceleration α. These are instantaneous quantities which change with time.
The position of the particle is
{\displaystyle \mathbf {r} =\mathbf {r} \left(r(t),\theta (t)\right)=r\mathbf {\hat {e}} _{r}}
where êr and êθ are the polar unit vectors. Differentiating with respect to time gives the velocity
{\displaystyle \mathbf {v} =\mathbf {\hat {e}} _{r}{\frac {dr}{dt}}+r\omega \mathbf {\hat {e}} _{\theta }}
with radial component dr/dt and an additional component rω due to the rotation. Differentiating with respect to time again obtains the acceleration
{\displaystyle \mathbf {a} =\left({\frac {d^{2}r}{dt^{2}}}-r\omega ^{2}\right)\mathbf {\hat {e}} _{r}+\left(r\alpha +2\omega {\frac {dr}{dt}}\right)\mathbf {\hat {e}} _{\theta }}
which breaks into the radial acceleration d²r/dt², centripetal acceleration −rω², Coriolis acceleration 2ω dr/dt, and angular acceleration rα.
Special cases of motion described by these equations are summarized qualitatively in the table below. Two have already been discussed above, in the cases that either the radial components or the angular components are zero, and the non-zero component of motion describes uniform acceleration.
=== General 3D motions ===
In 3D space, the equations in spherical coordinates (r, θ, φ) with corresponding unit vectors êr, êθ and êφ, the position, velocity, and acceleration generalize respectively to
{\displaystyle {\begin{aligned}\mathbf {r} &=\mathbf {r} \left(t\right)=r\mathbf {\hat {e}} _{r}\\\mathbf {v} &=v\mathbf {\hat {e}} _{r}+r\,{\frac {d\theta }{dt}}\mathbf {\hat {e}} _{\theta }+r\,{\frac {d\varphi }{dt}}\,\sin \theta \mathbf {\hat {e}} _{\varphi }\\\mathbf {a} &=\left(a-r\left({\frac {d\theta }{dt}}\right)^{2}-r\left({\frac {d\varphi }{dt}}\right)^{2}\sin ^{2}\theta \right)\mathbf {\hat {e}} _{r}\\&+\left(r{\frac {d^{2}\theta }{dt^{2}}}+2v{\frac {d\theta }{dt}}-r\left({\frac {d\varphi }{dt}}\right)^{2}\sin \theta \cos \theta \right)\mathbf {\hat {e}} _{\theta }\\&+\left(r{\frac {d^{2}\varphi }{dt^{2}}}\,\sin \theta +2v\,{\frac {d\varphi }{dt}}\,\sin \theta +2r\,{\frac {d\theta }{dt}}\,{\frac {d\varphi }{dt}}\,\cos \theta \right)\mathbf {\hat {e}} _{\varphi }\end{aligned}}\,\!}
In the case of a constant φ this reduces to the planar equations above.
== Dynamic equations of motion ==
=== Newtonian mechanics ===
The first general equation of motion developed was Newton's second law of motion. In its most general form it states that the rate of change of momentum p = p(t) = mv(t) of an object equals the force F = F(x(t), v(t), t) acting on it:
{\displaystyle \mathbf {F} ={\frac {d\mathbf {p} }{dt}}}
The force in the equation is the net force acting on the object, not any force the object itself exerts on other bodies. Replacing momentum by mass times velocity, the law is also written more famously as
{\displaystyle \mathbf {F} =m\mathbf {a} }
since m is a constant in Newtonian mechanics.
Newton's second law applies to point-like particles, and to all points in a rigid body. It also applies to each point in a mass continuum, like deformable solids or fluids, but the motion of the system must be accounted for; see material derivative. If the mass is not constant, it is not sufficient to use the product rule for the time derivative on the mass and velocity, and Newton's second law requires some modification consistent with conservation of momentum; see variable-mass system.
It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy. Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system. In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems can become dramatically complex.
The momentum form is preferable since it is readily generalized to more complex systems, such as special and general relativity (see four-momentum). It can also be used together with momentum conservation. However, Newton's laws are not more fundamental than momentum conservation: they are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies that the momentum is not constant. Momentum conservation always holds for an isolated system, one not subject to resultant external forces.
For a number of particles (see many body problem), the equation of motion for one particle i influenced by other particles is
{\displaystyle {\frac {d\mathbf {p} _{i}}{dt}}=\mathbf {F} _{E}+\sum _{i\neq j}\mathbf {F} _{ij}}
where pi is the momentum of particle i, Fij is the force on particle i by particle j, and FE is the resultant external force due to any agent not part of the system. Particle i does not exert a force on itself.
Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton–Euler equations combine the forces and torques acting on a rigid body into a single equation.
Newton's second law for rotation takes a similar form to the translational case,
{\displaystyle {\boldsymbol {\tau }}={\frac {d\mathbf {L} }{dt}}\,,}
by equating the torque acting on the body to the rate of change of its angular momentum L. Analogous to mass times acceleration, the moment of inertia tensor I depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity,
{\displaystyle {\boldsymbol {\tau }}=\mathbf {I} {\boldsymbol {\alpha }}.}
Again, these equations apply to point like particles, or at each point of a rigid body.
Likewise, for a number of particles, the equation of motion for one particle i is
{\displaystyle {\frac {d\mathbf {L} _{i}}{dt}}={\boldsymbol {\tau }}_{E}+\sum _{i\neq j}{\boldsymbol {\tau }}_{ij}\,,}
where Li is the angular momentum of particle i, τij the torque on particle i by particle j, and τE is the resultant external torque (due to any agent not part of the system). Particle i does not exert a torque on itself.
=== Applications ===
Some examples of Newton's law include describing the motion of a simple pendulum,
{\displaystyle -mg\sin \theta =m{\frac {d^{2}(\ell \theta )}{dt^{2}}}\quad \Rightarrow \quad {\frac {d^{2}\theta }{dt^{2}}}=-{\frac {g}{\ell }}\sin \theta \,,}
and a damped, sinusoidally driven harmonic oscillator,
{\displaystyle F_{0}\sin(\omega t)=m\left({\frac {d^{2}x}{dt^{2}}}+2\zeta \omega _{0}{\frac {dx}{dt}}+\omega _{0}^{2}x\right)\,.}
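Neither of these equations has a simple closed-form solution in general, so they are often integrated numerically. As a sketch (our own construction, with illustrative parameter values), the first of them, the pendulum equation, can be stepped forward with a classical fourth-order Runge–Kutta scheme:

```python
# A sketch integrating the pendulum equation d^2θ/dt^2 = -(g/l) sin(θ)
# with a fixed-step RK4 scheme; all parameter values are illustrative.
import math

def pendulum_rk4(theta0: float, omega0: float, g: float = 9.81,
                 length: float = 1.0, dt: float = 1e-3, steps: int = 5000):
    def deriv(theta, omega):
        return omega, -(g / length) * math.sin(theta)

    theta, omega = theta0, omega0
    for _ in range(steps):
        k1t, k1w = deriv(theta, omega)
        k2t, k2w = deriv(theta + 0.5 * dt * k1t, omega + 0.5 * dt * k1w)
        k3t, k3w = deriv(theta + 0.5 * dt * k2t, omega + 0.5 * dt * k2w)
        k4t, k4w = deriv(theta + dt * k3t, omega + dt * k3w)
        theta += dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6
        omega += dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6
    return theta, omega

print(pendulum_rk4(theta0=0.2, omega0=0.0))  # small swing, near-isochronous
```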
For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For example, consider a ball of mass m thrown in the air, in air currents (such as wind) described by a vector field of resistive forces R = R(r, t):
{\displaystyle -{\frac {GmM}{|\mathbf {r} |^{2}}}\mathbf {\hat {e}} _{r}+\mathbf {R} =m{\frac {d^{2}\mathbf {r} }{dt^{2}}}+0\quad \Rightarrow \quad {\frac {d^{2}\mathbf {r} }{dt^{2}}}=-{\frac {GM}{|\mathbf {r} |^{2}}}\mathbf {\hat {e}} _{r}+\mathbf {A} }
where G is the gravitational constant, M the mass of the Earth, and A = R/m is the acceleration of the projectile due to the air currents at position r and time t.
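A rough numerical sketch of this situation (our own; the wind field A and all parameter values are invented for illustration, and near the ground the gravity term is approximated as a constant g downward):

```python
# A sketch of d^2r/dt^2 = -(GM/|r|^2) e_r + A(r, t), integrated with Euler
# steps in 2D. Near the surface the gravity term is ~constant g downward;
# the field A here (a steady horizontal "wind" acceleration) is made up.
import numpy as np

def wind_acceleration(r: np.ndarray, t: float) -> np.ndarray:
    return np.array([0.5, 0.0])        # hypothetical A = R/m, in m/s^2

def fly(r0, v0, dt=1e-3, steps=2500, g=9.81):
    r, v = np.asarray(r0, float), np.asarray(v0, float)
    for i in range(steps):
        a = np.array([0.0, -g]) + wind_acceleration(r, i * dt)
        v += a * dt
        r += v * dt
        if r[1] < 0:                   # stop when the ball lands
            break
    return r

print(fly(r0=[0.0, 0.0], v0=[5.0, 10.0]))  # drifts downwind relative to vacuum
```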
The classical N-body problem for N particles each interacting with each other due to gravity is a set of N nonlinear coupled second order ODEs,
{\displaystyle {\frac {d^{2}\mathbf {r} _{i}}{dt^{2}}}=G\sum _{i\neq j}{\frac {m_{j}}{|\mathbf {r} _{j}-\mathbf {r} _{i}|^{3}}}(\mathbf {r} _{j}-\mathbf {r} _{i})}
where i = 1, 2, ..., N labels the quantities (mass, position, etc.) associated with each particle.
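These coupled ODEs have no general closed-form solution for N > 2 and are usually integrated numerically. A minimal sketch (our own: a direct O(N²) pairwise sum with a leapfrog step; the two-body values are arbitrary):

```python
# A sketch of the N-body accelerations a_i = G Σ_{j≠i} m_j (r_j - r_i)/|r_j - r_i|^3.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(r: np.ndarray, m: np.ndarray) -> np.ndarray:
    """r: (N,3) positions, m: (N,) masses -> (N,3) accelerations."""
    a = np.zeros_like(r)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return a

def step(r, v, m, dt):
    """One leapfrog step, a standard choice for these second-order ODEs."""
    v = v + 0.5 * dt * accelerations(r, m)
    r = r + dt * v
    v = v + 0.5 * dt * accelerations(r, m)
    return r, v

# Example: two equal masses under mutual gravity (illustrative values).
m = np.array([1e24, 1e24])
r = np.array([[0.0, 0.0, 0.0], [1e7, 0.0, 0.0]])
v = np.array([[0.0, -40.0, 0.0], [0.0, 40.0, 0.0]])
for _ in range(100):
    r, v = step(r, v, m, dt=10.0)
print(r[1] - r[0])   # separation evolves under mutual gravity
```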
== Analytical mechanics ==
Using all three coordinates of 3D space is unnecessary if there are constraints on the system. If the system has N degrees of freedom, then one can use a set of N generalized coordinates q(t) = [q1(t), q2(t) ... qN(t)] to define the configuration of the system. They can be in the form of arc lengths or angles. They considerably simplify the description of motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. The time derivatives of the generalized coordinates are the generalized velocities
{\displaystyle \mathbf {\dot {q}} ={\frac {d\mathbf {q} }{dt}}\,.}
The Euler–Lagrange equations are
{\displaystyle {\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\right)={\frac {\partial L}{\partial \mathbf {q} }}\,,}
where the Lagrangian is a function of the configuration q and its time rate of change dq/dt (and possibly time t)
{\displaystyle L=L\left[\mathbf {q} (t),\mathbf {\dot {q}} (t),t\right]\,.}
Setting up the Lagrangian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of N coupled second-order ODEs in the coordinates is obtained.
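This procedure can be mechanized with a computer algebra system. A sketch using SymPy for a single coordinate, taking as an assumed example the simple harmonic oscillator Lagrangian L = m q̇²/2 − k q²/2:

```python
# A sketch of the Euler–Lagrange procedure with SymPy for one coordinate q(t).
import sympy as sp

t, m, k = sp.symbols("t m k", positive=True)
q = sp.Function("q")(t)
L = sp.Rational(1, 2) * m * q.diff(t) ** 2 - sp.Rational(1, 2) * k * q ** 2

# Euler–Lagrange: d/dt (∂L/∂qdot) - ∂L/∂q = 0
eom = sp.diff(L.diff(q.diff(t)), t) - L.diff(q)
print(sp.simplify(eom))   # k*q(t) + m*Derivative(q(t), (t, 2)), i.e. m q'' = -k q
```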
Hamilton's equations are
{\displaystyle \mathbf {\dot {p}} =-{\frac {\partial H}{\partial \mathbf {q} }}\,,\quad \mathbf {\dot {q}} =+{\frac {\partial H}{\partial \mathbf {p} }}\,,}
where the Hamiltonian
{\displaystyle H=H\left[\mathbf {q} (t),\mathbf {p} (t),t\right]\,,}
is a function of the configuration q and conjugate "generalized" momenta
{\displaystyle \mathbf {p} ={\frac {\partial L}{\partial \mathbf {\dot {q}} }}\,,}
in which ∂/∂q = (∂/∂q1, ∂/∂q2, …, ∂/∂qN) is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation); the Hamiltonian may also depend explicitly on the time t.
Setting up the Hamiltonian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of 2N coupled first-order ODEs in the coordinates qi and momenta pi is obtained.
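As an illustrative sketch (our own example, with the assumed Hamiltonian H = p²/2m + kq²/2 of a harmonic oscillator), the resulting first-order system can be stepped with the semi-implicit Euler method, which respects the equations' Hamiltonian structure:

```python
# A sketch of Hamilton's equations for H = p^2/(2m) + k q^2/2, integrated with
# the semi-implicit (symplectic) Euler method.
def hamilton_sho(q0, p0, m=1.0, k=1.0, dt=1e-3, steps=10000):
    q, p = q0, p0
    for _ in range(steps):
        p -= dt * (k * q)      # pdot = -dH/dq, using the old q
        q += dt * (p / m)      # qdot = +dH/dp, using the new p
    return q, p

print(hamilton_sho(q0=1.0, p0=0.0))    # stays near the unit-energy orbit
```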
The Hamilton–Jacobi equation is
{\displaystyle -{\frac {\partial S(\mathbf {q} ,t)}{\partial t}}=H\left(\mathbf {q} ,\mathbf {p} ,t\right)\,.}
where
{\displaystyle S[\mathbf {q} ,t]=\int _{t_{1}}^{t_{2}}L(\mathbf {q} ,\mathbf {\dot {q}} ,t)\,dt\,,}
is Hamilton's principal function, also called the classical action, a functional of L. In this case, the momenta are given by
{\displaystyle \mathbf {p} ={\frac {\partial S}{\partial \mathbf {q} }}\,.}
Although the equation has a simple general form, for a given Hamiltonian it is actually a single first order non-linear PDE, in N + 1 variables. The action S allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether.
All classical equations of motion can be derived from the variational principle known as Hamilton's principle of least action
{\displaystyle \delta S=0\,,}
stating the path the system takes through the configuration space is the one with the least action S.
== Electrodynamics ==
In electrodynamics, the force on a charged particle of charge q is the Lorentz force:
{\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)}
Combining with Newton's second law gives a second order differential equation of motion, in terms of the position of the particle:
{\displaystyle m{\frac {d^{2}\mathbf {r} }{dt^{2}}}=q\left(\mathbf {E} +{\frac {d\mathbf {r} }{dt}}\times \mathbf {B} \right)}
or its momentum:
{\displaystyle {\frac {d\mathbf {p} }{dt}}=q\left(\mathbf {E} +{\frac {\mathbf {p} \times \mathbf {B} }{m}}\right)}
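A minimal numerical sketch of this equation of motion (all field values and parameters are illustrative): for a uniform magnetic field along z and no electric field, the orbit should be a circle, with the speed conserved up to integration error.

```python
# A sketch of m dv/dt = q(E + v × B), stepped with simple Euler updates.
import numpy as np

def lorentz_step(r, v, q, m, E, B, dt):
    a = (q / m) * (E + np.cross(v, B))
    return r + v * dt, v + a * dt

r = np.array([0.0, 0.0, 0.0]); v = np.array([1.0, 0.0, 0.0])
E = np.zeros(3); B = np.array([0.0, 0.0, 1.0])   # uniform B along z
for _ in range(1000):
    r, v = lorentz_step(r, v, q=1.0, m=1.0, E=E, B=B, dt=1e-3)
print(np.linalg.norm(v))   # speed stays ~1, confirming circular gyration
```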
The same equation can be obtained using the Lagrangian (and applying Lagrange's equations above) for a charged particle of mass m and charge q:
{\displaystyle L={\tfrac {1}{2}}m\mathbf {\dot {r}} \cdot \mathbf {\dot {r}} +q\mathbf {A} \cdot {\dot {\mathbf {r} }}-q\phi }
where ϕ and A are the electromagnetic scalar and vector potential fields. The Lagrangian indicates an additional detail: the canonical momentum in Lagrangian mechanics is given by:
{\displaystyle \mathbf {P} ={\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}=m{\dot {\mathbf {r} }}+q\mathbf {A} }
instead of just mv, implying the motion of a charged particle is fundamentally determined by the mass and charge of the particle. The Lagrangian expression was first used to derive the force equation.
Alternatively the Hamiltonian (and substituting into the equations):
{\displaystyle H={\frac {\left(\mathbf {P} -q\mathbf {A} \right)^{2}}{2m}}+q\phi }
can derive the Lorentz force equation.
== General relativity ==
=== Geodesic equation of motion ===
The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated since there are no straight lines; the straight line is generalized and replaced by a geodesic of the curved spacetime (the shortest length of curve between two points). For curved manifolds with a metric tensor g, the metric provides the notion of arc length (see line element for details). The differential arc length is given by:
{\displaystyle ds={\sqrt {g_{\alpha \beta }dx^{\alpha }dx^{\beta }}}}
and the geodesic equation is a second-order differential equation in the coordinates. The general solution is a family of geodesics:
{\displaystyle {\frac {d^{2}x^{\mu }}{ds^{2}}}=-\Gamma ^{\mu }{}_{\alpha \beta }{\frac {dx^{\alpha }}{ds}}{\frac {dx^{\beta }}{ds}}}
where Γ μαβ is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system).
Given the mass-energy distribution provided by the stress–energy tensor T αβ, the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply the curvature of spacetime is equivalent to a gravitational field (see equivalence principle). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field, because gravity is a fictitious force. The relative acceleration of one geodesic to another in curved spacetime is given by the geodesic deviation equation:
{\displaystyle {\frac {D^{2}\xi ^{\alpha }}{ds^{2}}}=-R^{\alpha }{}_{\beta \gamma \delta }{\frac {dx^{\beta }}{ds}}\xi ^{\gamma }{\frac {dx^{\delta }}{ds}}}
where ξα = x2α − x1α is the separation vector between two geodesics, D/ds (not just d/ds) is the covariant derivative, and Rαβγδ is the Riemann curvature tensor, containing the Christoffel symbols. In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field.
For flat spacetime, the metric is a constant tensor, so the Christoffel symbols vanish and the geodesic equation has straight lines as solutions. This is also the limiting case when masses move according to Newton's law of gravity.
=== Spinning objects ===
In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enter the equations of motion under covariant derivatives with respect to proper time. The Mathisson–Papapetrou–Dixon equations describe the motion of spinning objects moving in a gravitational field.
== Analogues for waves and fields ==
Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified.
Sometimes in the following contexts, the wave or field equations are also called "equations of motion".
=== Field equations ===
Equations that describe the spatial dependence and time evolution of fields are called field equations. These include
Maxwell's equations for the electromagnetic field,
Poisson's equation for Newtonian gravitational or electrostatic field potentials,
the Einstein field equation for gravitation (Newton's law of gravity is a special case for weak gravitational fields and low velocities of particles).
This terminology is not universal: for example although the Navier–Stokes equations govern the velocity field of a fluid, they are not usually called "field equations", since in this context they represent the momentum of the fluid and are called the "momentum equations" instead.
=== Wave equations ===
Equations of wave motion are called wave equations. The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude. Boundary conditions determine if the solutions describe traveling waves or standing waves.
From the classical equations of motion and field equations, mechanical, gravitational wave, and electromagnetic wave equations can be derived. The general linear wave equation in 3D is:
{\displaystyle {\frac {1}{v^{2}}}{\frac {\partial ^{2}X}{\partial t^{2}}}=\nabla ^{2}X}
where X = X(r, t) is any mechanical or electromagnetic field amplitude, say:
the transverse or longitudinal displacement of a vibrating rod, wire, cable, membrane etc.,
the fluctuating pressure of a medium, sound pressure,
the electric fields E or D, or the magnetic fields B or H,
the voltage V or current I in an alternating current circuit,
and v is the phase velocity. Nonlinear equations model the dependence of phase velocity on amplitude, replacing v by v(X). There are other linear and nonlinear wave equations for very specific applications, see for example the Korteweg–de Vries equation.
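As a sketch of how such an equation is solved in practice (our own construction; the grid size, wave speed, and initial pulse are arbitrary), the 1D wave equation on a fixed-end string can be discretized with central differences:

```python
# A sketch of the 1D wave equation X_tt = v^2 X_xx on a fixed-end string,
# discretized with second-order central differences (leapfrog in time).
import numpy as np

n, v = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / v                      # satisfies the CFL stability condition
c2 = (v * dt / dx) ** 2

x = np.linspace(0.0, 1.0, n + 1)
X = np.exp(-200 * (x - 0.5) ** 2)      # initial Gaussian pulse, at rest
X_prev = X.copy()

for _ in range(200):
    X_next = np.zeros_like(X)          # endpoints stay clamped at zero
    X_next[1:-1] = (2 * X[1:-1] - X_prev[1:-1]
                    + c2 * (X[2:] - 2 * X[1:-1] + X[:-2]))
    X_prev, X = X, X_next
print(X.max())   # ~0.5: the pulse has split into two half-amplitude waves
```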
=== Quantum theory ===
In quantum theory, the wave and field concepts both appear.
In quantum mechanics the analogue of the classical equations of motion (Newton's law, Euler–Lagrange equation, Hamilton–Jacobi equation, etc.) is the Schrödinger equation in its most general form:
{\displaystyle i\hbar {\frac {\partial \Psi }{\partial t}}={\hat {H}}\Psi \,,}
where Ψ is the wavefunction of the system, Ĥ is the quantum Hamiltonian operator, rather than a function as in classical mechanics, and ħ is the Planck constant divided by 2π. Setting up the Hamiltonian and inserting it into the equation results in a wave equation; the solution is the wavefunction as a function of space and time. The Schrödinger equation itself reduces to the Hamilton–Jacobi equation when one considers the correspondence principle, in the limit that ħ becomes zero. To compare to measurements, operators for observables must be applied to the quantum wavefunction according to the experiment performed, leading to either wave-like or particle-like results.
Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schrödinger equation that govern the time evolution and behavior of a quantum system, for instance:
the Heisenberg equation of motion resembles the time evolution of classical observables as functions of position, momentum, and time, if one replaces dynamical observables by their quantum operators and the classical Poisson bracket by the commutator,
the phase space formulation closely follows classical Hamiltonian mechanics, placing position and momentum on equal footing,
the Feynman path integral formulation extends the principle of least action to quantum mechanics and field theory, placing emphasis on the use of Lagrangians rather than Hamiltonians.
== See also ==
== References == | Wikipedia/Equations_of_motion |
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles.
The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators.
Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies.
In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light.
== History ==
=== Ancient world ===
Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one another, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures).
=== 19th century ===
Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:
Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: opposite charges attract, like charges repel.
Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole.
An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement.
In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic field strength (the oersted) is named in honor of his contributions to the field of electromagnetism.
His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy.
This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.
Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor is it clear whether current flowed across the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community.
An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated:A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning to be "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars."
== A fundamental force ==
The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range.
All other forces (e.g., friction, contact forces), known as non-fundamental forces, are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction.
Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena.
Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects.
The effective forces generated by the momentum of the electrons' movement are a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.
== Classical electrodynamics ==
In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments of 1752 were carried out: on 10 May 1752, Thomas-François Dalibard of France conducted the experiment using a 40-foot-tall (12 m) iron rod instead of a kite and successfully extracted electrical sparks from a cloud.
One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation.
A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.
One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.)
In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)
Today few problems in electromagnetism remain unsolved. These include the lack of magnetic monopoles, the Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields.
== Extension to nonlinear phenomena ==
The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics.
== Quantities and units ==
Here is a list of common units related to electromagnetism:
In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system.
Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units.
== Applications ==
The study of electromagnetism informs the design of electric circuits, magnetic circuits, and semiconductor devices.
== See also ==
== References ==
== Further reading ==
=== Web sources ===
=== Textbooks ===
=== General coverage ===
== External links ==
Magnetic Field Strength Converter
Electromagnetic Force – from Eric Weisstein's World of Physics | Wikipedia/Electrodynamics |
Finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. Computers are usually used to perform the calculations required. With high-speed supercomputers, better solutions can be achieved and are often required to solve the largest and most complex problems.
FEM is a general numerical method for solving partial differential equations in two- or three-space variables (i.e., some boundary value problems). There are also studies about using FEM to solve high-dimensional problems. To solve a problem, FEM subdivides a large system into smaller, simpler parts called finite elements. This is achieved by a particular discretization in the space dimensions, implemented by the construction of a mesh of the object: the numerical domain for the solution, which has a finite number of points. The FEM formulation of a boundary value problem finally results in a system of algebraic equations. The method approximates the unknown function over the domain. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then approximates a solution by minimizing an associated error function via the calculus of variations.
Studying or analyzing a phenomenon with FEM is often referred to as finite element analysis (FEA).
== Basic concepts ==
The subdivision of a whole domain into simpler parts has several advantages:
Accurate representation of complex geometry;
Inclusion of dissimilar material properties;
Easy representation of the total solution; and
Capture of local effects.
A typical approach using the method involves the following steps:
Dividing the domain of the problem into a collection of subdomains, with each subdomain represented by a set of element equations for the original problem.
Systematically recombining all sets of element equations into a global system of equations for the final calculation.
The global system of equations uses known solution techniques and can be calculated from the initial values of the original problem to obtain a numerical answer.
In the first step above, the element equations are simple equations that locally approximate the original complex equations to be studied, where the original equations are often partial differential equations (PDEs). To explain the approximation of this process, FEM is commonly introduced as a special case of the Galerkin method. The process, in mathematical language, is to construct an integral of the inner product of the residual and the weight functions; then, set the integral to zero. In simple terms, it is a procedure that minimizes the approximation error by fitting trial functions into the PDE. The residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. The process eliminates all the spatial derivatives from the PDE, thus approximating the PDE locally using the following:
a set of algebraic equations for steady-state problems; and
a set of ordinary differential equations for transient problems.
These equation sets are the element equations. They are linear if the underlying PDE is linear, and nonlinear otherwise. Algebraic equation sets that arise in the steady-state problems are solved using numerical linear algebraic methods. In contrast, ordinary differential equation sets that occur in the transient problems are solved by numerical integration using standard techniques such as Euler's method or the Runge–Kutta method.
In the second step above, a global system of equations is generated from the element equations by transforming coordinates from the subdomains' local nodes to the domain's global nodes. This spatial transformation includes appropriate orientation adjustments as applied in relation to the reference coordinate system. The process is often carried out using FEM software with coordinate data generated from the subdomains.
The practical application of FEM is known as finite element analysis (FEA). FEA, as applied in engineering, is a computational tool for performing engineering analysis. It includes the use of mesh generation techniques for dividing a complex problem into smaller elements, as well as the use of software coded with a FEM algorithm. When applying FEA, the complex problem is usually a physical system with the underlying physics, such as the Euler–Bernoulli beam equation, the heat equation, or the Navier–Stokes equations, expressed in either PDEs or integral equations, while the divided, smaller elements of the complex problem represent different areas in the physical system.
FEA may be used for analyzing problems over complicated domains (e.g., cars and oil pipelines) when the domain changes (e.g., during a solid-state reaction with a moving boundary), when the desired precision varies over the entire domain, or when the solution lacks smoothness. FEA simulations provide a valuable resource, as they remove multiple instances of creating and testing complex prototypes for various high-fidelity situations. For example, in a frontal crash simulation, it is possible to increase prediction accuracy in important areas, like the front of the car, and reduce it in the rear of the car, thus reducing the cost of the simulation. Another example would be in numerical weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena, such as tropical cyclones in the atmosphere or eddies in the ocean, rather than relatively calm areas.
A clear, detailed, and practical presentation of this approach can be found in the textbook The Finite Element Method for Engineers.
== History ==
While it is difficult to quote the date of the invention of FEM, the method originated from the need to solve complex elasticity and structural analysis problems in civil and aeronautical engineering. Its development can be traced back to work by Alexander Hrennikoff and Richard Courant in the early 1940s. Another pioneer was Ioannis Argyris. In the USSR, the introduction of the practical application of FEM is usually connected with Leonard Oganesyan. It was also independently rediscovered in China by Feng Kang in the late 1950s and early 1960s, based on the computations of dam constructions, where it was called the "finite difference method" based on variation principles. Although the approaches used by these pioneers are different, they share one essential characteristic: the mesh discretization of a continuous domain into a set of discrete sub-domains, usually called elements.
Hrennikoff's work discretizes the domain by using a lattice analogy, while Courant's approach divides the domain into finite triangular sub-regions to solve second-order elliptic partial differential equations that arise from the problem of the torsion of a cylinder. Courant's contribution was evolutionary, drawing on a large body of earlier results for PDEs developed by Lord Rayleigh, Walther Ritz, and Boris Galerkin.
The application of FEM gained momentum in the 1960s and 1970s due to the developments of J. H. Argyris and his co-workers at the University of Stuttgart; R. W. Clough and his co-workers at University of California Berkeley; O. C. Zienkiewicz and his co-workers Ernest Hinton, Bruce Irons, and others at Swansea University; Philippe G. Ciarlet at the University of Paris 6; and Richard Gallagher and his co-workers at Cornell University. During this period, additional impetus was provided by the available open-source FEM programs. NASA sponsored the original version of NASTRAN. University of California Berkeley made the finite element programs SAP IV and, later, OpenSees widely available. In Norway, the ship classification society Det Norske Veritas (now DNV GL) developed Sesam in 1969 for use in the analysis of ships. A rigorous mathematical basis for FEM was provided in 1973 with a publication by Gilbert Strang and George Fix. The method has since been generalized for the numerical modeling of physical systems in a wide variety of engineering disciplines, such as electromagnetism, heat transfer, and fluid dynamics.
== Technical discussion ==
=== The structure of finite element methods ===
A finite element method is characterized by a variational formulation, a discretization strategy, one or more solution algorithms, and post-processing procedures.
Examples of the variational formulation are the Galerkin method, the discontinuous Galerkin method, mixed methods, etc.
A discretization strategy is understood to mean a clearly defined set of procedures that cover (a) the creation of finite element meshes, (b) the definition of basis function on reference elements (also called shape functions), and (c) the mapping of reference elements onto the elements of the mesh. Examples of discretization strategies are the h-version, p-version, hp-version, x-FEM, isogeometric analysis, etc. Each discretization strategy has certain advantages and disadvantages. A reasonable criterion in selecting a discretization strategy is to realize nearly optimal performance for the broadest set of mathematical models in a particular model class.
Various numerical solution algorithms can be classified into two broad categories: direct and iterative solvers. These algorithms are designed to exploit the sparsity of matrices that depend on the variational formulation and discretization strategy choices.
Post-processing procedures are designed to extract the data of interest from a finite element solution. To meet the requirements of solution verification, postprocessors need to provide for a posteriori error estimation in terms of the quantities of interest. When the errors of approximation are larger than what is considered acceptable, then the discretization has to be changed either by an automated adaptive process or by the action of the analyst. Some very efficient postprocessors provide for the realization of superconvergence.
=== Illustrative problems P1 and P2 ===
The following two problems demonstrate the finite element method.
P1 is a one-dimensional problem
{\displaystyle {\text{ P1 }}:{\begin{cases}u''(x)=f(x){\text{ in }}(0,1),\\u(0)=u(1)=0,\end{cases}}}
where f is given, u is an unknown function of x, and u'' is the second derivative of u with respect to x.
P2 is a two-dimensional problem (Dirichlet problem)
{\displaystyle {\text{P2 }}:{\begin{cases}u_{xx}(x,y)+u_{yy}(x,y)=f(x,y)&{\text{ in }}\Omega ,\\u=0&{\text{ on }}\partial \Omega ,\end{cases}}}
where Ω is a connected open region in the (x,y) plane whose boundary ∂Ω is nice (e.g., a smooth manifold or a polygon), and u_xx and u_yy denote the second derivatives with respect to x and y, respectively.
The problem P1 can be solved directly by computing antiderivatives. However, this method of solving the boundary value problem (BVP) works only when there is one spatial dimension. It does not generalize to higher-dimensional problems or to problems like u + V'' = f. For this reason, we will develop the finite element method for P1 and outline its generalization to P2.
Our explanation will proceed in two steps, which mirror two essential steps one must take to solve a boundary value problem (BVP) using the FEM.
In the first step, one rephrases the original BVP in its weak form. Little to no computation is usually required for this step. The transformation is done by hand on paper.
The second step is discretization, where the weak form is discretized in a finite-dimensional space.
After this second step, we have concrete formulae for a large but finite-dimensional linear problem whose solution will approximately solve the original BVP. This finite-dimensional problem is then implemented on a computer.
=== Weak formulation ===
The first step is to convert P1 and P2 into their equivalent weak formulations.
==== The weak form of P1 ====
If u solves P1, then for any smooth function v that satisfies the displacement boundary conditions, i.e. v = 0 at x = 0 and x = 1, we have
(1) {\displaystyle \int _{0}^{1}f(x)v(x)\,dx=\int _{0}^{1}u''(x)v(x)\,dx.}
Conversely, if u with u(0) = u(1) = 0 satisfies (1) for every smooth function v(x), then one may show that this u will solve P1. The proof is easier for twice continuously differentiable u (mean value theorem) but may be proved in a distributional sense as well.
We define a new operator or map φ(u,v) by using integration by parts on the right-hand side of (1):
(2) {\displaystyle \int _{0}^{1}f(x)v(x)\,dx=\int _{0}^{1}u''(x)v(x)\,dx=-\int _{0}^{1}u'(x)v'(x)\,dx\equiv -\phi (u,v),}
where we have used the assumption that v(0) = v(1) = 0.
==== The weak form of P2 ====
If we integrate by parts using a form of Green's identities, we see that if u solves P2, then we may define φ(u,v) for any v by
{\displaystyle \int _{\Omega }fv\,ds=-\int _{\Omega }\nabla u\cdot \nabla v\,ds\equiv -\phi (u,v),}
where ∇ denotes the gradient and ⋅ denotes the dot product in the two-dimensional plane. Once more φ can be turned into an inner product on a suitable space H_0^1(Ω) of once differentiable functions of Ω that are zero on ∂Ω. We have also assumed that v ∈ H_0^1(Ω) (see Sobolev spaces). The existence and uniqueness of the solution can also be shown.
==== A proof outline of the existence and uniqueness of the solution ====
We can loosely think of H_0^1(0,1) as the absolutely continuous functions of (0,1) that are 0 at x = 0 and x = 1 (see Sobolev spaces). Such functions are (weakly) once differentiable, and it turns out that the symmetric bilinear map φ then defines an inner product which turns H_0^1(0,1) into a Hilbert space (a detailed proof is nontrivial). On the other hand, the left-hand side {\displaystyle \int _{0}^{1}f(x)v(x)dx} is also an inner product, this time on the Lp space L^2(0,1). An application of the Riesz representation theorem for Hilbert spaces shows that there is a unique u solving (2) and, therefore, P1. This solution is a priori only a member of H_0^1(0,1), but using elliptic regularity, it will be smooth if f is.
== Discretization ==
P1 and P2 are ready to be discretized, which leads to a common sub-problem (3). The basic idea is to replace the infinite-dimensional linear problem:
Find u ∈ H_0^1 such that
{\displaystyle \forall v\in H_{0}^{1},\;-\phi (u,v)=\int fv}
with a finite-dimensional version:
(3) Find u ∈ V such that {\displaystyle \forall v\in V,\;-\phi (u,v)=\int fv}
where V is a finite-dimensional subspace of H_0^1. There are many possible choices for V (one possibility leads to the spectral method). However, we take V as a space of piecewise polynomial functions for the finite element method.
=== For problem P1 ===
We take the interval (0,1), choose n values of x with 0 = x_0 < x_1 < ⋯ < x_n < x_{n+1} = 1, and we define V by:
{\displaystyle V=\{v:[0,1]\to \mathbb {R} \;:v{\text{ is continuous, }}v|_{[x_{k},x_{k+1}]}{\text{ is linear for }}k=0,\dots ,n{\text{, and }}v(0)=v(1)=0\}}
where we define x_0 = 0 and x_{n+1} = 1. Observe that functions in V are not differentiable according to the elementary definition of calculus. Indeed, if v ∈ V then the derivative is typically not defined at any x = x_k, k = 1,…,n. However, the derivative exists at every other value of x, and one can use this derivative for integration by parts.
=== For problem P2 ===
We need V to be a set of functions of Ω. In the figure on the right, we have illustrated a triangulation of a 15-sided polygonal region Ω in the plane (below) and a piecewise linear function (above, in color) of this polygon which is linear on each triangle of the triangulation; the space V would consist of functions that are linear on each triangle of the chosen triangulation.
One hopes that as the underlying triangular mesh becomes finer and finer, the solution of the discrete problem (3) will, in some sense, converge to the solution of the original boundary value problem P2. To measure this mesh fineness, the triangulation is indexed by a real-valued parameter h > 0 which one takes to be very small. This parameter will be related to the largest or average triangle size in the triangulation. As we refine the triangulation, the space of piecewise linear functions V must also change with h. For this reason, one often reads V_h instead of V in the literature. Since we do not perform such an analysis, we will not use this notation.
=== Choosing a basis ===
To complete the discretization, we must select a basis of V. In the one-dimensional case, for each control point x_k we will choose the piecewise linear function v_k in V whose value is 1 at x_k and zero at every x_j, j ≠ k, i.e.,
{\displaystyle v_{k}(x)={\begin{cases}{x-x_{k-1} \over x_{k}\,-x_{k-1}}&{\text{ if }}x\in [x_{k-1},x_{k}],\\{x_{k+1}\,-x \over x_{k+1}\,-x_{k}}&{\text{ if }}x\in [x_{k},x_{k+1}],\\0&{\text{ otherwise}},\end{cases}}}
for k = 1,…,n; this basis is a shifted and scaled tent function. For the two-dimensional case, we choose again one basis function v_k per vertex x_k of the triangulation of the planar region Ω. The function v_k is the unique function of V whose value is 1 at x_k and zero at every x_j, j ≠ k.
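As an illustration, the one-dimensional tent basis can be written down directly. The following minimal Python sketch is illustrative and not taken from any particular FEM library; the names hat and nodes are arbitrary choices.

```python
import numpy as np

def hat(k, x, nodes):
    """Piecewise linear 'tent' basis function v_k: equals 1 at nodes[k],
    0 at every other node, and is linear on each subinterval."""
    left, mid, right = nodes[k - 1], nodes[k], nodes[k + 1]
    return np.where(
        (x >= left) & (x <= mid), (x - left) / (mid - left),
        np.where((x > mid) & (x <= right), (right - x) / (right - mid), 0.0),
    )

nodes = np.linspace(0.0, 1.0, 7)   # x_0 = 0, ..., x_6 = 1; five interior nodes
x = np.linspace(0.0, 1.0, 11)
print(hat(3, x, nodes))            # peaks with value 1 at x = nodes[3] = 0.5
```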
Depending on the author, the word "element" in the "finite element method" refers to the domain's triangles, the piecewise linear basis function, or both. So, for instance, an author interested in curved domains might replace the triangles with curved primitives and so might describe the elements as being curvilinear. On the other hand, some authors replace "piecewise linear" with "piecewise quadratic" or even "piecewise polynomial". The author might then say "higher order element" instead of "higher degree polynomial". The finite element method is not restricted to triangles (tetrahedra in 3-D, or higher-order simplexes in multidimensional spaces) but can also be defined on quadrilateral subdomains (hexahedra, prisms, or pyramids in 3-D, and so on). Higher-order shapes (curvilinear elements) can be defined with polynomial and even non-polynomial shapes (e.g., ellipse or circle).
Examples of methods that use higher degree piecewise polynomial basis functions are the hp-FEM and spectral FEM.
More advanced implementations (adaptive finite element methods) utilize a method to assess the quality of the results (based on error estimation theory) and modify the mesh during the solution aiming to achieve an approximate solution within some bounds from the exact solution of the continuum problem. Mesh adaptivity may utilize various techniques; the most popular are:
moving nodes (r-adaptivity)
refining (and unrefining) elements (h-adaptivity)
changing the order of basis functions (p-adaptivity)
combinations of the above (hp-adaptivity).
=== Small support of the basis ===
The primary advantage of this choice of basis is that the inner products
{\displaystyle \langle v_{j},v_{k}\rangle =\int _{0}^{1}v_{j}v_{k}\,dx}
and
{\displaystyle \phi (v_{j},v_{k})=\int _{0}^{1}v_{j}'v_{k}'\,dx}
will be zero for almost all j, k.
(The matrix containing ⟨v_j, v_k⟩ in the (j,k) location is known as the Gramian matrix.)
In the one-dimensional case, the support of v_k is the interval [x_{k-1}, x_{k+1}]. Hence, the integrands of ⟨v_j, v_k⟩ and φ(v_j, v_k) are identically zero whenever |j − k| > 1.
Similarly, in the planar case, if x_j and x_k do not share an edge of the triangulation, then the integrals
{\displaystyle \int _{\Omega }v_{j}v_{k}\,ds}
and
{\displaystyle \int _{\Omega }\nabla v_{j}\cdot \nabla v_{k}\,ds}
are both zero.
=== Matrix form of the problem ===
If we write
{\displaystyle u(x)=\sum _{k=1}^{n}u_{k}v_{k}(x)}
and
{\displaystyle f(x)=\sum _{k=1}^{n}f_{k}v_{k}(x)}
then problem (3), taking v(x) = v_j(x) for j = 1,…,n, becomes
(4) {\displaystyle -\sum _{k=1}^{n}u_{k}\phi (v_{k},v_{j})=\sum _{k=1}^{n}f_{k}\int v_{k}v_{j}\,dx\quad {\text{for }}j=1,\dots ,n.}
If we denote by u and f the column vectors (u_1,…,u_n)^t and (f_1,…,f_n)^t, and if we let L = (L_{ij}) and M = (M_{ij}) be matrices whose entries are L_{ij} = φ(v_i, v_j) and M_{ij} = ∫ v_i v_j dx, then we may rephrase (4) as
(5) {\displaystyle -L\mathbf {u} =M\mathbf {f} .}
It is not necessary to assume f(x) = Σ_{k=1}^{n} f_k v_k(x). For a general function f(x), problem (3) with v(x) = v_j(x) for j = 1,…,n becomes actually simpler, since no matrix M is used:
{\displaystyle -L\mathbf {u} =\mathbf {b} ,}
where b = (b_1,…,b_n)^t and b_j = ∫ f v_j dx for j = 1,…,n.
As we have discussed before, most of the entries of L and M are zero because the basis functions v_k have small support. So we now have to solve a linear system in the unknown u where most of the entries of the matrix L, which we need to invert, are zero.
Such matrices are known as sparse matrices, and there are efficient solvers for such problems (much more efficient than actually inverting the matrix). In addition, L is symmetric and positive definite, so a technique such as the conjugate gradient method is favored. For problems that are not too large, sparse LU decompositions and Cholesky decompositions still work well. For instance, MATLAB's backslash operator (which uses sparse LU, sparse Cholesky, and other factorization methods) can be sufficient for meshes with a hundred thousand vertices.
The matrix L is usually referred to as the stiffness matrix, while the matrix M is dubbed the mass matrix.
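For the model problem P1 on a uniform mesh, the assembly and solution can be carried out in a few lines. The following Python sketch is an illustration under simplifying assumptions (uniform spacing, a lumped approximation of the load integrals b_j ≈ h f(x_j), and the sample right-hand side f(x) = −π² sin(πx), whose exact solution is sin(πx)); it is not code from any particular FEM package.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 99                                 # number of interior nodes
h = 1.0 / (n + 1)                      # uniform mesh spacing
x = np.linspace(h, 1.0 - h, n)         # interior node coordinates

# Stiffness matrix L_jk = phi(v_j, v_k) = integral of v_j' v_k';
# for hat functions on a uniform mesh this is tridiag(-1, 2, -1) / h.
L = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)) / h

f = lambda t: -np.pi**2 * np.sin(np.pi * t)
b = h * f(x)                           # lumped load vector, b_j ~ integral of f v_j

u = spsolve((-L).tocsc(), b)           # solve the linear system  -L u = b
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error
```

The assembled matrix is tridiagonal, reflecting the small support of the basis discussed above, and the sparse solver exploits exactly this structure.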
=== General form of the finite element method ===
In general, the finite element method is characterized by the following process.
One chooses a grid for Ω. In the preceding treatment, the grid consisted of triangles, but one can also use squares or curvilinear polygons.
Then, one chooses basis functions. We used piecewise linear basis functions in our discussion, but it is common to use piecewise polynomial basis functions.
A separate consideration is the smoothness of the basis functions. For second-order elliptic boundary value problems, piecewise polynomial basis functions that are merely continuous suffice (i.e., the derivatives are discontinuous). For higher-order partial differential equations, one must use smoother basis functions. For instance, for a fourth-order problem such as u_xxxx + u_yyyy = f, one may use piecewise quadratic basis functions that are C^1.
Another consideration is the relation of the finite-dimensional space V to its infinite-dimensional counterpart in the examples above, H_0^1. A conforming element method is one in which the space V is a subspace of the element space for the continuous problem. The example above is such a method. If this condition is not satisfied, we obtain a nonconforming element method, an example of which is the space of piecewise linear functions over the mesh that are continuous at each edge midpoint. Since these functions are generally discontinuous along the edges, this finite-dimensional space is not a subspace of the original H_0^1.
Typically, one has an algorithm for subdividing a given mesh. If the primary method for increasing precision is to subdivide the mesh, one has an h-method (h is customarily the diameter of the largest element in the mesh). In this manner, if one shows that the error with a grid h is bounded above by Ch^p for some C < ∞ and p > 0, then one has an order p method. Under specific hypotheses (for instance, if the domain is convex), a piecewise polynomial of order d method will have an error of order p = d + 1.
If instead of making h smaller, one increases the degree of the polynomials used in the basis function, one has a p-method. If one combines these two refinement types, one obtains an hp-method (hp-FEM). In the hp-FEM, the polynomial degrees can vary from element to element. High-order methods with large uniform p are called spectral finite element methods (SFEM). These are not to be confused with spectral methods.
For vector partial differential equations, the basis functions may take values in {\displaystyle \mathbb {R} ^{n}}.
== Various types of finite element methods ==
=== AEM ===
The applied element method (AEM) combines features of both the finite element method (FEM) and the discrete element method (DEM).
=== A-FEM ===
Yang and Lui introduced the augmented finite element method, whose goal is to model weak and strong discontinuities without the extra degrees of freedom (DoFs) that the partition of unity method (PuM) requires.
=== CutFEM ===
The Cut Finite Element Approach was developed in 2014. The approach is "to make the discretization as independent as possible of the geometric description and minimize the complexity of mesh generation, while retaining the accuracy and robustness of a standard finite element method."
=== Generalized finite element method ===
The generalized finite element method (GFEM) uses local spaces consisting of functions, not necessarily polynomials, that reflect the available information on the unknown solution and thus ensure good local approximation. Then a partition of unity is used to “bond” these spaces together to form the approximating subspace. The effectiveness of GFEM has been shown when applied to problems with domains having complicated boundaries, problems with micro-scales, and problems with boundary layers.
=== Mixed finite element method ===
The mixed finite element method is a type of finite element method in which extra independent variables are introduced as nodal variables during the discretization of a partial differential equation problem.
=== Variable – polynomial ===
The hp-FEM adaptively combines elements with variable size h and polynomial degree p to achieve exceptionally fast, exponential convergence rates.
=== hpk-FEM ===
The hpk-FEM adaptively combines elements with variable size h, polynomial degree of the local approximations p, and global differentiability of the local approximations (k−1) to achieve the best convergence rates.
=== XFEM ===
The extended finite element method (XFEM) is a numerical technique based on the generalized finite element method (GFEM) and the partition of unity method (PUM). It extends the classical finite element method by enriching the solution space for solutions to differential equations with discontinuous functions. Extended finite element methods enrich the approximation space to naturally reproduce the challenging feature associated with the problem of interest: the discontinuity, singularity, boundary layer, etc. It was shown that for some problems, such an embedding of the problem's feature into the approximation space can significantly improve convergence rates and accuracy. Moreover, treating problems with discontinuities with XFEMs suppresses the need to mesh and re-mesh the discontinuity surfaces, thus alleviating the computational costs and projection errors associated with conventional finite element methods at the cost of restricting the discontinuities to mesh edges.
Several research codes implement this technique to various degrees:
GetFEM++
xfem++
openxfem++
XFEM has also been implemented in codes like Altair Radioss, ASTER, Morfeo, and Abaqus. It is increasingly being adopted by other commercial finite element software, with a few plugins and actual core implementations available (ANSYS, SAMCEF, OOFELIE, etc.).
=== Scaled boundary finite element method (SBFEM) ===
The introduction of the scaled boundary finite element method (SBFEM) came from Song and Wolf (1997). The SBFEM has been one of the most profitable contributions in the area of numerical analysis of fracture mechanics problems. It is a semi-analytical fundamental-solutionless method combining the advantages of finite element formulations and procedures and boundary element discretization. However, unlike the boundary element method, no fundamental differential solution is required.
=== S-FEM ===
S-FEM, the smoothed finite element method, is a particular class of numerical simulation algorithms for the simulation of physical phenomena. It was developed by combining mesh-free methods with the finite element method.
=== Spectral element method ===
Spectral element methods combine the geometric flexibility of finite elements with the high accuracy of spectral methods. Spectral methods approximate the solution of weak-form partial differential equations based on high-order Lagrangian interpolants and are used only with certain quadrature rules.
=== Meshfree methods ===
=== Discontinuous Galerkin methods ===
=== Finite element limit analysis ===
=== Stretched grid method ===
=== Loubignac iteration ===
Loubignac iteration is an iterative method in finite element methods.
=== Crystal plasticity finite element method (CPFEM) ===
The crystal plasticity finite element method (CPFEM) is an advanced numerical tool developed by Franz Roters. Metals can be regarded as crystal aggregates, which behave anisotropically under deformation, exhibiting, for example, abnormal stress and strain localization. CPFEM, based on slip (shear strain rate), can calculate dislocation activity, crystal orientation, and other texture information to account for crystal anisotropy during the computation. It has been applied in the numerical study of material deformation, surface roughness, fractures, etc.
=== Virtual element method (VEM) ===
The virtual element method (VEM), introduced by Beirão da Veiga et al. (2013) as an extension of mimetic finite difference (MFD) methods, is a generalization of the standard finite element method for arbitrary element geometries. This allows admission of general polygons (or polyhedra in 3D) that are highly irregular and non-convex in shape. The name virtual derives from the fact that knowledge of the local shape function basis is not required and is, in fact, never explicitly calculated.
== Link with the gradient discretization method ==
Some types of finite element methods (conforming, nonconforming, mixed finite element methods) are particular cases of the gradient discretization method (GDM). Hence the convergence properties of the GDM, which are established for a series of problems (linear and nonlinear elliptic problems, linear, nonlinear, and degenerate parabolic problems), hold as well for these particular FEMs.
== Comparison to the finite difference method ==
The finite difference method (FDM) is an alternative way of approximating solutions of PDEs. The differences between FEM and FDM are:
The most attractive feature of the FEM is its ability to handle complicated geometries (and boundaries) with relative ease. While FDM in its basic form is restricted to handle rectangular shapes and simple alterations thereof, the handling of geometries in FEM is theoretically straightforward.
FDM is not usually used for irregular CAD geometries but more often for rectangular or block-shaped models.
FEM generally allows for more flexible mesh adaptivity than FDM.
The most attractive feature of finite differences is that it is straightforward to implement.
One could consider the FDM a particular case of the FEM approach in several ways. E.g., first-order FEM is identical to FDM for Poisson's equation if the problem is discretized by a regular rectangular mesh with each rectangle divided into two triangles.
There are reasons to consider the mathematical foundation of the finite element approximation more sound, for instance, because the quality of the approximation between grid points is poor in FDM.
The quality of a FEM approximation is often higher than in the corresponding FDM approach, but this is highly problem-dependent, and several examples to the contrary can be provided.
Generally, FEM is the method of choice in all types of analysis in structural mechanics (i.e., solving for deformation and stresses in solid bodies or dynamics of structures). In contrast, computational fluid dynamics (CFD) tends to use FDM or other methods like the finite volume method (FVM). CFD problems usually require discretization of the problem into a large number of cells/gridpoints (millions and more); therefore, the cost of the solution favors simpler, lower-order approximation within each cell. This is especially true for 'external flow' problems, like airflow around a car or an airplane, or weather simulation.
== Finite element and fast fourier transform (FFT) methods ==
Another method used for approximating solutions to a partial differential equation is the fast Fourier transform (FFT), where the solution is approximated by a Fourier series computed using the FFT. For approximating the mechanical response of materials under stress, FFT is often much faster, but FEM may be more accurate. One example of the respective advantages of the two methods is a simulation of rolling a sheet of aluminum (an FCC metal) and drawing a wire of tungsten (a BCC metal). This simulation did not have a sophisticated shape-update algorithm for the FFT method. In both cases, the FFT method was more than 10 times as fast as FEM, but in the wire-drawing simulation, where there were large deformations in grains, the FEM method was much more accurate. In the sheet-rolling simulation, the results of the two methods were similar. FFT has a larger speed advantage in cases where the boundary conditions are given in the material's strain, and it loses some of its efficiency in cases where the stress is used to apply the boundary conditions, as more iterations of the method are needed.
The FE and FFT methods can also be combined in a voxel-based method to simulate deformation in materials, where the FE method is used for the macroscale stress and deformation, and the FFT method is used on the microscale to deal with the effects of the microscale on the mechanical response. Unlike FEM, the FFT methods' similarity to image processing methods means that an actual image of the microstructure from a microscope can be input to the solver to get a more accurate stress response. Using a real image with FFT avoids meshing the microstructure, which would be required if using FEM simulation of the microstructure, and might be difficult. Because Fourier approximations are inherently periodic, FFT can only be used in cases of periodic microstructure, but this is common in real materials. FFT can also be combined with FEM methods by using Fourier components as the variational basis for approximating the fields inside an element, which can take advantage of the speed of FFT-based solvers.
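As a toy illustration of why FFT-based solvers are attractive for periodic problems, the following Python sketch solves a one-dimensional periodic Poisson problem u'' = f by dividing by the operator's symbol in Fourier space. This is a miniature illustrative example, not one of the materials solvers described above.

```python
import numpy as np

# Solve u'' = f on the periodic domain [0, 2*pi) by dividing in Fourier space.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = -np.sin(3.0 * x)                   # exact solution: u(x) = sin(3x) / 9

k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers 0, 1, ..., -1
fhat = np.fft.fft(f)
uhat = np.zeros_like(fhat)
mask = k != 0                          # k = 0 mode fixed to zero (zero-mean solution)
uhat[mask] = fhat[mask] / (-(k[mask] ** 2))   # divide by (ik)^2 = -k^2
u = np.fft.ifft(uhat).real
print(np.max(np.abs(u - np.sin(3.0 * x) / 9.0)))   # ~ machine precision
```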
== Application ==
Various specializations under the umbrella of the mechanical engineering discipline (such as aeronautical, biomechanical, and automotive industries) commonly use integrated FEM in the design and development of their products. Several modern FEM packages include specific components such as thermal, electromagnetic, fluid, and structural working environments. In a structural simulation, FEM helps tremendously in producing stiffness and strength visualizations and minimizing weight, materials, and costs.
This powerful design tool has significantly improved both the standard of engineering designs and the design process methodology in many industrial applications. The introduction of FEM has substantially decreased the time to take products from concept to the production line. Testing and development have been accelerated primarily through improved initial prototype designs using FEM. In summary, benefits of FEM include increased accuracy, enhanced design and better insight into critical design parameters, virtual prototyping, fewer hardware prototypes, a faster and less expensive design cycle, increased productivity, and increased revenue.
In the 1990s FEM was proposed for use in stochastic modeling for numerically solving probability models and later for reliability assessment.
FEM is widely applied for approximating differential equations that describe physical systems. The method is very popular in the computational fluid dynamics community, and there are many applications for solving the Navier–Stokes equations with FEM. Recently, the application of FEM has been increasing in research on computational plasma physics; promising numerical results using FEM have been reported for magnetohydrodynamics, the Vlasov equation, and the Schrödinger equation.
== See also ==
== References ==
== Further reading ==
G. Allaire and A. Craig: Numerical Analysis and Optimization: An Introduction to Mathematical Modelling and Numerical Simulation.
K. J. Bathe: Numerical methods in finite element analysis, Prentice-Hall (1976).
Thomas J.R. Hughes: The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Prentice-Hall (1987).
J. Chaskalovic: Finite Elements Methods for Engineering Sciences, Springer Verlag, (2008).
Endre Süli: Finite Element Methods for Partial Differential Equations.
O. C. Zienkiewicz, R. L. Taylor, J. Z. Zhu : The Finite Element Method: Its Basis and Fundamentals, Butterworth-Heinemann (2005).
N. Ottosen, H. Petersson: Introduction to the Finite Element Method, Prentice-Hall (1992).
Susanne C. Brenner, L. Ridgway Scott: The Mathematical Theory of Finite Element Methods, Springer-Verlag New York, ISBN 978-0-387-75933-3 (2008).
T. I. Zohdi: A Finite Element Primer for Beginners: Extended Version Including Sample Tests and Projects, 2nd ed., Springer (2018). https://link.springer.com/book/10.1007/978-3-319-70428-9
Leszek F. Demkowicz: Mathematical Theory of Finite Elements, SIAM, ISBN 978-1-61197-772-1 (2024).
Stochastic partial differential equations (SPDEs) generalize partial differential equations via random force terms and coefficients, in the same way ordinary stochastic differential equations generalize ordinary differential equations.
They have relevance to quantum field theory, statistical mechanics, and spatial modeling.
== Examples ==
One of the most studied SPDEs is the stochastic heat equation, which may formally be written as
{\displaystyle \partial _{t}u=\Delta u+\xi \;,}
where Δ is the Laplacian and ξ denotes space-time white noise. Other examples also include stochastic versions of famous linear equations, such as the wave equation and the Schrödinger equation.
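As a minimal numerical sketch (an illustration, not from the literature above), the stochastic heat equation on [0, 1] can be simulated with an explicit Euler–Maruyama step on a finite-difference grid, approximating space-time white noise by i.i.d. Gaussians scaled by 1/sqrt(dt*dx); all parameter values below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 100, 10_000
dx, dt = 1.0 / nx, 1.0e-5              # dt < dx**2 / 2 for stability
u = np.zeros(nx)                       # zero initial data, Dirichlet boundaries

for _ in range(nt):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    xi = rng.standard_normal(nx) / np.sqrt(dt * dx)   # discretized white noise
    u = u + dt * (lap + xi)
    u[0] = u[-1] = 0.0                 # enforce the boundary conditions

print(u.min(), u.max())                # a rough, noisy profile, as expected
```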
== Discussion ==
One difficulty is their lack of regularity. In one space dimension, solutions to the stochastic heat equation are only almost 1/2-Hölder continuous in space and 1/4-Hölder continuous in time. For dimensions two and higher, solutions are not even function-valued, but can be made sense of as random distributions.
For linear equations, one can usually find a mild solution via semigroup techniques.
However, problems start to appear when considering non-linear equations. For example
{\displaystyle \partial _{t}u=\Delta u+P(u)+\xi ,}
where P is a polynomial. In this case it is not even clear how one should make sense of the equation. Such an equation will also not have a function-valued solution in dimension larger than one, and hence no pointwise meaning. It is well known that the space of distributions has no product structure. This is the core problem of such a theory, and it leads to the need for some form of renormalization.
An early attempt to circumvent such problems for some specific equations was the so-called Da Prato–Debussche trick, which involved studying such non-linear equations as perturbations of linear ones. However, this can only be done in very restrictive settings, as it depends on both the non-linear factor and the regularity of the driving noise term. In recent years, the field has drastically expanded, and there now exists a large machinery to guarantee local existence for a variety of sub-critical SPDEs.
== See also ==
== References ==
== Further reading ==
Bain, A.; Crisan, D. (2009). Fundamentals of Stochastic Filtering. Stochastic Modelling and Applied Probability. Vol. 60. New York: Springer. ISBN 978-0387768953.
Holden, H.; Øksendal, B.; Ubøe, J.; Zhang, T. (2010). Stochastic Partial Differential Equations: A Modeling, White Noise Functional Approach. Universitext (2nd ed.). New York: Springer. doi:10.1007/978-0-387-89488-1. ISBN 978-0-387-89487-4.
Lindgren, F.; Rue, H.; Lindström, J. (2011). "An Explicit Link between Gaussian Fields and Gaussian Markov Random Fields: The Stochastic Partial Differential Equation Approach". Journal of the Royal Statistical Society Series B: Statistical Methodology. 73 (4): 423–498. doi:10.1111/j.1467-9868.2011.00777.x. hdl:20.500.11820/1084d335-e5b4-4867-9245-ec9c4f6f4645. ISSN 1369-7412.
Xiu, D. (2010). Numerical Methods for Stochastic Computations: A Spectral Method Approach. Princeton University Press. ISBN 978-0-691-14212-8.
== External links ==
"A Minicourse on Stochastic Partial Differential Equations" (PDF). 2006.
Hairer, Martin (2009). "An Introduction to Stochastic PDEs". arXiv:0907.4178 [math.PR].
A computer algebra system (CAS) or symbolic algebra system (SAS) is any mathematical software with the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists. The development of the computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work in algorithms over mathematical objects such as polynomials.
Computer algebra systems may be divided into two classes: specialized and general-purpose. The specialized ones are devoted to a specific part of mathematics, such as number theory, group theory, or teaching of elementary mathematics.
General-purpose computer algebra systems aim to be useful to a user working in any scientific field that requires manipulation of mathematical expressions. To be useful, a general-purpose computer algebra system must include various features such as:
a user interface allowing a user to enter and display mathematical formulas, typically from a keyboard, menu selections, mouse or stylus.
a programming language and an interpreter (the result of a computation commonly has an unpredictable form and an unpredictable size; therefore user intervention is frequently needed),
a simplifier, which is a rewrite system for simplifying mathematics formulas,
a memory manager, including a garbage collector, needed by the huge size of the intermediate data, which may appear during a computation,
an arbitrary-precision arithmetic, needed by the huge size of the integers that may occur,
a large library of mathematical algorithms and special functions.
The library must not only provide for the needs of the users, but also the needs of the simplifier. For example, the computation of polynomial greatest common divisors is systematically used for the simplification of expressions involving fractions.
This large amount of required computer capabilities explains the small number of general-purpose computer algebra systems. Significant systems include Axiom, GAP, Maxima, Magma, Maple, Mathematica, and SageMath.
== History ==
In the 1950s, while computers were mainly used for numerical computations, there were some research projects into using them for symbolic manipulation. Computer algebra systems began to appear in the 1960s and evolved out of two quite different sources—the requirements of theoretical physicists and research into artificial intelligence.
A prime example for the first development was the pioneering work conducted by the later Nobel Prize laureate in physics Martinus Veltman, who designed a program for symbolic mathematics, especially high-energy physics, called Schoonschip (Dutch for "clean ship") in 1963. Other early systems include FORMAC.
Using Lisp as the programming basis, Carl Engelman created MATHLAB in 1964 at MITRE within an artificial-intelligence research environment. Later MATHLAB was made available to users on PDP-6 and PDP-10 systems running TOPS-10 or TENEX in universities. Today it can still be used on SIMH emulations of the PDP-10. MATHLAB ("mathematical laboratory") should not be confused with MATLAB ("matrix laboratory"), which is a system for numerical computation built 15 years later at the University of New Mexico.
In 1987, Hewlett-Packard introduced the first hand-held calculator with a CAS, the HP-28 series. Other early handheld calculators with symbolic algebra capabilities included the Texas Instruments TI-89 series and TI-92 calculator, and the Casio CFX-9970G.
The first popular computer algebra systems were muMATH, Reduce, Derive (based on muMATH), and Macsyma; a copyleft version of Macsyma is called Maxima. Reduce became free software in 2008. Commercial systems include Mathematica and Maple, which are commonly used by research mathematicians, scientists, and engineers. Freely available alternatives include SageMath (which can act as a front-end to several other free and nonfree CAS). Other significant systems include Axiom, GAP, Maxima and Magma.
The movement to web-based applications in the early 2000s saw the release of WolframAlpha, an online search engine and CAS which includes the capabilities of Mathematica.
More recently, computer algebra systems have been implemented using artificial neural networks, though as of 2020 they are not commercially available.
== Symbolic manipulations ==
The symbolic manipulations supported typically include:
simplification to a smaller expression or some standard form, including automatic simplification with assumptions and simplification with constraints
substitution of symbols or numeric values for certain expressions
change of form of expressions: expanding products and powers, partial and full factorization, rewriting as partial fractions, constraint satisfaction, rewriting trigonometric functions as exponentials, transforming logic expressions, etc.
partial and total differentiation
some indefinite and definite integration (see symbolic integration), including multidimensional integrals
symbolic constrained and unconstrained global optimization
solution of linear and some non-linear equations over various domains
solution of some differential and difference equations
taking some limits
integral transforms
series operations such as expansion, summation and products
matrix operations including products, inverses, etc.
statistical computation
theorem proving and verification, which is very useful in the area of experimental mathematics
optimized code generation
In the above, the word some indicates that the operation cannot always be performed.
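Several of these manipulations can be demonstrated with the open-source CAS SymPy; the following short Python session is illustrative (exact output formatting may vary between versions).

```python
import sympy as sp

x, y = sp.symbols('x y')

print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2))         # 1
print(sp.expand((x + y)**3))                            # x**3 + 3*x**2*y + 3*x*y**2 + y**3
print(sp.apart(1 / (x**2 - 1), x))                      # partial fractions
print(sp.diff(sp.exp(x**2), x))                         # 2*x*exp(x**2)
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)
print(sp.solve(x**2 - 2, x))                            # [-sqrt(2), sqrt(2)]
print(sp.limit(sp.sin(x) / x, x, 0))                    # 1
print(sp.series(sp.cos(x), x, 0, 6))                    # 1 - x**2/2 + x**4/24 + O(x**6)
```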
== Additional capabilities ==
Many also include:
a programming language, allowing users to implement their own algorithms
arbitrary-precision numeric operations
exact integer arithmetic and number theory functionality
Editing of mathematical expressions in two-dimensional form
plotting graphs and parametric plots of functions in two and three dimensions, and animating them
drawing charts and diagrams
APIs for linking it to an external program, such as a database, or using it from a programming language to access the computer algebra system
string manipulation such as matching and searching
add-ons for use in applied mathematics such as physics, bioinformatics, computational chemistry and packages for physical computation
solvers for differential equations
Some include:
graphic production and editing such as computer-generated imagery and signal processing as image processing
sound synthesis
Some computer algebra systems focus on specialized disciplines; these are typically developed in academia and are free. They can be inefficient for numeric operations as compared to numeric systems.
== Types of expressions ==
The expressions manipulated by the CAS typically include polynomials in multiple variables; standard functions of expressions (sine, exponential, etc.); various special functions (Γ, ζ, erf, Bessel functions, etc.); arbitrary functions of expressions; optimization; derivatives, integrals, simplifications, sums, and products of expressions; truncated series with expressions as coefficients, matrices of expressions, and so on. Numeric domains supported typically include floating-point representation of real numbers, integers (of unbounded size), complex (floating-point representation), interval representation of reals, rational number (exact representation) and algebraic numbers.
== Use in education ==
There have been many advocates for increasing the use of computer algebra systems in primary and secondary-school classrooms. The primary reason for such advocacy is that computer algebra systems represent real-world mathematics more fully than paper-and-pencil or hand-calculator-based mathematics does.
This push for increasing computer usage in mathematics classrooms has been supported by some boards of education. It has even been mandated in the curriculum of some regions.
Computer algebra systems have been extensively used in higher education. Many universities offer either specific courses on developing their use, or they implicitly expect students to use them for their course work. The companies that develop computer algebra systems have pushed to increase their prevalence among university and college programs.
CAS-equipped calculators are not permitted on the ACT, the PLAN, and in some classrooms, though they may be permitted on all of College Board's calculator-permitted tests, including the SAT, some SAT Subject Tests, and the AP Calculus, Chemistry, Physics, and Statistics exams.
== Mathematics used in computer algebra systems ==
Knuth–Bendix completion algorithm
Root-finding algorithms
Symbolic integration via e.g. Risch algorithm or Risch–Norman algorithm
Hypergeometric summation via e.g. Gosper's algorithm
Limit computation via e.g. Gruntz's algorithm
Polynomial factorization, e.g. over finite fields via Berlekamp's algorithm or the Cantor–Zassenhaus algorithm
Greatest common divisor via e.g. Euclidean algorithm
Gaussian elimination
Gröbner basis via e.g. Buchberger's algorithm; generalization of Euclidean algorithm and Gaussian elimination
Padé approximant
Schwartz–Zippel lemma and testing polynomial identities
Chinese remainder theorem
Diophantine equations
Landau's algorithm (nested radicals)
Derivatives of elementary functions and special functions. (e.g. See derivatives of the incomplete gamma function.)
Cylindrical algebraic decomposition
Quantifier elimination over real numbers via cylindrical algebraic decomposition
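A few of these algorithms are exposed directly by open-source systems such as SymPy; for example (an illustrative sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Polynomial greatest common divisor (Euclidean-algorithm family):
print(sp.gcd(x**4 - 1, x**2 - 1))      # x**2 - 1

# Polynomial factorization over the rationals:
print(sp.factor(x**4 - 1))             # (x - 1)*(x + 1)*(x**2 + 1)

# Groebner basis computation (Buchberger-type algorithms):
print(sp.groebner([x**2 + y**2 - 1, x - y], x, y, order='lex'))
```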
== See also ==
List of computer algebra systems
Scientific computation
Statistical package
Automated theorem proving
Algebraic modeling language
Constraint-logic programming
Satisfiability modulo theories
== References ==
== External links ==
Curriculum and Assessment in an Age of Computer Algebra Systems Archived 2009-12-01 at the Wayback Machine - From the Education Resources Information Center Clearinghouse for Science, Mathematics, and Environmental Education, Columbus, Ohio.
Richard J. Fateman. "Essays in algebraic simplification." Technical report MIT-LCS-TR-095, 1972. (Of historical interest in showing the direction of research in computer algebra. At the MIT LCS website: [1])
In mathematics, a Green's function (or Green function) is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.
This means that if L is a linear differential operator, then
the Green's function G is the solution of the equation LG = δ, where δ is Dirac's delta function;
the solution of the initial-value problem Ly = f is the convolution (G ∗ f).
Through the superposition principle, given a linear ordinary differential equation (ODE), Ly = f, one can first solve LG = δ_s for each s; since the source is a sum of delta functions, the solution is, by linearity of L, the corresponding sum of Green's functions.
Green's functions are named after the British mathematician George Green, who first developed the concept in the 1820s. In the modern study of linear partial differential equations, Green's functions are studied largely from the point of view of fundamental solutions instead.
Under many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics, seismology and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. In quantum field theory, Green's functions take the roles of propagators.
== Definition and uses ==
A Green's function, G(x,s), of a linear differential operator L = L(x) acting on distributions over a subset of the Euclidean space {\displaystyle \mathbb {R} ^{n}}, at a point s, is any solution of
(1) {\displaystyle LG(x,s)=\delta (s-x),}
where δ is the Dirac delta function. This property of a Green's function can be exploited to solve differential equations of the form
(2) {\displaystyle Lu(x)=f(x).}
If the kernel of L is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Green's functions may be categorized by a Green's function number according to the type of boundary conditions being satisfied. Green's functions are not necessarily functions of a real variable but are generally understood in the sense of distributions.
Green's functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, Green's function of the Hamiltonian is a key concept with important links to the concept of density of states.
The Green's function as used in physics is usually defined with the opposite sign, instead; that is,
{\displaystyle LG(x,s)=\delta (x-s)\,.}
This definition does not significantly change any of the properties of the Green's function due to the evenness of the Dirac delta function.
If the operator is translation invariant, that is, when L has constant coefficients with respect to x, then the Green's function can be taken to be a convolution kernel, that is,
{\displaystyle G(x,s)=G(x-s)\,.}
In this case, the Green's function is the same as the impulse response of linear time-invariant system theory.
== Motivation ==
Loosely speaking, if such a function G can be found for the operator L, then, if we multiply equation 1 for the Green's function by f(s) and then integrate with respect to s, we obtain
{\displaystyle \int LG(x,s)\,f(s)\,ds=\int \delta (x-s)\,f(s)\,ds=f(x)\,.}
Because the operator L = L(x) is linear and acts only on the variable x (and not on the variable of integration s), one may take the operator L outside of the integration, yielding
{\displaystyle L\left(\int G(x,s)\,f(s)\,ds\right)=f(x)\,.}
This means that
(3) {\displaystyle u(x)=\int G(x,s)\,f(s)\,ds}
is a solution to the equation Lu(x) = f(x).
Thus, one may obtain the function u(x) through knowledge of the Green's function in equation 1 and the source term on the right-hand side in equation 2. This process relies upon the linearity of the operator L.
In other words, the solution of equation 2, u(x), can be determined by the integration given in equation 3. Although f(x) is known, this integration cannot be performed unless G is also known. The problem now lies in finding the Green's function G that satisfies equation 1. For this reason, the Green's function is also sometimes called the fundamental solution associated to the operator L.
Not every operator L admits a Green's function. A Green's function can also be thought of as a right inverse of L. Aside from the difficulties of finding a Green's function for a particular operator, the integral in equation 3 may be quite difficult to evaluate. However, the method gives a theoretically exact result.
This can be thought of as an expansion of f according to a Dirac delta function basis (projecting f over δ(x − s)) and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.
== Green's functions for solving non-homogeneous boundary value problems ==
The primary use of Green's functions in mathematics is to solve non-homogeneous boundary value problems. In modern theoretical physics, Green's functions are also usually used as propagators in Feynman diagrams; the term Green's function is often further used for any correlation function.
=== Framework ===
Let L be the Sturm–Liouville operator, a linear differential operator of the form
{\displaystyle L={\dfrac {d}{dx}}\left[p(x){\dfrac {d}{dx}}\right]+q(x)}
and let D be the vector-valued boundary conditions operator
{\displaystyle \mathbf {D} u={\begin{bmatrix}\alpha _{1}u'(0)+\beta _{1}u(0)\\\alpha _{2}u'(\ell )+\beta _{2}u(\ell )\end{bmatrix}}\,.}
Let f(x) be a continuous function in [0, ℓ]. Further suppose that the problem
{\displaystyle {\begin{aligned}Lu&=f\\\mathbf {D} u&=\mathbf {0} \end{aligned}}}
is "regular", i.e., the only solution for f(x) = 0 for all x is u(x) = 0.
=== Theorem ===
There is one and only one solution u(x) that satisfies
{\displaystyle {\begin{aligned}Lu&=f\\\mathbf {D} u&=\mathbf {0} \end{aligned}}}
and it is given by
{\displaystyle u(x)=\int _{0}^{\ell }f(s)\,G(x,s)\,ds\,,}
where G(x,s) is a Green's function satisfying the following conditions:
G(x,s) is continuous in x and s.
For x ≠ s, LG(x,s) = 0.
For s ≠ 0, DG(x,s) = 0.
Derivative "jump": G′(s_{0+}, s) − G′(s_{0−}, s) = 1/p(s).
Symmetry: G(x,s) = G(s,x).
=== Advanced and retarded Green's functions ===
The Green's function is not necessarily unique since the addition of any solution of the homogeneous equation to one Green's function results in another Green's function. Therefore, if the homogeneous equation has nontrivial solutions, multiple Green's functions exist. Certain boundary value or initial value problems involve finding a Green's function that is nonvanishing only for s ≤ x; in this case, the solution is sometimes called a retarded Green's function. Similarly, a Green's function that is nonvanishing only for s ≥ x is called an advanced Green's function. In such cases, any linear combination of the two Green's functions is also a valid Green's function. Both the advanced and retarded Green's functions are called one-sided, while a Green's function that is nonvanishing for all x in the domain of definition is called two-sided.
The terminology advanced and retarded is especially useful when the variable x corresponds to time. In such cases, the solution provided by the use of the retarded Green's function depends only on the past sources and is causal whereas the solution provided by the use of the advanced Green's function depends only on the future sources and is acausal. In these problems, it is often the case that the causal solution is the physically important one. However, the advanced Green's function is useful in finding solutions to certain inverse problems where sources are to be found from boundary data. The use of advanced and retarded Green's function is especially common for the analysis of solutions of the inhomogeneous electromagnetic wave equation.
== Finding Green's functions ==
=== Units ===
While it does not uniquely fix the form the Green's function will take, performing a dimensional analysis to find the units a Green's function must have is an important sanity check on any Green's function found through other means. A quick examination of the defining equation,
{\displaystyle LG(x,s)=\delta (x-s),}
shows that the units of G depend not only on the units of L but also on the number and units of the space of which the position vectors x and s are elements. This leads to the relationship:
{\displaystyle [[G]]=[[L]]^{-1}[[dx]]^{-1},}
where [[G]] is defined as "the physical units of G" and dx is the volume element of the space (or spacetime).
For example, if L = ∂_t^2 and time is the only variable, then:
{\displaystyle {\begin{aligned}[][[L]]&=[[{\text{time}}]]^{-2},\\[1ex][[dx]]&=[[{\text{time}}]],\ {\text{and}}\\[1ex][[G]]&=[[{\text{time}}]].\end{aligned}}}
If L = □ = (1/c^2)∂_t^2 − ∇^2, the d'Alembert operator, and space has 3 dimensions, then:
{\displaystyle {\begin{aligned}[][[L]]&=[[{\text{length}}]]^{-2},\\[1ex][[dx]]&=[[{\text{time}}]][[{\text{length}}]]^{3},\ {\text{and}}\\[1ex][[G]]&=[[{\text{time}}]]^{-1}[[{\text{length}}]]^{-1}.\end{aligned}}}
=== Eigenvalue expansions ===
If a differential operator L admits a set of eigenvectors Ψn(x) (i.e., a set of functions Ψn and scalars λn such that LΨn = λn Ψn ) that is complete, then it is possible to construct a Green's function from these eigenvectors and eigenvalues.
"Complete" means that the set of functions {Ψn} satisfies the following completeness relation,
{\displaystyle \delta (x-x')=\sum _{n=0}^{\infty }\Psi _{n}^{\dagger }(x')\Psi _{n}(x).}
Then the following holds:
{\displaystyle G(x,x')=\sum _{n=0}^{\infty }{\dfrac {\Psi _{n}^{\dagger }(x')\Psi _{n}(x)}{\lambda _{n}}},}
where † represents complex conjugation.
Applying the operator L to each side of this equation results in the completeness relation, which was assumed.
The general study of Green's function written in the above form, and its relationship to the function spaces formed by the eigenvectors, is known as Fredholm theory.
There are several other methods for finding Green's functions, including the method of images, separation of variables, and Laplace transforms.
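As a concrete illustration, the following sketch builds a Green's function by the eigenvalue expansion and compares it with a known closed form. The operator $L = -d^2/dx^2$ on $[0, \pi]$ with Dirichlet boundary conditions, the truncation at 2000 terms, and the test point are all assumptions made for the example; the eigenfunctions here are real, so the conjugation is trivial.

```python
import numpy as np

# Sketch: eigenvalue expansion for L = -d^2/dx^2 on [0, pi] with Dirichlet
# boundary conditions (an illustrative choice, not fixed by the text).
# Eigenfunctions psi_n(x) = sqrt(2/pi) sin(n x), eigenvalues lam_n = n^2, so
# G(x, s) = sum_n psi_n(s) psi_n(x) / lam_n.

def G_series(x, s, n_terms=2000):
    n = np.arange(1, n_terms + 1)
    return (2.0 / np.pi) * np.sum(np.sin(n * x) * np.sin(n * s) / n**2)

def G_exact(x, s):
    # Closed form for the same problem: x (pi - s) / pi for x <= s, symmetric.
    lo, hi = min(x, s), max(x, s)
    return lo * (np.pi - hi) / np.pi

print(G_series(1.0, 2.0))   # ~0.3634
print(G_exact(1.0, 2.0))    # 1 * (pi - 2) / pi ~ 0.3634
```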
=== Combining Green's functions ===
If the differential operator $L$ can be factored as $L = L_1 L_2$, then the Green's function of $L$ can be constructed from the Green's functions for $L_1$ and $L_2$:
$$G(x, s) = \int G_2(x, s_1)\,G_1(s_1, s)\,ds_1.$$
The above identity follows immediately from taking $G(x, s)$ to be the representation of the right operator inverse of $L$, analogous to how the invertible linear operator $C = (AB)^{-1} = B^{-1}A^{-1}$ is represented by its matrix elements $C_{i,j}$.
A further identity follows for differential operators that are scalar polynomials of the derivative, $L = P_N(\partial_x)$. The fundamental theorem of algebra, combined with the fact that $\partial_x$ commutes with itself, guarantees that the polynomial can be factored, putting $L$ in the form:
$$L = \prod_{i=1}^{N} \left(\partial_x - z_i\right),$$
where $z_i$ are the zeros of $P_N(z)$. Taking the Fourier transform of $LG(x, s) = \delta(x - s)$ with respect to both $x$ and $s$ gives:
$$\widehat{G}(k_x, k_s) = \frac{\delta(k_x - k_s)}{\prod_{i=1}^{N} (ik_x - z_i)}.$$
The fraction can then be split into a sum using a partial fraction decomposition before Fourier transforming back to $x$ and $s$ space. This process yields identities that relate integrals of Green's functions to sums of the same. For example, if $L = (\partial_x + \gamma)(\partial_x + \alpha)^2$, then one form for its Green's function is:
$$\begin{aligned}G(x, s) &= \frac{1}{(\gamma - \alpha)^2}\Theta(x - s)\,e^{-\gamma(x - s)} - \frac{1}{(\gamma - \alpha)^2}\Theta(x - s)\,e^{-\alpha(x - s)} + \frac{1}{\gamma - \alpha}\Theta(x - s)\,(x - s)\,e^{-\alpha(x - s)}\\ &= \int \Theta(x - s_1)\,(x - s_1)\,e^{-\alpha(x - s_1)}\,\Theta(s_1 - s)\,e^{-\gamma(s_1 - s)}\,ds_1.\end{aligned}$$
While the example presented is tractable analytically, it illustrates a process that works when the integral is not trivial (for example, when $\nabla^2$ is the operator in the polynomial).
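The identity above can also be checked numerically. The sketch below convolves the Green's functions of the two factors and compares the result with the closed form; the values of $\alpha$, $\gamma$, and the evaluation points are arbitrary test choices, not anything fixed by the text.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: numerical check of the identity above. G1 is the Green's function
# of (d/dx + gamma), G2 that of (d/dx + alpha)^2.
alpha, gamma = 1.0, 2.5

def G1(x, s):
    return np.heaviside(x - s, 0.0) * np.exp(-gamma * (x - s))

def G2(x, s):
    return np.heaviside(x - s, 0.0) * (x - s) * np.exp(-alpha * (x - s))

def G_convolved(x, s):
    # The step functions confine the integrand to s <= s1 <= x.
    val, _ = quad(lambda s1: G2(x, s1) * G1(s1, s), s, x)
    return val

def G_closed(x, s):
    d = x - s
    if d < 0.0:
        return 0.0
    return ((np.exp(-gamma * d) - np.exp(-alpha * d)) / (gamma - alpha)**2
            + d * np.exp(-alpha * d) / (gamma - alpha))

print(G_convolved(2.0, 0.5), G_closed(2.0, 0.5))   # the two values agree
```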
=== Table of Green's functions ===
The following table gives an overview of Green's functions of frequently appearing differential operators, where $r = \sqrt{x^2 + y^2 + z^2}$, $\rho = \sqrt{x^2 + y^2}$, $\Theta(t)$ is the Heaviside step function, $J_\nu(z)$ is a Bessel function, $I_\nu(z)$ is a modified Bessel function of the first kind, and $K_\nu(z)$ is a modified Bessel function of the second kind. Where time ($t$) appears in the first column, the retarded (causal) Green's function is listed.
== Green's functions for the Laplacian ==
Green's functions for linear differential operators involving the Laplacian may readily be put to use via the second of Green's identities.
To derive Green's theorem, begin with the divergence theorem (otherwise known as Gauss's theorem),
$$\int_V \nabla \cdot \mathbf{A}\,dV = \int_S \mathbf{A} \cdot d\hat{\boldsymbol{\sigma}}.$$
Let $\mathbf{A} = \varphi\,\nabla\psi - \psi\,\nabla\varphi$ and substitute into Gauss' law. Compute $\nabla \cdot \mathbf{A}$ and apply the product rule for the $\nabla$ operator:
$$\begin{aligned}\nabla \cdot \mathbf{A} &= \nabla \cdot (\varphi\,\nabla\psi - \psi\,\nabla\varphi)\\ &= (\nabla\varphi) \cdot (\nabla\psi) + \varphi\,\nabla^2\psi - (\nabla\varphi) \cdot (\nabla\psi) - \psi\,\nabla^2\varphi\\ &= \varphi\,\nabla^2\psi - \psi\,\nabla^2\varphi.\end{aligned}$$
Plugging this into the divergence theorem produces Green's theorem,
$$\int_V \left(\varphi\,\nabla^2\psi - \psi\,\nabla^2\varphi\right) dV = \int_S \left(\varphi\,\nabla\psi - \psi\,\nabla\varphi\right) \cdot d\hat{\boldsymbol{\sigma}}.$$
Suppose that the linear differential operator L is the Laplacian, ∇2, and that there is a Green's function G for the Laplacian. The defining property of the Green's function still holds,
$$LG(\mathbf{x}, \mathbf{x}') = \nabla^2 G(\mathbf{x}, \mathbf{x}') = \delta(\mathbf{x} - \mathbf{x}').$$
Let $\psi = G$ in Green's second identity (see Green's identities). Then,
$$\int_V \left[\varphi(\mathbf{x}')\,\delta(\mathbf{x} - \mathbf{x}') - G(\mathbf{x}, \mathbf{x}')\,\nabla'^2\varphi(\mathbf{x}')\right] d^3\mathbf{x}' = \int_S \left[\varphi(\mathbf{x}')\,\nabla' G(\mathbf{x}, \mathbf{x}') - G(\mathbf{x}, \mathbf{x}')\,\nabla'\varphi(\mathbf{x}')\right] \cdot d\hat{\boldsymbol{\sigma}}'.$$
Using this expression, it is possible to solve Laplace's equation ∇2φ(x) = 0 or Poisson's equation ∇2φ(x) = −ρ(x), subject to either Neumann or Dirichlet boundary conditions. In other words, we can solve for φ(x) everywhere inside a volume where either (1) the value of φ(x) is specified on the bounding surface of the volume (Dirichlet boundary conditions), or (2) the normal derivative of φ(x) is specified on the bounding surface (Neumann boundary conditions).
Suppose the problem is to solve for φ(x) inside the region. Then the integral
$$\int_V \varphi(\mathbf{x}')\,\delta(\mathbf{x} - \mathbf{x}')\,d^3\mathbf{x}'$$
reduces to simply φ(x) due to the defining property of the Dirac delta function and we have
$$\varphi(\mathbf{x}) = -\int_V G(\mathbf{x}, \mathbf{x}')\,\rho(\mathbf{x}')\,d^3\mathbf{x}' + \int_S \left[\varphi(\mathbf{x}')\,\nabla' G(\mathbf{x}, \mathbf{x}') - G(\mathbf{x}, \mathbf{x}')\,\nabla'\varphi(\mathbf{x}')\right] \cdot d\hat{\boldsymbol{\sigma}}'.$$
This form expresses the well-known property of harmonic functions, that if the value or normal derivative is known on a bounding surface, then the value of the function inside the volume is known everywhere.
In electrostatics, φ(x) is interpreted as the electric potential, ρ(x) as electric charge density, and the normal derivative
$\nabla\varphi(\mathbf{x}') \cdot d\hat{\boldsymbol{\sigma}}'$ as the normal component of the electric field.
If the problem is to solve a Dirichlet boundary value problem, the Green's function should be chosen such that G(x,x′) vanishes when either x or x′ is on the bounding surface. Thus only one of the two terms in the surface integral remains. If the problem is to solve a Neumann boundary value problem, it might seem logical to choose Green's function so that its normal derivative vanishes on the bounding surface. However, application of Gauss's theorem to the differential equation defining the Green's function yields
$$\int_S \nabla' G(\mathbf{x}, \mathbf{x}') \cdot d\hat{\boldsymbol{\sigma}}' = \int_V \nabla'^2 G(\mathbf{x}, \mathbf{x}')\,d^3\mathbf{x}' = \int_V \delta(\mathbf{x} - \mathbf{x}')\,d^3\mathbf{x}' = 1,$$
meaning the normal derivative of G(x,x′) cannot vanish on the surface, because it must integrate to 1 on the surface.
The simplest form the normal derivative can take is that of a constant, namely 1/S, where S is the surface area of the surface. The surface term in the solution becomes
$$\int_S \varphi(\mathbf{x}')\,\nabla' G(\mathbf{x}, \mathbf{x}') \cdot d\hat{\boldsymbol{\sigma}}' = \langle\varphi\rangle_S,$$
where $\langle\varphi\rangle_S$ is the average value of the potential on the surface. This number is not known in general, but is often unimportant, as the goal is often to obtain the electric field given by the gradient of the potential, rather than the potential itself.
With no boundary conditions, the Green's function for the Laplacian (Green's function for the three-variable Laplace equation) is
$$G(\mathbf{x}, \mathbf{x}') = -\frac{1}{4\pi\,|\mathbf{x} - \mathbf{x}'|}.$$
Supposing that the bounding surface goes out to infinity and plugging in this expression for the Green's function finally yields the standard expression for electric potential in terms of electric charge density as
$$\varphi(\mathbf{x}) = \int_V \frac{\rho(\mathbf{x}')}{4\pi\,|\mathbf{x} - \mathbf{x}'|}\,d^3\mathbf{x}'.$$
== Example ==
Find the Green's function for the following problem, whose Green's function number is X11:
$$\begin{aligned}Lu &= u'' + k^2 u = f(x),\\ u(0) &= 0, \quad u\!\left(\tfrac{\pi}{2k}\right) = 0.\end{aligned}$$
First step: The Green's function for the linear operator at hand is defined as the solution to
$$G''(x, s) + k^2 G(x, s) = \delta(x - s). \qquad (*)$$
If $x \neq s$, then the delta function gives zero, and the general solution is
$$G(x, s) = c_1 \cos kx + c_2 \sin kx.$$
For $x < s$, the boundary condition at $x = 0$ implies
$$G(0, s) = c_1 \cdot 1 + c_2 \cdot 0 = 0, \quad c_1 = 0$$
if $x < s$ and $s \neq \tfrac{\pi}{2k}$.
For $x > s$, the boundary condition at $x = \tfrac{\pi}{2k}$ implies
$$G\!\left(\tfrac{\pi}{2k}, s\right) = c_3 \cdot 0 + c_4 \cdot 1 = 0, \quad c_4 = 0.$$
The equation $G(0, s) = 0$ is skipped for similar reasons.
To summarize the results thus far:
$$G(x, s) = \begin{cases} c_2 \sin kx, & \text{for } x < s,\\ c_3 \cos kx, & \text{for } s < x. \end{cases}$$
Second step: The next task is to determine $c_2$ and $c_3$.
Ensuring continuity in the Green's function at $x = s$ implies
$$c_2 \sin ks = c_3 \cos ks.$$
One can ensure the proper discontinuity in the first derivative by integrating the defining differential equation (i.e., Eq. $(*)$) from $x = s - \varepsilon$ to $x = s + \varepsilon$ and taking the limit as $\varepsilon$ goes to zero. Note that we only integrate the second derivative, as the remaining term will be continuous by construction:
$$c_3 \cdot (-k \sin ks) - c_2 \cdot (k \cos ks) = 1.$$
The two (dis)continuity equations can be solved for $c_2$ and $c_3$ to obtain
$$c_2 = -\frac{\cos ks}{k}, \qquad c_3 = -\frac{\sin ks}{k},$$
so the Green's function for this problem is:
$$G(x, s) = \begin{cases} -\dfrac{\cos ks}{k}\,\sin kx, & x < s,\\[1ex] -\dfrac{\sin ks}{k}\,\cos kx, & s < x. \end{cases}$$
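A quick numerical sanity check of this result is possible: the sketch below forms $u(x) = \int_0^{\pi/2k} G(x, s) f(s)\,ds$ for an arbitrary test forcing and verifies the boundary conditions and the differential equation by finite differences. The choices $f(s) = s$ and $k = 1$ are assumptions for the example.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: check that u(x) = int_0^b G(x, s) f(s) ds, with b = pi/(2k), solves
# u'' + k^2 u = f with u(0) = u(b) = 0.
k = 1.0
b = np.pi / (2.0 * k)
f = lambda s: s

def G(x, s):
    if x < s:
        return -np.cos(k * s) / k * np.sin(k * x)
    return -np.sin(k * s) / k * np.cos(k * x)

def u(x):
    left, _ = quad(lambda s: G(x, s) * f(s), 0.0, x)    # region s < x
    right, _ = quad(lambda s: G(x, s) * f(s), x, b)     # region s > x
    return left + right

print(u(0.0), u(b))                    # boundary conditions: both ~0
xp, h = 0.7, 1e-3
u_xx = (u(xp + h) - 2.0 * u(xp) + u(xp - h)) / h**2
print(u_xx + k**2 * u(xp) - f(xp))     # ODE residual at an interior point, ~0
```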
== Further examples ==
Let $n = 1$ and let the subset be all of $\mathbb{R}$. Let $L$ be $\frac{d}{dx}$. Then, the Heaviside step function $\Theta(x - x_0)$ is a Green's function of $L$ at $x_0$.
Let n = 2 and let the subset be the quarter-plane {(x, y) : x, y ≥ 0} and L be the Laplacian. Also, assume a Dirichlet boundary condition is imposed at x = 0 and a Neumann boundary condition is imposed at y = 0. Then the X10Y20 Green's function is
$$\begin{aligned}G(x, y, x_0, y_0) = \frac{1}{2\pi}\Big[&\ln\sqrt{(x - x_0)^2 + (y - y_0)^2} - \ln\sqrt{(x + x_0)^2 + (y - y_0)^2}\\ &+ \ln\sqrt{(x - x_0)^2 + (y + y_0)^2} - \ln\sqrt{(x + x_0)^2 + (y + y_0)^2}\,\Big].\end{aligned}$$
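The two boundary conditions can be verified directly; the following sketch checks that this $G$ vanishes on $x = 0$ (Dirichlet) and has vanishing normal derivative on $y = 0$ (Neumann). The source point is an arbitrary test choice.

```python
import numpy as np

# Sketch: check the two boundary conditions of the X10Y20 quarter-plane
# Green's function above. The source point (x0, y0) is an arbitrary choice.
x0, y0 = 0.7, 1.2

def G(x, y):
    lnr = lambda a, b: np.log(np.hypot(a, b))   # ln sqrt(a^2 + b^2)
    return (lnr(x - x0, y - y0) - lnr(x + x0, y - y0)
            + lnr(x - x0, y + y0) - lnr(x + x0, y + y0)) / (2.0 * np.pi)

print(G(0.0, 2.0))                         # Dirichlet at x = 0: exactly 0
h = 1e-6
print((G(1.5, h) - G(1.5, -h)) / (2 * h))  # Neumann at y = 0: ~0
```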
Let $a < x < b$, where all three are elements of the real numbers. Then, for any function $f : \mathbb{R} \to \mathbb{R}$ with an $n$-th derivative that is integrable over the interval $[a, b]$:
$$f(x) = \sum_{m=0}^{n-1} \frac{(x - a)^m}{m!} \left[\frac{d^m f}{dx^m}\right]_{x=a} + \int_a^b \left[\frac{(x - s)^{n-1}}{(n - 1)!}\,\Theta(x - s)\right] \left[\frac{d^n f}{dx^n}\right]_{x=s} ds.$$
The Green's function in the above equation,
$$G(x, s) = \frac{(x - s)^{n-1}}{(n - 1)!}\,\Theta(x - s),$$
is not unique. How is the equation modified if $g(x - s)$ is added to $G(x, s)$, where $g(x)$ satisfies $\frac{d^n g}{dx^n} = 0$ for all $x \in [a, b]$ (for example, $g(x) = -x/2$ with $n = 2$)? Also, compare the above equation to the form of a Taylor series centered at $x = a$.
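The following sketch verifies the reconstruction formula for $n = 2$ with an arbitrary test function ($f = \sin$ on $[0, 2]$, both assumptions made for the example); the sum is the two-term Taylor polynomial and the integral carries the remainder.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: verify the reconstruction formula for n = 2 with the arbitrary test
# function f = sin on [a, b] = [0, 2]. The Theta factor restricts the
# integral to s < x, so we integrate over [a, x] directly.
a, b = 0.0, 2.0
f, df, d2f = np.sin, np.cos, lambda s: -np.sin(s)

def reconstruct(x):
    taylor = f(a) + df(a) * (x - a)           # the m = 0, 1 boundary terms
    integral, _ = quad(lambda s: (x - s) * d2f(s), a, x)
    return taylor + integral

x = 1.3
print(reconstruct(x), np.sin(x))   # the two values agree
```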
== See also ==
== Footnotes ==
== References ==
=== Cited works ===
== External links ==
"Green function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Green's Function". MathWorld.
Green's function for differential operator at PlanetMath.
Green's function at PlanetMath.
Green functions and conformal mapping at PlanetMath.
Introduction to the Keldysh Nonequilibrium Green Function Technique by A. P. Jauho
Green's Function Library
Tutorial on Green's functions
Boundary Element Method (for some idea on how Green's functions may be used with the boundary element method for solving potential problems numerically) Archived 2012-02-07 at the Wayback Machine
At Citizendium
MIT video lecture on Green's function
Bowley, Roger. "George Green & Green's Functions". Sixty Symbols. Brady Haran for the University of Nottingham.
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.
Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, stating that at any point where a differentiable function attains a local extremum its derivative is zero.
In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context Euler equations are usually called Lagrange equations. In classical mechanics, it is equivalent to Newton's laws of motion; indeed, the Euler-Lagrange equations will produce the same equations as Newton's Laws. This is particularly useful when analyzing systems whose force vectors are particularly complicated. It has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field.
== History ==
The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.
Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766.
== Statement ==
Let $(X, L)$ be a real dynamical system with $n$ degrees of freedom. Here $X$ is the configuration space and $L = L(t, \boldsymbol{q}(t), \boldsymbol{v}(t))$ the Lagrangian, i.e. a smooth real-valued function such that $\boldsymbol{q}(t) \in X$, and $\boldsymbol{v}(t)$ is an $n$-dimensional "vector of speed". (For those familiar with differential geometry, $X$ is a smooth manifold, and $L : \mathbb{R}_t \times X \times TX \to \mathbb{R}$, where $TX$ is the tangent bundle of $X$.)
Let $\mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b)$ be the set of smooth paths $\boldsymbol{q} : [a, b] \to X$ for which $\boldsymbol{q}(a) = \boldsymbol{x}_a$ and $\boldsymbol{q}(b) = \boldsymbol{x}_b.$
The action functional $S : \mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b) \to \mathbb{R}$ is defined via
$$S[\boldsymbol{q}] = \int_a^b L(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t))\,dt.$$
A path $\boldsymbol{q} \in \mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b)$ is a stationary point of $S$ if and only if it satisfies the Euler–Lagrange equations
$$\frac{\partial L}{\partial q^i}(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t)) - \frac{d}{dt}\frac{\partial L}{\partial v^i}(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t)) = 0, \qquad i = 1, \dots, n.$$
Here, $\dot{\boldsymbol{q}}(t)$ is the time derivative of $\boldsymbol{q}(t)$. When we say stationary point, we mean a stationary point of $S$ with respect to any small perturbation in $\boldsymbol{q}$. See proofs below for more rigorous detail.
== Example ==
A standard example is finding the real-valued function y(x) on the interval [a, b], such that y(a) = c and y(b) = d, for which the path length along the curve traced by y is as short as possible.
$$s = \int_a^b \sqrt{dx^2 + dy^2} = \int_a^b \sqrt{1 + y'^2}\,dx,$$
the integrand function being $L(x, y, y') = \sqrt{1 + y'^2}$.
The partial derivatives of L are:
$$\frac{\partial L(x, y, y')}{\partial y'} = \frac{y'}{\sqrt{1 + y'^2}} \quad\text{and}\quad \frac{\partial L(x, y, y')}{\partial y} = 0.$$
By substituting these into the Euler–Lagrange equation, we obtain
$$\begin{aligned}\frac{d}{dx}\,\frac{y'(x)}{\sqrt{1 + (y'(x))^2}} &= 0\\ \frac{y'(x)}{\sqrt{1 + (y'(x))^2}} &= C = \text{constant}\\ \Rightarrow\ y'(x) &= \frac{C}{\sqrt{1 - C^2}} =: A\\ \Rightarrow\ y(x) &= Ax + B\end{aligned}$$
that is, the function must have a constant first derivative, and thus its graph is a straight line.
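For readers who want to reproduce this computation symbolically, the sketch below derives the same Euler–Lagrange equation with SymPy (using its `euler_equations` helper; the variable names are our own choices). The resulting ODE is equivalent to $y'' = 0$, whose solutions are straight lines.

```python
from sympy import Function, Symbol, sqrt
from sympy.calculus.euler import euler_equations

# Sketch: derive the Euler-Lagrange equation for the arc-length integrand
# L = sqrt(1 + y'(x)^2) symbolically.
x = Symbol('x')
y = Function('y')

L = sqrt(1 + y(x).diff(x)**2)
eqs = euler_equations(L, y(x), x)
print(eqs)
# Expect an equation equivalent to y''(x) / (1 + y'(x)**2)**(3/2) = 0,
# i.e. y'' = 0: the extremals are straight lines y = A x + B.
```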
== Generalizations ==
=== Single function of single variable with higher derivatives ===
The stationary values of the functional
$$I[f] = \int_{x_0}^{x_1} \mathcal{L}(x, f, f', f'', \dots, f^{(k)})\,dx; \qquad f' := \frac{df}{dx},\ \ f'' := \frac{d^2 f}{dx^2},\ \ f^{(k)} := \frac{d^k f}{dx^k}$$
can be obtained from the Euler–Lagrange equation
$$\frac{\partial \mathcal{L}}{\partial f} - \frac{d}{dx}\left(\frac{\partial \mathcal{L}}{\partial f'}\right) + \frac{d^2}{dx^2}\left(\frac{\partial \mathcal{L}}{\partial f''}\right) - \dots + (-1)^k \frac{d^k}{dx^k}\left(\frac{\partial \mathcal{L}}{\partial f^{(k)}}\right) = 0$$
under fixed boundary conditions for the function itself as well as for the first $k - 1$ derivatives (i.e. for all $f^{(i)}$, $i \in \{0, \dots, k - 1\}$). The endpoint values of the highest derivative $f^{(k)}$ remain flexible.
=== Several functions of single variable with single derivative ===
If the problem involves finding several functions ($f_1, f_2, \dots, f_m$) of a single independent variable ($x$) that define an extremum of the functional
$$I[f_1, f_2, \dots, f_m] = \int_{x_0}^{x_1} \mathcal{L}(x, f_1, f_2, \dots, f_m, f_1', f_2', \dots, f_m')\,dx; \qquad f_i' := \frac{df_i}{dx}$$
then the corresponding Euler–Lagrange equations are
$$\frac{\partial \mathcal{L}}{\partial f_i} - \frac{d}{dx}\left(\frac{\partial \mathcal{L}}{\partial f_i'}\right) = 0; \qquad i = 1, 2, \dots, m.$$
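A minimal sketch of the $m = 2$ case, using SymPy's `euler_equations` on an illustrative projectile Lagrangian $\mathcal{L} = (\dot{x}^2 + \dot{y}^2)/2 - g\,y$ (our own choice, not from the text); one Euler–Lagrange equation is returned per unknown function, matching the system above.

```python
from sympy import Function, symbols
from sympy.calculus.euler import euler_equations

# Sketch: two unknown functions x(t), y(t) of a single variable t, with an
# illustrative projectile Lagrangian L = (x'^2 + y'^2)/2 - g*y.
t, g = symbols('t g')
x, y = Function('x'), Function('y')

L = (x(t).diff(t)**2 + y(t).diff(t)**2) / 2 - g * y(t)
for eq in euler_equations(L, [x(t), y(t)], t):
    print(eq)
# Expect: x''(t) = 0 and y''(t) = -g (Newton's equations for the projectile).
```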
=== Single function of several variables with single derivative ===
A multi-dimensional generalization comes from considering a function of $n$ variables. If $\Omega$ is some surface, then
$$I[f] = \int_\Omega \mathcal{L}(x_1, \dots, x_n, f, f_1, \dots, f_n)\,d\mathbf{x}; \qquad f_j := \frac{\partial f}{\partial x_j}$$
is extremized only if f satisfies the partial differential equation
$$\frac{\partial \mathcal{L}}{\partial f} - \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left(\frac{\partial \mathcal{L}}{\partial f_j}\right) = 0.$$
When $n = 2$ and the functional $\mathcal{I}$ is the energy functional, this leads to the soap-film minimal surface problem.
=== Several functions of several variables with single derivative ===
If there are several unknown functions to be determined and several variables such that
$$I[f_1, f_2, \dots, f_m] = \int_\Omega \mathcal{L}(x_1, \dots, x_n, f_1, \dots, f_m, f_{1,1}, \dots, f_{1,n}, \dots, f_{m,1}, \dots, f_{m,n})\,d\mathbf{x}; \qquad f_{i,j} := \frac{\partial f_i}{\partial x_j}$$
the system of Euler–Lagrange equations is
$$\begin{aligned}\frac{\partial \mathcal{L}}{\partial f_1} - \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left(\frac{\partial \mathcal{L}}{\partial f_{1,j}}\right) &= 0,\\ \frac{\partial \mathcal{L}}{\partial f_2} - \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left(\frac{\partial \mathcal{L}}{\partial f_{2,j}}\right) &= 0,\\ &\ \ \vdots\\ \frac{\partial \mathcal{L}}{\partial f_m} - \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left(\frac{\partial \mathcal{L}}{\partial f_{m,j}}\right) &= 0.\end{aligned}$$
=== Single function of two variables with higher derivatives ===
If there is a single unknown function f to be determined that is dependent on two variables x1 and x2 and if the functional depends on higher derivatives of f up to n-th order such that
$$I[f] = \int_\Omega \mathcal{L}(x_1, x_2, f, f_1, f_2, f_{11}, f_{12}, f_{22}, \dots, f_{22\dots 2})\,d\mathbf{x}; \qquad f_i := \frac{\partial f}{\partial x_i},\ \ f_{ij} := \frac{\partial^2 f}{\partial x_i \partial x_j},\ \dots$$
then the Euler–Lagrange equation is
$$\begin{aligned}\frac{\partial \mathcal{L}}{\partial f} &- \frac{\partial}{\partial x_1}\left(\frac{\partial \mathcal{L}}{\partial f_1}\right) - \frac{\partial}{\partial x_2}\left(\frac{\partial \mathcal{L}}{\partial f_2}\right) + \frac{\partial^2}{\partial x_1^2}\left(\frac{\partial \mathcal{L}}{\partial f_{11}}\right) + \frac{\partial^2}{\partial x_1 \partial x_2}\left(\frac{\partial \mathcal{L}}{\partial f_{12}}\right) + \frac{\partial^2}{\partial x_2^2}\left(\frac{\partial \mathcal{L}}{\partial f_{22}}\right)\\ &- \dots + (-1)^n \frac{\partial^n}{\partial x_2^n}\left(\frac{\partial \mathcal{L}}{\partial f_{22\dots 2}}\right) = 0,\end{aligned}$$
which can be represented shortly as:
$$\frac{\partial \mathcal{L}}{\partial f} + \sum_{j=1}^{n} \sum_{\mu_1 \leq \dots \leq \mu_j} (-1)^j \frac{\partial^j}{\partial x_{\mu_1} \dots \partial x_{\mu_j}}\left(\frac{\partial \mathcal{L}}{\partial f_{\mu_1 \dots \mu_j}}\right) = 0,$$
wherein $\mu_1 \dots \mu_j$ are indices that span the number of variables, that is, here they go from 1 to 2. Here the summation over the $\mu_1 \dots \mu_j$ indices is only over $\mu_1 \leq \mu_2 \leq \dots \leq \mu_j$ in order to avoid counting the same partial derivative multiple times; for example, $f_{12} = f_{21}$ appears only once in the previous equation.
=== Several functions of several variables with higher derivatives ===
If there are p unknown functions fi to be determined that are dependent on m variables x1 ... xm and if the functional depends on higher derivatives of the fi up to n-th order such that
$$\begin{aligned}I[f_1, \dots, f_p] = \int_\Omega \mathcal{L}\,(&x_1, \dots, x_m;\ f_1, \dots, f_p;\ f_{1,1}, \dots, f_{p,m};\ f_{1,11}, \dots, f_{p,mm};\ \dots;\ f_{p,1\dots 1}, \dots, f_{p,m\dots m})\,d\mathbf{x};\\ &f_{i,\mu} := \frac{\partial f_i}{\partial x_\mu}, \quad f_{i,\mu_1\mu_2} := \frac{\partial^2 f_i}{\partial x_{\mu_1} \partial x_{\mu_2}},\ \dots\end{aligned}$$
where $\mu_1 \dots \mu_j$ are indices that span the number of variables, that is, they go from 1 to $m$. Then the Euler–Lagrange equation is
$$\frac{\partial \mathcal{L}}{\partial f_i} + \sum_{j=1}^{n} \sum_{\mu_1 \leq \dots \leq \mu_j} (-1)^j \frac{\partial^j}{\partial x_{\mu_1} \dots \partial x_{\mu_j}}\left(\frac{\partial \mathcal{L}}{\partial f_{i,\mu_1 \dots \mu_j}}\right) = 0,$$
where the summation over the $\mu_1 \dots \mu_j$ avoids counting the same derivative $f_{i,\mu_1\mu_2} = f_{i,\mu_2\mu_1}$ several times, just as in the previous subsection. This can be expressed more compactly as
$$\sum_{j=0}^{n} \sum_{\mu_1 \leq \dots \leq \mu_j} (-1)^j \partial_{\mu_1 \dots \mu_j}^{j}\left(\frac{\partial \mathcal{L}}{\partial f_{i,\mu_1 \dots \mu_j}}\right) = 0.$$
=== Field theories ===
== Generalization to manifolds ==
Let $M$ be a smooth manifold, and let $C^\infty([a, b])$ denote the space of smooth functions $f : [a, b] \to M$. Then, for functionals $S : C^\infty([a, b]) \to \mathbb{R}$ of the form
$$S[f] = \int_a^b (L \circ \dot{f})(t)\,dt,$$
where $L : TM \to \mathbb{R}$ is the Lagrangian, the statement $dS_f = 0$ is equivalent to the statement that, for all $t \in [a, b]$, each coordinate frame trivialization $(x^i, X^i)$ of a neighborhood of $\dot{f}(t)$ yields the following $\dim M$ equations:
$$\forall i:\ \frac{d}{dt}\frac{\partial L}{\partial X^i}\bigg|_{\dot{f}(t)} = \frac{\partial L}{\partial x^i}\bigg|_{\dot{f}(t)}.$$
Euler–Lagrange equations can also be written in a coordinate-free form as
$$\mathcal{L}_\Delta \theta_L = dL,$$
where $\theta_L$ is the canonical momentum 1-form corresponding to the Lagrangian $L$. The vector field generating time translations is denoted by $\Delta$ and the Lie derivative is denoted by $\mathcal{L}$. One can use local charts $(q^\alpha, \dot{q}^\alpha)$, in which
$$\theta_L = \frac{\partial L}{\partial \dot{q}^\alpha}\,dq^\alpha \qquad\text{and}\qquad \Delta := \frac{d}{dt} = \dot{q}^\alpha \frac{\partial}{\partial q^\alpha} + \ddot{q}^\alpha \frac{\partial}{\partial \dot{q}^\alpha},$$
and use coordinate expressions for the Lie derivative to see the equivalence with the coordinate expressions of the Euler–Lagrange equation. The coordinate-free form is particularly suitable for the geometrical interpretation of the Euler–Lagrange equations.
== See also ==
Lagrangian mechanics
Hamiltonian mechanics
Analytical mechanics
Beltrami identity
Functional derivative
== Notes ==
== References ==
"Lagrange equations (in mechanics)", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Euler-Lagrange Differential Equation". MathWorld.
Calculus of Variations at PlanetMath.
Gelfand, Izrail Moiseevich (1963). Calculus of Variations. Dover. ISBN 0-486-41448-5.
Roubicek, T.: Calculus of variations. Chap. 17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, ISBN 978-3-527-41188-7, pp. 551–588.
In mathematics, a differential-algebraic system of equations (DAE) is a system of equations that either contains differential equations and algebraic equations, or is equivalent to such a system.
The set of the solutions of such a system is a differential algebraic variety, and corresponds to an ideal in a differential algebra of differential polynomials.
In the univariate case, a DAE in the variable t can be written as a single equation of the form
$$F(\dot{x}, x, t) = 0,$$
where $x(t)$ is a vector of unknown functions and the overdot denotes the time derivative, i.e., $\dot{x} = \frac{dx}{dt}$.
They are distinct from ordinary differential equations (ODEs) in that a DAE is not completely solvable for the derivatives of all components of the function $x$ because these may not all appear (i.e. some equations are algebraic); technically the distinction between an implicit ODE system [that may be rendered explicit] and a DAE system is that the Jacobian matrix $\frac{\partial F(\dot{x}, x, t)}{\partial \dot{x}}$ is a singular matrix for a DAE system. This distinction between ODEs and DAEs is made because DAEs have different characteristics and are generally more difficult to solve.
In practical terms, the distinction between DAEs and ODEs is often that the solution of a DAE system depends on the derivatives of the input signal and not just the signal itself as in the case of ODEs; this issue is commonly encountered in nonlinear systems with hysteresis, such as the Schmitt trigger.
This difference is more clearly visible if the system may be rewritten so that instead of $x$ we consider a pair $(x, y)$ of vectors of dependent variables and the DAE has the form
$$\begin{aligned}\dot{x}(t) &= f(x(t), y(t), t),\\ 0 &= g(x(t), y(t), t),\end{aligned}$$
where $x(t) \in \mathbb{R}^n$, $y(t) \in \mathbb{R}^m$, $f : \mathbb{R}^{n+m+1} \to \mathbb{R}^n$ and $g : \mathbb{R}^{n+m+1} \to \mathbb{R}^m$.
A DAE system of this form is called semi-explicit. Every solution of the second half g of the equation defines a unique direction for x via the first half f of the equations, while the direction for y is arbitrary. But not every point (x,y,t) is a solution of g. The variables in x and the first half f of the equations get the attribute differential. The components of y and the second half g of the equations are called the algebraic variables or equations of the system. [The term algebraic in the context of DAEs only means free of derivatives and is not related to (abstract) algebra.]
The solution of a DAE consists of two parts, first the search for consistent initial values and second the computation of a trajectory. To find consistent initial values it is often necessary to consider the derivatives of some of the component functions of the DAE. The highest order of a derivative that is necessary for this process is called the differentiation index. The equations derived in computing the index and consistent initial values may also be of use in the computation of the trajectory. A semi-explicit DAE system can be converted to an implicit one by decreasing the differentiation index by one, and vice versa.
== Other forms of DAEs ==
The distinction of DAEs to ODEs becomes apparent if some of the dependent variables occur without their derivatives. The vector of dependent variables may then be written as pair
$(x, y)$ and the system of differential equations of the DAE appears in the form
$$F\left(\dot{x}, x, y, t\right) = 0,$$
where
$x$, a vector in $\mathbb{R}^n$, are dependent variables for which derivatives are present (differential variables),
$y$, a vector in $\mathbb{R}^m$, are dependent variables for which no derivatives are present (algebraic variables),
$t$, a scalar (usually time), is an independent variable, and
$F$ is a vector of $n + m$ functions that involve subsets of these $n + m + 1$ variables and $n$ derivatives.
As a whole, the set of DAEs is a function
$$F : \mathbb{R}^{2n+m+1} \to \mathbb{R}^{n+m}.$$
Initial conditions must be a solution of the system of equations of the form
$$F\left(\dot{x}(t_0), x(t_0), y(t_0), t_0\right) = 0.$$
== Examples ==
The behaviour of a pendulum of length L with center in (0,0) in Cartesian coordinates (x,y) is described by the Euler–Lagrange equations
$$\begin{aligned}\dot{x} &= u, & \dot{y} &= v,\\ \dot{u} &= \lambda x, & \dot{v} &= \lambda y - g,\\ x^2 + y^2 &= L^2,\end{aligned}$$
where $\lambda$ is a Lagrange multiplier. The momentum variables $u$ and $v$ should be constrained by the law of conservation of energy and their direction should point along the circle. Neither condition is explicit in those equations. Differentiation of the last equation leads to
$$\dot{x}\,x + \dot{y}\,y = 0 \quad\Rightarrow\quad u\,x + v\,y = 0,$$
restricting the direction of motion to the tangent of the circle. The next derivative of this equation implies
$$\begin{aligned}\dot{u}\,x + \dot{v}\,y + u\,\dot{x} + v\,\dot{y} &= 0\\ \Rightarrow\quad \lambda(x^2 + y^2) - gy + u^2 + v^2 &= 0\\ \Rightarrow\quad L^2\lambda - gy + u^2 + v^2 &= 0,\end{aligned}$$
and the derivative of that last identity simplifies to
$$L^2\dot{\lambda} - 3gv = 0,$$
which implies the conservation of energy since after integration the constant
$$E = \tfrac{3}{2}gy - \tfrac{1}{2}L^2\lambda = \tfrac{1}{2}(u^2 + v^2) + gy$$
To obtain unique derivative values for all dependent variables the last equation was three times differentiated. This gives a differentiation index of 3, which is typical for constrained mechanical systems.
If initial values
(
x
0
,
u
0
)
{\displaystyle (x_{0},u_{0})}
and a sign for y are given, the other variables are determined via
y
=
±
L
2
−
x
2
{\displaystyle y=\pm {\sqrt {L^{2}-x^{2}}}}
, and if
y
≠
0
{\displaystyle y\neq 0}
then
v
=
−
u
x
/
y
{\displaystyle v=-ux/y}
and
λ
=
(
g
y
−
u
2
−
v
2
)
/
L
2
{\displaystyle \lambda =(gy-u^{2}-v^{2})/L^{2}}
. To proceed to the next point it is sufficient to get the derivatives of x and u, that is, the system to solve is now
$$\begin{aligned}\dot{x} &= u,\\ \dot{u} &= \lambda x,\\ 0 &= x^2 + y^2 - L^2,\\ 0 &= ux + vy,\\ 0 &= u^2 - gy + v^2 + L^2\lambda.\end{aligned}$$
This is a semi-explicit DAE of index 1. Another set of similar equations may be obtained starting from $(y_0, v_0)$ and a sign for $x$.
DAEs also naturally occur in the modelling of circuits with non-linear devices. Modified nodal analysis employing DAEs is used for example in the ubiquitous SPICE family of numeric circuit simulators. Similarly, Fraunhofer's Analog Insydes Mathematica package can be used to derive DAEs from a netlist and then simplify or even solve the equations symbolically in some cases. It is worth noting that the index of a DAE (of a circuit) can be made arbitrarily high by cascading/coupling operational amplifiers with positive feedback via capacitors.
== Semi-explicit DAE of index 1 ==
DAEs of the form
$$\begin{aligned}\dot{x} &= f(x, y, t),\\ 0 &= g(x, y, t)\end{aligned}$$
are called semi-explicit. The index-1 property requires that g is solvable for y. In other words, the differentiation index is 1 if by differentiation of the algebraic equations for t an implicit ODE system results,
$$\begin{aligned}\dot{x} &= f(x, y, t),\\ 0 &= \partial_x g(x, y, t)\,\dot{x} + \partial_y g(x, y, t)\,\dot{y} + \partial_t g(x, y, t),\end{aligned}$$
which is solvable for $(\dot{x}, \dot{y})$ if
$$\det\left(\partial_y g(x, y, t)\right) \neq 0.$$
Every sufficiently smooth DAE is almost everywhere reducible to this semi-explicit index-1 form.
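As a sketch of how such a system can be integrated directly, the following code advances the semi-explicit index-1 pendulum system above with backward Euler, solving the coupled nonlinear equations at each step with a generic root finder. The step size, initial data, and use of `scipy.optimize.fsolve` are assumptions made for the illustration, not a prescribed method.

```python
import numpy as np
from scipy.optimize import fsolve

# Sketch: backward-Euler integration of the semi-explicit index-1 pendulum
# system. Differential variables: x, u; algebraic variables: y, v, lam.
grav, Len = 9.81, 1.0

def residual(z_new, z_old, h):
    x, u, y, v, lam = z_new
    x_old, u_old = z_old[0], z_old[1]
    return [x - x_old - h * u,              # backward Euler for x' = u
            u - u_old - h * lam * x,        # backward Euler for u' = lam*x
            x**2 + y**2 - Len**2,           # the three algebraic equations
            u * x + v * y,
            u**2 - grav * y + v**2 + Len**2 * lam]

x0 = 0.1
y0 = -np.sqrt(Len**2 - x0**2)               # start at rest, below the pivot
z = np.array([x0, 0.0, y0, 0.0, grav * y0 / Len**2])
h = 1e-3
for _ in range(1000):                        # integrate up to t = 1
    z = fsolve(residual, z, args=(z, h))

print(z[0], z[1])                            # x and u at t = 1
print(z[0]**2 + z[2]**2 - Len**2)            # constraint residual stays ~0
```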
== Numerical treatment of DAE and applications ==
Two major problems in solving DAEs are index reduction and consistent initial conditions. Most numerical solvers require ordinary differential equations and algebraic equations of the form
$$\begin{aligned}\frac{dx}{dt} &= f(x, y, t),\\ 0 &= g(x, y, t).\end{aligned}$$
It is a non-trivial task to convert arbitrary DAE systems into ODEs for solution by pure ODE solvers. Techniques which can be employed include Pantelides algorithm and dummy derivative index reduction method. Alternatively, a direct solution of high-index DAEs with inconsistent initial conditions is also possible. This solution approach involves a transformation of the derivative elements through orthogonal collocation on finite elements or direct transcription into algebraic expressions. This allows DAEs of any index to be solved without rearrangement in the open equation form
$$\begin{aligned}0 &= f\!\left(\frac{dx}{dt}, x, y, t\right),\\ 0 &= g(x, y, t).\end{aligned}$$
Once the model has been converted to algebraic equation form, it is solvable by large-scale nonlinear programming solvers (see APMonitor).
=== Tractability ===
Several measures of the tractability of DAEs in terms of numerical methods have been developed, such as the differentiation index, perturbation index, tractability index, geometric index, and Kronecker index.
== Structural analysis for DAEs ==
We use the $\Sigma$-method to analyze a DAE. We construct for the DAE a signature matrix $\Sigma = (\sigma_{i,j})$, where each row corresponds to an equation $f_i$ and each column corresponds to a variable $x_j$. The entry in position $(i, j)$ is $\sigma_{i,j}$, which denotes the highest order of derivative to which $x_j$ occurs in $f_i$, or $-\infty$ if $x_j$ does not occur in $f_i$.
For the pendulum DAE above, the variables are $(x_1, x_2, x_3, x_4, x_5) = (x, y, u, v, \lambda)$. The corresponding signature matrix is
$$\Sigma = \begin{bmatrix} 1 & - & 0^\bullet & - & -\\ - & 1^\bullet & - & 0 & -\\ 0 & - & 1 & - & 0^\bullet\\ - & 0 & - & 1^\bullet & 0\\ 0^\bullet & 0 & - & - & - \end{bmatrix}$$
== See also ==
Algebraic differential equation, a different concept despite the similar name
Delay differential equation
Partial differential algebraic equation
Modelica Language
== References ==
== Further reading ==
== External links ==
http://www.scholarpedia.org/article/Differential-algebraic_equations
The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves (e.g. water waves, sound waves and seismic waves) or electromagnetic waves (including light waves). It arises in fields like acoustics, electromagnetism, and fluid dynamics.
This article focuses on waves in classical physics. Quantum physics uses an operator-based wave equation often as a relativistic wave equation.
== Introduction ==
The wave equation is a hyperbolic partial differential equation describing waves, including traveling and standing waves; the latter can be considered as linear superpositions of waves traveling in opposite directions. This article mostly focuses on the scalar wave equation describing waves in scalars by scalar functions u = u (x, y, z, t) of a time variable t (a variable representing time) and one or more spatial variables x, y, z (variables representing a position in a space under discussion). At the same time, there are vector wave equations describing waves in vectors such as waves for an electrical field, magnetic field, and magnetic vector potential and elastic waves. By comparison with vector wave equations, the scalar wave equation can be seen as a special case of the vector wave equations; in the Cartesian coordinate system, the scalar wave equation is the equation to be satisfied by each component (for each coordinate axis, such as the x component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, in the Cartesian coordinate system, for
$(E_x, E_y, E_z)$ as the representation of an electric vector field wave $\vec{E}$ in the absence of wave sources, each coordinate axis component $E_i$ ($i = x, y, z$) must satisfy the scalar wave equation. Other scalar wave equation solutions $u$ are for physical quantities in scalars such as pressure in a liquid or gas, or the displacement along some specific direction of particles of a vibrating solid away from their resting (equilibrium) positions.
The scalar wave equation is
$$\frac{\partial^2 u}{\partial t^2} = c^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right),$$
where
c is a fixed non-negative real coefficient representing the propagation speed of the wave
u is a scalar field representing the displacement or, more generally, the conserved quantity (e.g. pressure or density)
x, y and z are the three spatial coordinates, and t is the time coordinate.
The equation states that, at any given point, the second derivative of $u$ with respect to time is proportional to the sum of the second derivatives of $u$ with respect to space, with the constant of proportionality being the square of the speed of the wave.
Using notations from vector calculus, the wave equation can be written compactly as
$$u_{tt} = c^2 \Delta u,$$
or
$$\Box u = 0,$$
where the double subscript denotes the second-order partial derivative with respect to time, $\Delta$ is the Laplace operator and $\Box$ the d'Alembert operator, defined as:
$$u_{tt} = \frac{\partial^2 u}{\partial t^2}, \qquad \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, \qquad \Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \Delta.$$
A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed c. This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics.
The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments.
== Wave equation in one space dimension ==
The wave equation in one spatial dimension can be written as follows:
$$\frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}.$$
This equation is typically described as having only one spatial dimension x, because the only other independent variable is the time t.
=== Derivation ===
The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension.
Another physical setting for derivation of the wave equation in one space dimension uses Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress).
==== Hooke's law ====
The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass m interconnected with massless springs of length h. The springs have a spring constant of k:
Here the dependent variable u(x) measures the distance from the equilibrium of the mass situated at x, so that u(x) essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The resulting force exerted on the mass m at the location x + h is:
$$F_{\text{Hooke}} = F_{x+2h} - F_x = k\,[u(x + 2h, t) - u(x + h, t)] - k\,[u(x + h, t) - u(x, t)].$$
By equating the latter equation with
$$F_{\text{Newton}} = m\,a(t) = m\,\frac{\partial^2}{\partial t^2} u(x + h, t),$$
the equation of motion for the weight at the location x + h is obtained:
$$\frac{\partial^2}{\partial t^2} u(x + h, t) = \frac{k}{m}\,[u(x + 2h, t) - u(x + h, t) - u(x + h, t) + u(x, t)].$$
If the array of weights consists of N weights spaced evenly over the length L = Nh of total mass M = Nm, and the total spring constant of the array K = k/N, we can write the above equation as
$$\frac{\partial^2}{\partial t^2} u(x + h, t) = \frac{KL^2}{M}\,\frac{u(x + 2h, t) - 2u(x + h, t) + u(x, t)}{h^2}.$$
Taking the limit N → ∞, h → 0 and assuming smoothness, one gets
$$\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{KL^2}{M}\,\frac{\partial^2 u(x, t)}{\partial x^2},$$
which follows from the definition of the second derivative. $KL^2/M$ is the square of the propagation speed in this particular case.
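The discrete mass-spring picture suggests a direct numerical scheme: replace both second derivatives by central differences. The sketch below does this with the standard leapfrog update; the grid, wave speed, initial pulse, and fixed ends are all arbitrary test choices.

```python
import numpy as np

# Sketch: the discrete mass-spring chain above, advanced with the standard
# central-difference (leapfrog) scheme. Ends are held fixed (u = 0).
c, Lx, nx = 1.0, 1.0, 201
dx = Lx / (nx - 1)
dt = 0.5 * dx / c                      # Courant number 0.5, stable
x = np.linspace(0.0, Lx, nx)

u_prev = np.exp(-200.0 * (x - 0.5)**2) # initial displacement
u = u_prev.copy()                      # zero initial velocity
r2 = (c * dt / dx)**2

for _ in range(120):                   # advance to t = 0.3
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(u.max())  # ~0.5: the pulse has split into two half-amplitude waves
```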
==== Stress pulse in a bar ====
In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness K given by
$$K = \frac{EA}{L},$$
where A is the cross-sectional area, and E is the Young's modulus of the material. The wave equation becomes
$$\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{EAL}{M}\,\frac{\partial^2 u(x, t)}{\partial x^2}.$$
AL is equal to the volume of the bar, and therefore
$$\frac{AL}{M} = \frac{1}{\rho},$$
where ρ is the density of the material. The wave equation reduces to
$$\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{E}{\rho}\,\frac{\partial^2 u(x, t)}{\partial x^2}.$$
The speed of a stress wave in a bar is therefore $\sqrt{E/\rho}$.
=== General solution ===
==== Algebraic approach ====
For the one-dimensional wave equation a relatively simple general solution may be found. Defining new variables
$$\xi = x - ct, \qquad \eta = x + ct$$
changes the wave equation into
$$\frac{\partial^2 u}{\partial \xi\,\partial \eta}(x, t) = 0,$$
which leads to the general solution
$$u(x, t) = F(\xi) + G(\eta) = F(x - ct) + G(x + ct).$$
In other words, the solution is the sum of a right-traveling function F and a left-traveling function G. "Traveling" means that the shape of these individual arbitrary functions with respect to x stays constant, however, the functions are translated left and right with time at the speed c. This was derived by Jean le Rond d'Alembert.
Another way to arrive at this result is to factor the wave equation using two first-order differential operators:
$$\left[\frac{\partial}{\partial t} - c\,\frac{\partial}{\partial x}\right]\left[\frac{\partial}{\partial t} + c\,\frac{\partial}{\partial x}\right] u = 0.$$
Then, for our original equation, we can define
$$v \equiv \frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x},$$
and find that we must have
$$\frac{\partial v}{\partial t} - c\,\frac{\partial v}{\partial x} = 0.$$
This advection equation can be solved by interpreting it as telling us that the directional derivative of $v$ in the $(1, -c)$ direction is 0. This means that the value of $v$ is constant on characteristic lines of the form $x + ct = x_0$, and thus that $v$ must depend only on $x + ct$, that is, have the form $H(x + ct)$. Then, to solve the first (inhomogeneous) equation relating $v$ to $u$, we can note that its homogeneous solution must be a function of the form $F(x - ct)$, by logic similar to the above. Guessing a particular solution of the form $G(x + ct)$, we find that
$$\left[\frac{\partial}{\partial t} + c\,\frac{\partial}{\partial x}\right] G(x + ct) = H(x + ct).$$
Expanding out the left side, rearranging terms, then using the change of variables s = x + ct simplifies the equation to
{\displaystyle G'(s)={\frac {H(s)}{2c}}.}
This means we can find a particular solution G of the desired form by integration. Thus, we have again shown that u obeys u(x, t) = F(x - ct) + G(x + ct).
For an initial-value problem, the arbitrary functions F and G can be determined to satisfy initial conditions:
{\displaystyle u(x,0)=f(x),}
{\displaystyle u_{t}(x,0)=g(x).}
The result is d'Alembert's formula:
{\displaystyle u(x,t)={\frac {f(x-ct)+f(x+ct)}{2}}+{\frac {1}{2c}}\int _{x-ct}^{x+ct}g(s)\,ds.}
In the classical sense, if f(x) ∈ Ck, and g(x) ∈ Ck−1, then u(t, x) ∈ Ck. However, the waveforms F and G may also be generalized functions, such as the delta-function. In that case, the solution may be interpreted as an impulse that travels to the right or the left.
The basic wave equation is a linear differential equation, and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components.
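Since d'Alembert's formula is fully explicit, it can be evaluated pointwise by numerical quadrature. A minimal sketch follows; the initial data f and g are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Sketch: direct evaluation of d'Alembert's formula
#   u(x,t) = (f(x-ct) + f(x+ct))/2 + (1/(2c)) * integral of g over [x-ct, x+ct]
c = 1.0
f = lambda x: np.exp(-x**2)      # initial displacement (assumed example)
g = lambda x: np.zeros_like(x)   # initial velocity (assumed example)

def u(x, t, n=2001):
    s = np.linspace(x - c*t, x + c*t, n)          # quadrature nodes
    return 0.5*(f(x - c*t) + f(x + c*t)) + np.trapz(g(s), s)/(2*c)

print(u(0.5, 1.0))   # for g = 0 this is the average of two translated pulses
```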
==== Plane-wave eigenmodes ====
Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency ω, so that the temporal part of the wave function takes the form e−iωt = cos(ωt) − i sin(ωt), and the amplitude is a function f(x) of the spatial variable x, giving a separation of variables for the wave function:
{\displaystyle u_{\omega }(x,t)=e^{-i\omega t}f(x).}
This produces an ordinary differential equation for the spatial part f(x):
{\displaystyle {\frac {\partial ^{2}u_{\omega }}{\partial t^{2}}}={\frac {\partial ^{2}}{\partial t^{2}}}\left(e^{-i\omega t}f(x)\right)=-\omega ^{2}e^{-i\omega t}f(x)=c^{2}{\frac {\partial ^{2}}{\partial x^{2}}}\left(e^{-i\omega t}f(x)\right).}
Therefore,
{\displaystyle {\frac {d^{2}}{dx^{2}}}f(x)=-\left({\frac {\omega }{c}}\right)^{2}f(x),}
which is precisely an eigenvalue equation for f(x), hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions
{\displaystyle f(x)=Ae^{\pm ikx},}
with wave number k = ω/c.
The total wave function for this eigenmode is then the linear combination
{\displaystyle u_{\omega }(x,t)=e^{-i\omega t}\left(Ae^{-ikx}+Be^{ikx}\right)=Ae^{-i(kx+\omega t)}+Be^{i(kx-\omega t)},}
where complex numbers A, B depend in general on any initial and boundary conditions of the problem.
Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor
{\displaystyle e^{-i\omega t},}
so that a full solution can be decomposed into an eigenmode expansion:
{\displaystyle u(x,t)=\int _{-\infty }^{\infty }s(\omega )u_{\omega }(x,t)\,d\omega ,}
or in terms of the plane waves,
{\displaystyle {\begin{aligned}u(x,t)&=\int _{-\infty }^{\infty }s_{+}(\omega )e^{-i(kx+\omega t)}\,d\omega +\int _{-\infty }^{\infty }s_{-}(\omega )e^{i(kx-\omega t)}\,d\omega \\&=\int _{-\infty }^{\infty }s_{+}(\omega )e^{-ik(x+ct)}\,d\omega +\int _{-\infty }^{\infty }s_{-}(\omega )e^{ik(x-ct)}\,d\omega \\&=F(x-ct)+G(x+ct),\end{aligned}}}
which is exactly the same form as in the algebraic approach. The functions s±(ω) are known as the Fourier components and are determined by initial and boundary conditions. This is a so-called frequency-domain method, alternative to direct time-domain propagations, such as the FDTD method, of the wave packet u(x, t), which is complete for representing waves in the absence of time dilations. Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged by chirp wave solutions allowing for time variation of ω. The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly and differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source.
== Vectorial wave equation in three space dimensions ==
The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element. If the medium has a modulus of elasticity {\displaystyle E} that is homogeneous (i.e. independent of {\displaystyle \mathbf {x} }) within the volume element, then its stress tensor is given by {\displaystyle \mathbf {T} =E\nabla \mathbf {u} } for a vectorial elastic deflection {\displaystyle \mathbf {u} (\mathbf {x} ,t)}. The local equilibrium of:
the tension force {\displaystyle \operatorname {div} \mathbf {T} =\nabla \cdot (E\nabla \mathbf {u} )=E\Delta \mathbf {u} } due to the deflection {\displaystyle \mathbf {u} }, and
the inertial force {\displaystyle \rho \,\partial ^{2}\mathbf {u} /\partial t^{2}} caused by the local acceleration {\displaystyle \partial ^{2}\mathbf {u} /\partial t^{2}}
can be written as
{\displaystyle \rho {\frac {\partial ^{2}\mathbf {u} }{\partial t^{2}}}-E\Delta \mathbf {u} =\mathbf {0} .}
Merging the density {\displaystyle \rho } and the elastic modulus {\displaystyle E} gives the sound velocity {\displaystyle c={\sqrt {E/\rho }}} (material law). After insertion, the well-known governing wave equation for a homogeneous medium follows:
{\displaystyle {\frac {\partial ^{2}\mathbf {u} }{\partial t^{2}}}-c^{2}\Delta \mathbf {u} ={\boldsymbol {0}}.}
(Note: instead of the vectorial {\displaystyle \mathbf {u} (\mathbf {x} ,t)}, a scalar {\displaystyle u(x,t)} can be used, i.e. for waves travelling only along the {\displaystyle x} axis, and the scalar wave equation follows as {\displaystyle {\frac {\partial ^{2}u}{\partial t^{2}}}-c^{2}{\frac {\partial ^{2}u}{\partial x^{2}}}=0}.)
The above vectorial partial differential equation of second order delivers two mutually independent solutions. From the quadratic velocity term {\displaystyle c^{2}=(+c)^{2}=(-c)^{2}} it can be seen that two waves travelling in opposite directions, {\displaystyle +c} and {\displaystyle -c}, are possible; hence the designation "two-way wave equation".
It can be shown for plane longitudinal wave propagation that the synthesis of two one-way wave equations leads to a general two-way wave equation. For
{\displaystyle \nabla \mathbf {c} =\mathbf {0} ,} the special two-way wave equation with the d'Alembert operator results:
{\displaystyle \left({\frac {\partial }{\partial t}}-\mathbf {c} \cdot \nabla \right)\left({\frac {\partial }{\partial t}}+\mathbf {c} \cdot \nabla \right)\mathbf {u} =\left({\frac {\partial ^{2}}{\partial t^{2}}}-(\mathbf {c} \cdot \nabla )(\mathbf {c} \cdot \nabla )\right)\mathbf {u} =\left({\frac {\partial ^{2}}{\partial t^{2}}}-(\mathbf {c} \cdot \nabla )^{2}\right)\mathbf {u} =\mathbf {0} .}
For {\displaystyle \nabla \mathbf {c} =\mathbf {0} ,} this simplifies to
{\displaystyle \left({\frac {\partial ^{2}}{\partial t^{2}}}-c^{2}\Delta \right)\mathbf {u} =\mathbf {0} .}
Therefore, the vectorial 1st-order one-way wave equation with waves travelling in a pre-defined propagation direction
{\displaystyle \mathbf {c} } results as
{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}-\mathbf {c} \cdot \nabla \mathbf {u} =\mathbf {0} .}
== Scalar wave equation in three space dimensions ==
A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then be also used to obtain the same solution in two space dimensions.
=== Spherical waves ===
To obtain a solution with constant frequencies, apply the Fourier transform
{\displaystyle \Psi (\mathbf {r} ,t)=\int _{-\infty }^{\infty }\Psi (\mathbf {r} ,\omega )e^{-i\omega t}\,d\omega ,}
which transforms the wave equation into an elliptic partial differential equation of the form:
{\displaystyle \left(\nabla ^{2}+{\frac {\omega ^{2}}{c^{2}}}\right)\Psi (\mathbf {r} ,\omega )=0.}
This is the Helmholtz equation and can be solved using separation of variables. In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as:
{\displaystyle \Psi (\mathbf {r} ,\omega )=\sum _{l,m}f_{lm}(r)Y_{lm}(\theta ,\phi ).}
The angular part of the solution takes the form of spherical harmonics, and the radial function satisfies:
{\displaystyle \left[{\frac {d^{2}}{dr^{2}}}+{\frac {2}{r}}{\frac {d}{dr}}+k^{2}-{\frac {l(l+1)}{r^{2}}}\right]f_{l}(r)=0.}
independent of {\displaystyle m}, with {\displaystyle k^{2}=\omega ^{2}/c^{2}}. Substituting
{\displaystyle f_{l}(r)={\frac {1}{\sqrt {r}}}u_{l}(r),}
transforms the equation into
{\displaystyle \left[{\frac {d^{2}}{dr^{2}}}+{\frac {1}{r}}{\frac {d}{dr}}+k^{2}-{\frac {(l+{\frac {1}{2}})^{2}}{r^{2}}}\right]u_{l}(r)=0,}
which is the Bessel equation.
==== Example ====
Consider the case l = 0. Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., Ψ(r, t) → u(r, t). In this case, the wave equation reduces to
{\displaystyle \left(\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)\Psi (\mathbf {r} ,t)=0,}
or
{\displaystyle \left({\frac {\partial ^{2}}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial }{\partial r}}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)u(r,t)=0.}
This equation can be rewritten as
{\displaystyle {\frac {\partial ^{2}(ru)}{\partial t^{2}}}-c^{2}{\frac {\partial ^{2}(ru)}{\partial r^{2}}}=0,}
where the quantity ru satisfies the one-dimensional wave equation. Therefore, there are solutions in the form
{\displaystyle u(r,t)={\frac {1}{r}}F(r-ct)+{\frac {1}{r}}G(r+ct),}
where F and G are general solutions to the one-dimensional wave equation and can be interpreted as an outgoing and an incoming spherical wave, respectively. The outgoing wave can be generated by a point source, and it makes possible sharp signals whose form is altered only by a decrease in amplitude as r increases. Such waves exist only in cases of space with odd dimensions.
For physical examples of solutions to the 3D wave equation that possess angular dependence, see dipole radiation.
==== Monochromatic spherical wave ====
Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions. Following the derivation in the previous section on plane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined constant angular frequency ω, then the transformed function ru(r, t) has simply plane-wave solutions:
{\displaystyle ru(r,t)=Ae^{i(\omega t\pm kr)},}
or
{\displaystyle u(r,t)={\frac {A}{r}}e^{i(\omega t\pm kr)}.}
From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude
{\displaystyle I=|u(r,t)|^{2}={\frac {|A|^{2}}{r^{2}}},}
drops at a rate proportional to 1/r², an example of the inverse-square law.
=== Solution of a general initial-value problem ===
The wave equation is linear in u and is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Let φ(ξ, η, ζ) be an arbitrary function of three independent variables, and let the spherical wave form F be a delta function. Let a family of spherical waves have center at (ξ, η, ζ), and let r be the radial distance from that point. Thus
{\displaystyle r^{2}=(x-\xi )^{2}+(y-\eta )^{2}+(z-\zeta )^{2}.}
If u is a superposition of such waves with weighting function φ, then
{\displaystyle u(t,x,y,z)={\frac {1}{4\pi c}}\iiint \varphi (\xi ,\eta ,\zeta ){\frac {\delta (r-ct)}{r}}\,d\xi \,d\eta \,d\zeta ;}
the denominator 4πc is a convenience.
From the definition of the delta function, u may also be written as
{\displaystyle u(t,x,y,z)={\frac {t}{4\pi }}\iint _{S}\varphi (x+ct\alpha ,y+ct\beta ,z+ct\gamma )\,d\omega ,}
where α, β, and γ are coordinates on the unit sphere S, and ω is the area element on S. This result has the interpretation that u(t, x) is t times the mean value of φ on a sphere of radius ct centered at x:
{\displaystyle u(t,x,y,z)=tM_{ct}[\varphi ].}
It follows that
{\displaystyle u(0,x,y,z)=0,\quad u_{t}(0,x,y,z)=\varphi (x,y,z).}
The mean value is an even function of t, and hence if
{\displaystyle v(t,x,y,z)={\frac {\partial }{\partial t}}{\big (}tM_{ct}[\varphi ]{\big )},}
then
{\displaystyle v(0,x,y,z)=\varphi (x,y,z),\quad v_{t}(0,x,y,z)=0.}
These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point P, given (t, x, y, z) depends only on the data on the sphere of radius ct that is intersected by the light cone drawn backwards from P. It does not depend upon data on the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It is only true for odd numbers of space dimension, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure.
== Scalar wave equation in two space dimensions ==
In two space dimensions, the wave equation is
{\displaystyle u_{tt}=c^{2}\left(u_{xx}+u_{yy}\right).}
We can use the three-dimensional theory to solve this problem if we regard u as a function in three dimensions that is independent of the third dimension. If
{\displaystyle u(0,x,y)=0,\quad u_{t}(0,x,y)=\phi (x,y),}
then the three-dimensional solution formula becomes
{\displaystyle u(t,x,y)=tM_{ct}[\phi ]={\frac {t}{4\pi }}\iint _{S}\phi (x+ct\alpha ,\,y+ct\beta )\,d\omega ,}
where α and β are the first two coordinates on the unit sphere, and dω is the area element on the sphere. This integral may be rewritten as a double integral over the disc D with center (x, y) and radius ct:
{\displaystyle u(t,x,y)={\frac {1}{2\pi c}}\iint _{D}{\frac {\phi (x+\xi ,y+\eta )}{\sqrt {(ct)^{2}-\xi ^{2}-\eta ^{2}}}}d\xi \,d\eta .}
It is apparent that the solution at (t, x, y) depends not only on the data on the light cone where
{\displaystyle (x-\xi )^{2}+(y-\eta )^{2}=c^{2}t^{2},}
but also on data that are interior to that cone.
== Scalar wave equation in general dimension and Kirchhoff's formulae ==
We want to find solutions to utt − Δu = 0 for u : Rn × (0, ∞) → R with u(x, 0) = g(x) and ut(x, 0) = h(x).
=== Odd dimensions ===
Assume n ≥ 3 is an odd integer, and g ∈ Cm+1(Rn), h ∈ Cm(Rn) for m = (n + 1)/2. Let γn = 1 × 3 × 5 × ⋯ × (n − 2) and let
{\displaystyle u(x,t)={\frac {1}{\gamma _{n}}}\left[\partial _{t}\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-3}{2}}\left(t^{n-2}{\frac {1}{|\partial B_{t}(x)|}}\int _{\partial B_{t}(x)}g\,dS\right)+\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-3}{2}}\left(t^{n-2}{\frac {1}{|\partial B_{t}(x)|}}\int _{\partial B_{t}(x)}h\,dS\right)\right]}
Then
{\displaystyle u\in C^{2}{\big (}\mathbf {R} ^{n}\times [0,\infty ){\big )}},
{\displaystyle u_{tt}-\Delta u=0} in {\displaystyle \mathbf {R} ^{n}\times (0,\infty )},
{\displaystyle \lim _{(x,t)\to (x^{0},0)}u(x,t)=g(x^{0})},
{\displaystyle \lim _{(x,t)\to (x^{0},0)}u_{t}(x,t)=h(x^{0})}.
=== Even dimensions ===
Assume n ≥ 2 is an even integer and g ∈ Cm+1(Rn), h ∈ Cm(Rn), for m = (n + 2)/2. Let γn = 2 × 4 × ⋯ × n and let
{\displaystyle u(x,t)={\frac {1}{\gamma _{n}}}\left[\partial _{t}\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-2}{2}}\left(t^{n}{\frac {1}{|B_{t}(x)|}}\int _{B_{t}(x)}{\frac {g}{(t^{2}-|y-x|^{2})^{\frac {1}{2}}}}dy\right)+\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-2}{2}}\left(t^{n}{\frac {1}{|B_{t}(x)|}}\int _{B_{t}(x)}{\frac {h}{(t^{2}-|y-x|^{2})^{\frac {1}{2}}}}dy\right)\right]}
then
u ∈ C2(Rn × [0, ∞))
utt − Δu = 0 in Rn × (0, ∞)
{\displaystyle \lim _{(x,t)\to (x^{0},0)}u(x,t)=g(x^{0})}
{\displaystyle \lim _{(x,t)\to (x^{0},0)}u_{t}(x,t)=h(x^{0})}
== Green's function ==
Consider the inhomogeneous wave equation in
{\displaystyle 1+D} dimensions
{\displaystyle (\partial _{tt}-c^{2}\nabla ^{2})u=s(t,x)}
By rescaling time, we can set wave speed
{\displaystyle c=1}.
Since the wave equation
{\displaystyle (\partial _{tt}-\nabla ^{2})u=s(t,x)}
has order 2 in time, there are two impulse responses: an acceleration impulse and a velocity impulse. The effect of inflicting an acceleration impulse is to suddenly change the wave velocity
{\displaystyle \partial _{t}u}. The effect of inflicting a velocity impulse is to suddenly change the wave displacement {\displaystyle u}.
For an acceleration impulse, {\displaystyle s(t,x)=\delta ^{D+1}(t,x)}, where {\displaystyle \delta } is the Dirac delta function. The solution to this case is called the Green's function {\displaystyle G} for the wave equation.
For a velocity impulse, {\displaystyle s(t,x)=\partial _{t}\delta ^{D+1}(t,x)}, so if we solve for the Green's function {\displaystyle G}, the solution for this case is just {\displaystyle \partial _{t}G}.
=== Duhamel's principle ===
The main use of Green's functions is to solve initial value problems by Duhamel's principle, both for the homogeneous and the inhomogeneous case.
Given the Green's function {\displaystyle G} and initial conditions {\displaystyle u(0,x),\ \partial _{t}u(0,x)}, the solution to the homogeneous wave equation is
{\displaystyle u=(\partial _{t}G)\ast u+G\ast \partial _{t}u}
where the asterisk is convolution in space. More explicitly,
{\displaystyle u(t,x)=\int (\partial _{t}G)(t,x-x')u(0,x')dx'+\int G(t,x-x')(\partial _{t}u)(0,x')dx'.}
For the inhomogeneous case, the solution has one additional term by convolution over spacetime:
{\displaystyle \iint _{t'<t}G(t-t',x-x')s(t',x')dt'dx'.}
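As a concrete check of Duhamel's principle, the sketch below applies it in one space dimension with c = 1, where the forward Green's function is G(t, x) = θ(t − |x|)/2 (derived later in this section). The initial velocity profile is an assumed example.

```python
import numpy as np

# Sketch: velocity part of Duhamel's principle in D = 1 with c = 1.
# (G * v0)(t, x) = (1/2) * integral of v0(x') over |x - x'| < t, which
# reproduces the velocity term of d'Alembert's formula.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
v0 = np.exp(-x**2)                     # initial velocity (assumed example)
t = 2.0

G = 0.5 * (t - np.abs(x) > 0)          # G(t, x) sampled on the grid
u = np.convolve(G, v0, mode="same") * dx   # spatial convolution G * v0

# Cross-check against the exact integral at x = 0:
mask = np.abs(x) < t
exact = 0.5 * np.trapz(v0[mask], x[mask])
print(u[len(x)//2], exact)             # should agree closely
```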
=== Solution by Fourier transform ===
By a Fourier transform,
{\displaystyle {\hat {G}}(\omega )={\frac {1}{-\omega _{0}^{2}+\omega _{1}^{2}+\cdots +\omega _{D}^{2}}},\quad G(t,x)={\frac {1}{(2\pi )^{D+1}}}\int {\hat {G}}(\omega )e^{+i\omega _{0}t+i{\vec {\omega }}\cdot {\vec {x}}}d\omega _{0}d{\vec {\omega }}.}
The {\displaystyle \omega _{0}} term can be integrated by the residue theorem. This requires perturbing the integral slightly, either by {\displaystyle +i\epsilon } or by {\displaystyle -i\epsilon }, because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution. The forward solution gives
{\displaystyle G(t,x)={\frac {1}{(2\pi )^{D}}}\int {\frac {\sin(\|{\vec {\omega }}\|t)}{\|{\vec {\omega }}\|}}e^{i{\vec {\omega }}\cdot {\vec {x}}}d{\vec {\omega }},\quad \partial _{t}G(t,x)={\frac {1}{(2\pi )^{D}}}\int \cos(\|{\vec {\omega }}\|t)e^{i{\vec {\omega }}\cdot {\vec {x}}}d{\vec {\omega }}.}
The integral can be solved by analytically continuing the Poisson kernel, giving
{\displaystyle G(t,x)=\lim _{\epsilon \rightarrow 0^{+}}{\frac {C_{D}}{D-1}}\operatorname {Im} \left[\|x\|^{2}-(t-i\epsilon )^{2}\right]^{-(D-1)/2}}
where {\displaystyle C_{D}=\pi ^{-(D+1)/2}\Gamma ((D+1)/2)} is half the surface area of a {\displaystyle (D+1)}-dimensional hypersphere.
=== Solutions in particular dimensions ===
We can relate the Green's function in {\displaystyle D} dimensions to the Green's function in {\displaystyle D+n} dimensions.
==== Lowering dimensions ====
Given a function {\displaystyle s(t,x)} and a solution {\displaystyle u(t,x)} of a differential equation in {\displaystyle (1+D)} dimensions, we can trivially extend it to {\displaystyle (1+D+n)} dimensions by setting the additional {\displaystyle n} dimensions to be constant:
{\displaystyle s(t,x_{1:D},x_{D+1:D+n})=s(t,x_{1:D}),\quad u(t,x_{1:D},x_{D+1:D+n})=u(t,x_{1:D}).}
Since the Green's function is constructed from {\displaystyle s} and {\displaystyle u}, the Green's function in {\displaystyle (1+D+n)} dimensions integrates to the Green's function in {\displaystyle (1+D)} dimensions:
{\displaystyle G_{D}(t,x_{1:D})=\int _{\mathbb {R} ^{n}}G_{D+n}(t,x_{1:D},x_{D+1:D+n})d^{n}x_{D+1:D+n}.}
==== Raising dimensions ====
The Green's function in {\displaystyle D} dimensions can be related to the Green's function in {\displaystyle D+2} dimensions. By spherical symmetry,
{\displaystyle G_{D}(t,r)=\int _{\mathbb {R} ^{2}}G_{D+2}(t,{\sqrt {r^{2}+y^{2}+z^{2}}})dydz.}
Integrating in polar coordinates,
{\displaystyle G_{D}(t,r)=2\pi \int _{0}^{\infty }G_{D+2}(t,{\sqrt {r^{2}+q^{2}}})qdq=2\pi \int _{r}^{\infty }G_{D+2}(t,q')q'dq',}
where in the last equality we made the change of variables {\displaystyle q'={\sqrt {r^{2}+q^{2}}}}. Thus, we obtain the recurrence relation
{\displaystyle G_{D+2}(t,r)=-{\frac {1}{2\pi r}}\partial _{r}G_{D}(t,r).}
=== Solutions in D = 1, 2, 3 ===
When {\displaystyle D=1}, the integrand in the Fourier transform is the sinc function:
{\displaystyle {\begin{aligned}G_{1}(t,x)&={\frac {1}{2\pi }}\int _{\mathbb {R} }{\frac {\sin(|\omega |t)}{|\omega |}}e^{i\omega x}d\omega \\&={\frac {1}{2\pi }}\int \operatorname {sinc} (\omega )e^{i\omega {\frac {x}{t}}}d\omega \\&={\frac {\operatorname {sgn}(t-x)+\operatorname {sgn}(t+x)}{4}}\\&={\begin{cases}{\frac {1}{2}}\theta (t-|x|)\quad t>0\\-{\frac {1}{2}}\theta (-t-|x|)\quad t<0\end{cases}}\end{aligned}}}
where {\displaystyle \operatorname {sgn} } is the sign function and {\displaystyle \theta } is the unit step function. One solution is the forward solution, the other is the backward solution.
The dimension can be raised to give the {\displaystyle D=3} case
{\displaystyle G_{3}(t,r)={\frac {\delta (t-r)}{4\pi r}}}
and similarly for the backward solution. This can be integrated down by one dimension to give the {\displaystyle D=2} case
{\displaystyle G_{2}(t,r)=\int _{\mathbb {R} }{\frac {\delta (t-{\sqrt {r^{2}+z^{2}}})}{4\pi {\sqrt {r^{2}+z^{2}}}}}dz={\frac {\theta (t-r)}{2\pi {\sqrt {t^{2}-r^{2}}}}}}
=== Wavefronts and wakes ===
In the {\displaystyle D=1} case, the Green's function solution is the sum of two wavefronts
{\displaystyle {\frac {\operatorname {sgn}(t-x)}{4}}+{\frac {\operatorname {sgn}(t+x)}{4}}}
moving in opposite directions.
In odd dimensions, the forward solution is nonzero only at {\displaystyle t=r}. As the dimension increases, the shape of the wavefront becomes increasingly complex, involving higher derivatives of the Dirac delta function. For example,
{\displaystyle {\begin{aligned}&G_{1}={\frac {1}{2c}}\theta (\tau )\\&G_{3}={\frac {1}{4\pi c^{2}}}{\frac {\delta (\tau )}{r}}\\&G_{5}={\frac {1}{8\pi ^{2}c^{2}}}\left({\frac {\delta (\tau )}{r^{3}}}+{\frac {\delta ^{\prime }(\tau )}{cr^{2}}}\right)\\&G_{7}={\frac {1}{16\pi ^{3}c^{2}}}\left(3{\frac {\delta (\tau )}{r^{4}}}+3{\frac {\delta ^{\prime }(\tau )}{cr^{3}}}+{\frac {\delta ^{\prime \prime }(\tau )}{c^{2}r^{2}}}\right)\end{aligned}}}
where {\displaystyle \tau =t-r}, and the wave speed {\displaystyle c} is restored.
In even dimensions, the forward solution is nonzero in the entire region {\displaystyle r\leq t} behind the wavefront, called a wake. The wake has the equation:
{\displaystyle G_{D}(t,x)=(-1)^{1+D/2}{\frac {1}{(2\pi )^{D/2}}}{\frac {1}{c^{D}}}{\frac {\theta (t-r/c)}{\left(t^{2}-r^{2}/c^{2}\right)^{(D-1)/2}}}}
The wavefront itself also involves increasingly higher derivatives of the Dirac delta function.
This means that a general Huygens' principle – the wave displacement at a point {\displaystyle (t,x)} in spacetime depends only on the state at points on characteristic rays passing through {\displaystyle (t,x)} – holds only in odd dimensions. A physical interpretation is that signals transmitted by waves remain undistorted in odd dimensions, but are distorted in even dimensions (p. 698).
Hadamard's conjecture states that this generalized Huygens' principle still holds in all odd dimensions even when the coefficients in the wave equation are no longer constant. It is not strictly correct, but it is correct for certain families of coefficients (p. 765).
== Problems with boundaries ==
=== One space dimension ===
==== Reflection and transmission at the boundary of two media ====
For an incident wave traveling from one medium (where the wave speed is c1) to another medium (where the wave speed is c2), one part of the wave will transmit into the second medium, while another part reflects back in the other direction and stays in the first medium. The amplitude of the transmitted wave and the reflected wave can be calculated by using the continuity condition at the boundary.
Consider the component of the incident wave with an angular frequency of ω, which has the waveform
{\displaystyle u^{\text{inc}}(x,t)=Ae^{i(k_{1}x-\omega t)},\quad A\in \mathbb {C} .}
At t = 0, the incident wave reaches the boundary between the two media at x = 0. Therefore, the corresponding reflected wave and the transmitted wave will have the waveforms
{\displaystyle u^{\text{refl}}(x,t)=Be^{i(-k_{1}x-\omega t)},\quad u^{\text{trans}}(x,t)=Ce^{i(k_{2}x-\omega t)},\quad B,C\in \mathbb {C} .}
The continuity condition at the boundary is
{\displaystyle u^{\text{inc}}(0,t)+u^{\text{refl}}(0,t)=u^{\text{trans}}(0,t),\quad u_{x}^{\text{inc}}(0,t)+u_{x}^{\text{ref}}(0,t)=u_{x}^{\text{trans}}(0,t).}
This gives the equations
{\displaystyle A+B=C,\quad A-B={\frac {k_{2}}{k_{1}}}C={\frac {c_{1}}{c_{2}}}C,}
and we have the reflectivity and transmissivity
{\displaystyle {\frac {B}{A}}={\frac {c_{2}-c_{1}}{c_{2}+c_{1}}},\quad {\frac {C}{A}}={\frac {2c_{2}}{c_{2}+c_{1}}}.}
When c2 < c1, the reflected wave has a reflection phase change of 180°, since B/A < 0. Energy conservation can be verified by
{\displaystyle {\frac {B^{2}}{c_{1}}}+{\frac {C^{2}}{c_{2}}}={\frac {A^{2}}{c_{1}}}.}
The above discussion holds true for any component, regardless of its angular frequency ω.
The limiting case of c2 = 0 corresponds to a "fixed end" that does not move, whereas the limiting case of c2 → ∞ corresponds to a "free end".
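These relations are easy to check numerically. The following sketch computes the reflectivity and transmissivity for an assumed pair of wave speeds and verifies the energy balance stated above.

```python
# Sketch: reflection and transmission coefficients at the interface of two
# media with wave speeds c1 and c2 (the values below are illustrative).
def interface(c1, c2):
    r = (c2 - c1) / (c2 + c1)   # B/A, reflectivity
    tr = 2.0 * c2 / (c2 + c1)   # C/A, transmissivity
    return r, tr

c1, c2 = 1.0, 0.5               # assumed example: slower second medium
r, tr = interface(c1, c2)
print(r, tr)                    # r < 0: 180-degree reflection phase change

# Energy-conservation check  B^2/c1 + C^2/c2 = A^2/c1  (with A = 1):
print(abs(r**2 / c1 + tr**2 / c2 - 1.0 / c1) < 1e-12)
```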
==== The Sturm–Liouville formulation ====
A flexible string that is stretched between two points x = 0 and x = L satisfies the wave equation for t > 0 and 0 < x < L. On the boundary points, u may satisfy a variety of boundary conditions. A general form that is appropriate for applications is
{\displaystyle {\begin{aligned}-u_{x}(t,0)+au(t,0)&=0,\\u_{x}(t,L)+bu(t,L)&=0,\end{aligned}}}
where a and b are non-negative. The case where u is required to vanish at an endpoint (i.e. "fixed end") is the limit of this condition when the respective a or b approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form
{\displaystyle u(t,x)=T(t)v(x).}
A consequence is that
{\displaystyle {\frac {T''}{c^{2}T}}={\frac {v''}{v}}=-\lambda .}
The eigenvalue λ must be determined so that there is a non-trivial solution of the boundary-value problem
{\displaystyle {\begin{aligned}v''+\lambda v=0,&\\-v'(0)+av(0)&=0,\\v'(L)+bv(L)&=0.\end{aligned}}}
This is a special case of the general problem of Sturm–Liouville theory. If a and b are positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions for u and ut can be obtained from expansion of these functions in the appropriate trigonometric series.
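For finite a and b, a short calculation (substituting v(x) = A cos(kx) + B sin(kx) with λ = k² into the boundary conditions; this derivation is supplied here, not stated in the text) gives the transcendental characteristic equation used in the sketch below. The values of a, b, and L are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: eigenvalues of v'' + lambda*v = 0 with -v'(0) + a v(0) = 0 and
# v'(L) + b v(L) = 0. With lambda = k^2, the boundary conditions lead to
#   (a*b - k^2) * sin(k*L) + (a + b) * k * cos(k*L) = 0,
# whose positive roots are bracketed on a grid and refined with brentq.
a, b, L = 1.0, 2.0, np.pi   # illustrative values

def F(k):
    return (a*b - k**2) * np.sin(k*L) + (a + b) * k * np.cos(k*L)

ks = np.linspace(1e-6, 10.0, 10000)
vals = F(ks)
roots = [brentq(F, ks[i], ks[i+1])
         for i in range(len(ks) - 1) if vals[i] * vals[i+1] < 0]
print([round(k**2, 4) for k in roots[:4]])   # first eigenvalues lambda = k^2
```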
=== Several space dimensions ===
The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain D in m-dimensional x space, with boundary B. Then the wave equation is to be satisfied if x is in D, and t > 0. On the boundary of D, the solution u shall satisfy
{\displaystyle {\frac {\partial u}{\partial n}}+au=0,}
where n is the unit outward normal to B, and a is a non-negative function defined on B. The case where u vanishes on B is a limiting case for a approaching infinity. The initial conditions are
{\displaystyle u(0,x)=f(x),\quad u_{t}(0,x)=g(x),}
where f and g are defined in D. This problem may be solved by expanding f and g in the eigenfunctions of the Laplacian in D, which satisfy the boundary conditions. Thus the eigenfunction v satisfies
{\displaystyle \nabla \cdot \nabla v+\lambda v=0}
in D, and
{\displaystyle {\frac {\partial v}{\partial n}}+av=0}
on B.
In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary B. If B is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle θ, multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation.
If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions are spherical harmonics, and the radial components are Bessel functions of half-integer order.
== Inhomogeneous wave equation in one dimension ==
The inhomogeneous wave equation in one dimension is
{\displaystyle u_{tt}(x,t)-c^{2}u_{xx}(x,t)=s(x,t)}
with initial conditions
{\displaystyle u(x,0)=f(x),}
{\displaystyle u_{t}(x,0)=g(x).}
The function s(x, t) is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism.
One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point (xi, ti), the value of u(xi, ti) depends only on the values of f(xi + cti) and f(xi − cti) and the values of the function g(x) between (xi − cti) and (xi + cti). This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is c, then no part of the wave that cannot propagate to a given point by a given time can affect the amplitude at the same point and time.
In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that causally affects point (xi, ti) as RC. Suppose we integrate the inhomogeneous wave equation over this region:
{\displaystyle \iint _{R_{C}}{\big (}c^{2}u_{xx}(x,t)-u_{tt}(x,t){\big )}\,dx\,dt=\iint _{R_{C}}s(x,t)\,dx\,dt.}
We can use Green's theorem to simplify the left side, obtaining:
{\displaystyle \int _{L_{0}+L_{1}+L_{2}}{\big (}{-}c^{2}u_{x}(x,t)\,dt-u_{t}(x,t)\,dx{\big )}=\iint _{R_{C}}s(x,t)\,dx\,dt.}
The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute:
{\displaystyle \int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}-u_{t}(x,0)\,dx=-\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx.}
In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus dt = 0.
For the other two sides of the region, it is worth noting that x ± ct is a constant, namely xi ± cti, where the sign is chosen appropriately. Using this, we can get the relation dx ± cdt = 0, again choosing the right sign:
{\displaystyle {\begin{aligned}\int _{L_{1}}{\big (}{-}c^{2}u_{x}(x,t)\,dt-u_{t}(x,t)\,dx{\big )}&=\int _{L_{1}}{\big (}cu_{x}(x,t)\,dx+cu_{t}(x,t)\,dt{\big )}\\&=c\int _{L_{1}}\,du(x,t)\\&=cu(x_{i},t_{i})-cf(x_{i}+ct_{i}).\end{aligned}}}
And similarly for the final boundary segment:
{\displaystyle {\begin{aligned}\int _{L_{2}}{\big (}{-}c^{2}u_{x}(x,t)\,dt-u_{t}(x,t)\,dx{\big )}&=-\int _{L_{2}}{\big (}cu_{x}(x,t)\,dx+cu_{t}(x,t)\,dt{\big )}\\&=-c\int _{L_{2}}\,du(x,t)\\&=cu(x_{i},t_{i})-cf(x_{i}-ct_{i}).\end{aligned}}}
Adding the three results together and putting them back in the original integral gives
{\displaystyle {\begin{aligned}\iint _{R_{C}}s(x,t)\,dx\,dt&=-\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx+cu(x_{i},t_{i})-cf(x_{i}+ct_{i})+cu(x_{i},t_{i})-cf(x_{i}-ct_{i})\\&=2cu(x_{i},t_{i})-cf(x_{i}+ct_{i})-cf(x_{i}-ct_{i})-\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx.\end{aligned}}}
Solving for u(xi, ti), we arrive at
{\displaystyle u(x_{i},t_{i})={\frac {f(x_{i}+ct_{i})+f(x_{i}-ct_{i})}{2}}+{\frac {1}{2c}}\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx+{\frac {1}{2c}}\int _{0}^{t_{i}}\int _{x_{i}-c(t_{i}-t)}^{x_{i}+c(t_{i}-t)}s(x,t)\,dx\,dt.}
In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices (xi, ti) compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source.
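The closed-form solution above can be evaluated pointwise by nested quadrature over the backward characteristic triangle. A minimal sketch follows; the data f, g and the source s are assumed examples, not taken from the text.

```python
import numpy as np

# Sketch: pointwise evaluation of the inhomogeneous 1-D wave solution at
# (xi, ti) using trapezoidal quadrature.
c = 1.0
f = lambda x: np.exp(-x**2)          # u(x, 0)   (assumed example)
g = lambda x: np.zeros_like(x)       # u_t(x, 0) (assumed example)
s = lambda x, t: np.exp(-x**2 - t)   # source term (assumed example)

def u(xi, ti, n=401):
    # d'Alembert part
    xs = np.linspace(xi - c*ti, xi + c*ti, n)
    val = 0.5*(f(xi - c*ti) + f(xi + c*ti)) + np.trapz(g(xs), xs)/(2*c)
    # source part: integrate s over the backward characteristic triangle
    ts = np.linspace(0.0, ti, n)
    inner = []
    for t in ts:
        xr = np.linspace(xi - c*(ti - t), xi + c*(ti - t), n)
        inner.append(np.trapz(s(xr, t), xr))
    return val + np.trapz(inner, ts)/(2*c)

print(u(0.0, 1.0))
```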
== Further generalizations ==
=== Elastic waves ===
The elastic wave equation (also known as the Navier–Cauchy equation) in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:
{\displaystyle \rho {\ddot {\mathbf {u} }}=\mathbf {f} +(\lambda +2\mu )\nabla (\nabla \cdot \mathbf {u} )-\mu \nabla \times (\nabla \times \mathbf {u} ),}
where:
λ and μ are the so-called Lamé parameters describing the elastic properties of the medium,
ρ is the density,
f is the source function (driving force),
u is the displacement vector.
By using ∇ × (∇ × u) = ∇(∇ ⋅ u) − ∇ ⋅ ∇ u = ∇(∇ ⋅ u) − ∆u, the elastic wave equation can be rewritten into the more common form of the Navier–Cauchy equation.
Note that in the elastic wave equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation.
As an aid to understanding, note that if f and ∇ ⋅ u are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field E, which has only transverse waves.
=== Dispersion relation ===
In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation
{\displaystyle \omega =\omega (\mathbf {k} ),}
where ω is the angular frequency, and k is the wavevector describing plane-wave solutions. For light waves, the dispersion relation is ω = ±c |k|, but in general, the constant speed c gets replaced by a variable phase velocity:
{\displaystyle v_{\text{p}}={\frac {\omega (k)}{k}}.}
== See also ==
== Notes ==
== References ==
Flint, H.T. (1929) "Wave Mechanics" Methuen & Co. Ltd. London.
Atiyah, M. F.; Bott, R.; Gårding, L. (1970). "Lacunas for hyperbolic differential operators with constant coefficients I". Acta Mathematica. 124: 109–189. doi:10.1007/BF02394570. ISSN 0001-5962.
Atiyah, M. F.; Bott, R.; Gårding, L. (1973). "Lacunas for hyperbolic differential operators with constant coefficients. II". Acta Mathematica. 131: 145–206. doi:10.1007/BF02392039. ISSN 0001-5962.
R. Courant, D. Hilbert, Methods of Mathematical Physics, vol II. Interscience (Wiley) New York, 1962.
Evans, Lawrence C. (2010). Partial Differential Equations. Providence (R.I.): American Mathematical Soc. ISBN 978-0-8218-4974-3.
"Linear Wave Equations", EqWorld: The World of Mathematical Equations.
"Nonlinear Wave Equations", EqWorld: The World of Mathematical Equations.
William C. Lane, "MISN-0-201 The Wave Equation and Its Solutions", Project PHYSNET.
== External links ==
Nonlinear Wave Equations by Stephen Wolfram and Rob Knapp, Nonlinear Wave Equation Explorer by Wolfram Demonstrations Project.
Mathematical aspects of wave equations are discussed on the Dispersive PDE Wiki Archived 2007-04-25 at the Wayback Machine.
Graham W Griffiths and William E. Schiesser (2009). Linear and nonlinear waves. Scholarpedia, 4(7):4308. doi:10.4249/scholarpedia.4308
In mathematics, delay differential equations (DDEs) are a type of differential equation in which the derivative of the unknown function at a certain time is given in terms of the values of the function at previous times.
DDEs are also called time-delay systems, systems with aftereffect or dead-time, hereditary systems, equations with deviating argument, or differential-difference equations. They belong to the class of systems with the functional state, i.e. partial differential equations (PDEs) which are infinite dimensional, as opposed to ordinary differential equations (ODEs) having a finite dimensional state vector. Four points may give a possible explanation of the popularity of DDEs:
Aftereffect is an applied problem: it is well known that, together with the increasing expectations of dynamic performances, engineers need their models to behave more like the real process. Many processes include aftereffect phenomena in their inner dynamics. In addition, actuators, sensors, and communication networks that are now involved in feedback control loops introduce such delays. Finally, besides actual delays, time lags are frequently used to simplify very high order models. Then, the interest for DDEs keeps on growing in all scientific areas and, especially, in control engineering.
Delay systems are still resistant to many classical controllers: one could think that the simplest approach would consist in replacing them by some finite-dimensional approximations. Unfortunately, ignoring effects which are adequately represented by DDEs is not a general alternative: in the best situation (constant and known delays), it leads to the same degree of complexity in the control design. In worst cases (time-varying delays, for instance), it is potentially disastrous in terms of stability and oscillations.
Voluntary introduction of delays can benefit the control system.
In spite of their complexity, DDEs often appear as simple infinite-dimensional models in the very complex area of partial differential equations (PDEs).
A general form of the time-delay differential equation for
{\displaystyle x(t)\in \mathbb {R} ^{n}} is
{\displaystyle {\frac {d}{dt}}x(t)=f(t,x(t),x_{t}),}
where {\displaystyle x_{t}=\{x(\tau ):\tau \leq t\}} represents the trajectory of the solution in the past. In this equation, {\displaystyle f} is a functional operator from {\displaystyle \mathbb {R} \times \mathbb {R} ^{n}\times C^{1}(\mathbb {R} ,\mathbb {R} ^{n})} to {\displaystyle \mathbb {R} ^{n}.}
== Examples ==
Continuous delay
{\displaystyle {\frac {d}{dt}}x(t)=f\left(t,x(t),\int _{-\infty }^{0}x(t+\tau )\,d\mu (\tau )\right)}
Discrete delay
{\displaystyle {\frac {d}{dt}}x(t)=f(t,x(t),x(t-\tau _{1}),\dots ,x(t-\tau _{m}))}
for {\displaystyle \tau _{1}>\dots >\tau _{m}\geq 0.}
Linear with discrete delays
{\displaystyle {\frac {d}{dt}}x(t)=A_{0}x(t)+A_{1}x(t-\tau _{1})+\dots +A_{m}x(t-\tau _{m})}
where {\displaystyle A_{0},\dotsc ,A_{m}\in \mathbb {R} ^{n\times n}}.
Pantograph equation
{\displaystyle {\frac {d}{dt}}x(t)=ax(t)+bx(\lambda t),}
where a, b and λ are constants and 0 < λ < 1. This equation and some more general forms are named after the pantographs on trains.
== Solving DDEs ==
DDEs are mostly solved in a stepwise fashion with a principle called the method of steps. For instance, consider the DDE with a single delay
{\displaystyle {\frac {d}{dt}}x(t)=f(x(t),x(t-\tau ))}
with given initial condition {\displaystyle \phi \colon [-\tau ,0]\to \mathbb {R} ^{n}}. Then the solution on the interval {\displaystyle [0,\tau ]} is given by {\displaystyle \psi (t)}, which is the solution to the inhomogeneous initial value problem
{\displaystyle {\frac {d}{dt}}\psi (t)=f(\psi (t),\phi (t-\tau )),}
with {\displaystyle \psi (0)=\phi (0)}. This can be continued for the successive intervals by using the solution on the previous interval as the inhomogeneous term. In practice, the initial value problem is often solved numerically.
=== Example ===
Suppose {\displaystyle f(x(t),x(t-\tau ))=ax(t-\tau )} and {\displaystyle \phi (t)=1}. Then the initial value problem can be solved with integration,
{\displaystyle x(t)=x(0)+\int _{s=0}^{t}{\frac {d}{dt}}x(s)\,ds=1+a\int _{s=0}^{t}\phi (s-\tau )\,ds,}
i.e., {\displaystyle x(t)=at+1}, where the initial condition is given by {\displaystyle x(0)=\phi (0)=1}. Similarly, for the interval {\displaystyle t\in [\tau ,2\tau ]} we integrate and fit the initial condition,
{\displaystyle {\begin{aligned}x(t)=x(\tau )+\int _{s=\tau }^{t}{\frac {d}{dt}}x(s)\,ds&=(a\tau +1)+a\int _{s=\tau }^{t}\left(a(s-\tau )+1\right)ds\\&=(a\tau +1)+a\int _{s=0}^{t-\tau }\left(as+1\right)ds,\end{aligned}}}
i.e., {\textstyle x(t)=(a\tau +1)+a(t-\tau )\left({\frac {1}{2}}a(t-\tau )+1\right).}
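The same computation can be organized numerically: on each interval the delayed argument refers to an already-known segment, so each step is plain quadrature. A sketch for this example (with a = τ = 1 assumed), which reproduces the closed forms x(τ) = 2 and x(2τ) = 3.5 derived above:

```python
import numpy as np

# Method-of-steps sketch for x'(t) = a*x(t - tau) with history phi = 1,
# using the cumulative trapezoidal rule on each delay interval.
a, tau, n = 1.0, 1.0, 1001
t = np.linspace(0.0, tau, n)     # local coordinate on each interval
dt = t[1] - t[0]

x_prev = np.ones(n)              # phi(t) = 1 sampled on [-tau, 0]
x0 = 1.0                         # x(0) = phi(0)
for k in range(2):               # the intervals [0, tau] and [tau, 2*tau]
    integral = np.concatenate(
        ([0.0], np.cumsum((x_prev[1:] + x_prev[:-1]) * dt / 2)))
    x_cur = x0 + a * integral    # x(k*tau + s) = x(k*tau) + a * integral
    print(f"x({k + 1}*tau) = {x_cur[-1]:.4f}")   # expect 2.0, then 3.5
    x_prev, x0 = x_cur, x_cur[-1]
```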
== Reduction to ODE ==
In some cases, differential equations can be represented in a format that looks like delay differential equations.
Example 1 Consider an equation
{\displaystyle {\frac {d}{dt}}x(t)=f\left(t,x(t),\int _{-\infty }^{0}x(t+\tau )e^{\lambda \tau }\,d\tau \right).}
Introduce {\displaystyle y(t)=\int _{-\infty }^{0}x(t+\tau )e^{\lambda \tau }\,d\tau }
to get a system of ODEs
{\displaystyle {\frac {d}{dt}}x(t)=f(t,x,y),\quad {\frac {d}{dt}}y(t)=x-\lambda y.}
Example 2 An equation
{\displaystyle {\frac {d}{dt}}x(t)=f\left(t,x(t),\int _{-\infty }^{0}x(t+\tau )\cos(\alpha \tau +\beta )\,d\tau \right)}
is equivalent to
{\displaystyle {\frac {d}{dt}}x(t)=f(t,x,y),\quad {\frac {d}{dt}}y(t)=\cos(\beta )x+\alpha z,\quad {\frac {d}{dt}}z(t)=\sin(\beta )x-\alpha y,}
where
{\displaystyle y=\int _{-\infty }^{0}x(t+\tau )\cos(\alpha \tau +\beta )\,d\tau ,\quad z=\int _{-\infty }^{0}x(t+\tau )\sin(\alpha \tau +\beta )\,d\tau .}
== The characteristic equation ==
Similar to ODEs, many properties of linear DDEs can be characterized and analyzed using the characteristic equation. The characteristic equation associated with the linear DDE with discrete delays
{\displaystyle {\frac {d}{dt}}x(t)=A_{0}x(t)+A_{1}x(t-\tau _{1})+\dots +A_{m}x(t-\tau _{m})}
is the exponential polynomial given by
{\displaystyle \det(-\lambda I+A_{0}+A_{1}e^{-\tau _{1}\lambda }+\dotsb +A_{m}e^{-\tau _{m}\lambda })=0.}
The roots λ of the characteristic equation are called characteristic roots or eigenvalues and the solution set is often referred to as the spectrum. Because of the exponential in the characteristic equation, the DDE has, unlike the ODE case, an infinite number of eigenvalues, making a spectral analysis more involved. The spectrum does however have some properties which can be exploited in the analysis. For instance, even though there are an infinite number of eigenvalues, there are only a finite number of eigenvalues in any vertical strip of the complex plane.
This characteristic equation is a nonlinear eigenproblem and there are many methods to compute the spectrum numerically. In some special situations it is possible to solve the characteristic equation explicitly. Consider, for example, the following DDE:
{\displaystyle {\frac {d}{dt}}x(t)=-x(t-1).}
The characteristic equation is
{\displaystyle -\lambda -e^{-\lambda }=0.}
There are an infinite number of solutions to this equation for complex λ. They are given by
{\displaystyle \lambda =W_{k}(-1),}
where Wk is the kth branch of the Lambert W function, so:
{\displaystyle x(t)=x(0)\,e^{W_{k}(-1)\cdot t}.}
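The branches of the Lambert W function are available in standard libraries, so these characteristic roots can be enumerated directly. A sketch using SciPy's scipy.special.lambertw, which also verifies each root against the characteristic equation:

```python
import numpy as np
from scipy.special import lambertw

# Sketch: characteristic roots of x'(t) = -x(t-1) via lambda = W_k(-1);
# each integer k selects one branch of the Lambert W function.
for k in range(-2, 3):
    lam = lambertw(-1, k=k)
    residual = -lam - np.exp(-lam)   # should be ~0 for every root
    print(k, lam, abs(residual))
```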
== Another example ==
The following DDE:
{\displaystyle {\frac {d}{dt}}u(t)=2u(2t+1)-2u(2t-1).}
has as a solution in {\displaystyle \mathbb {R} } the function
{\displaystyle u(t)={\begin{cases}F(t+1),\quad |t|<1\\0,\quad |t|\geq 1\end{cases}}}
with {\displaystyle F(t)} the Fabius function, also known as the Rvachëv up function.
== Applications ==
Dynamics of diabetes
Epidemiology
Population dynamics
Classical electrodynamics
== See also ==
Functional differential equation
Halanay Inequality
== References ==
== Further reading ==
Bellen, Alfredo; Zennaro, Marino (2003). Numerical Methods for Delay Differential Equations. Numerical Mathematics and Scientific Computation. Oxford, UK: Oxford University Press. ISBN 978-0198506546.
Bellman, Richard; Cooke, Kenneth L. (1963). Differential-Difference Equations (PDF). Mathematics in Science and Engineering. New York, NY: Academic Press. ISBN 978-0120848508.
Briat, Corentin (2015). Linear Parameter-Varying and Time-Delay Systems: Analysis, Observation, Filtering & Control. Advances in Delays and Dynamics. Heidelberg, DE: Springer-Verlag. ISBN 978-3662440490.
Driver, Rodney D. (1977). Ordinary and Delay Differential Equations. Applied Mathematical Sciences. Vol. 20. New York, NY: Springer-Verlag. doi:10.1007/978-1-4684-9467-9. ISBN 978-0387902319.
Erneux, Thomas (2009). Applied Delay Differential Equations. Surveys and Tutorials in the Applied Mathematical Sciences. Vol. 3. New York, NY: Springer Science+Business Media. doi:10.1007/978-0-387-74372-1. ISBN 978-0387743714.
== External links ==
Skip Thompson (ed.). "Delay-Differential Equations". Scholarpedia.
Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also refer to the computation of integrals.
Many differential equations cannot be solved exactly. For practical purposes, however – such as in engineering – a numeric approximation to the solution is often sufficient. The algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from calculus to obtain a series expansion of the solution.
Ordinary differential equations occur in many scientific disciplines, including physics, chemistry, biology, and economics. In addition, some methods in numerical partial differential equations convert the partial differential equation into an ordinary differential equation, which must then be solved.
== The problem ==
A first-order differential equation is an initial value problem (IVP) of the form
{\displaystyle y'(t)=f(t,y(t)),\quad y(t_{0})=y_{0},\qquad (1)}
where {\displaystyle f} is a function {\displaystyle f:[t_{0},\infty )\times \mathbb {R} ^{d}\to \mathbb {R} ^{d}}, and the initial condition {\displaystyle y_{0}\in \mathbb {R} ^{d}} is a given vector. First-order means that only the first derivative of y appears in the equation, and higher derivatives are absent.
Without loss of generality to higher-order systems, we restrict ourselves to first-order differential equations, because a higher-order ODE can be converted into a larger system of first-order equations by introducing extra variables. For example, the second-order equation y′′ = −y can be rewritten as two first-order equations: y′ = z and z′ = −y.
In this section, we describe numerical methods for IVPs, and remark that boundary value problems (BVPs) require a different set of tools. In a BVP, one defines values, or components of the solution y at more than one point. Because of this, different methods need to be used to solve BVPs. For example, the shooting method (and its variants) or global methods like finite differences, Galerkin methods, or collocation methods are appropriate for that class of problems.
The Picard–Lindelöf theorem states that there is a unique solution, provided f is Lipschitz-continuous.
== Methods ==
Numerical methods for solving first-order IVPs often fall into one of two large categories: linear multistep methods, or Runge–Kutta methods. A further division can be realized by dividing methods into those that are explicit and those that are implicit. For example, implicit linear multistep methods include Adams–Moulton methods, and backward differentiation formulas (BDF), whereas implicit Runge–Kutta methods include diagonally implicit Runge–Kutta (DIRK), singly diagonally implicit Runge–Kutta (SDIRK), and Gauss–Radau (based on Gaussian quadrature) numerical methods. Explicit examples from the linear multistep family include the Adams–Bashforth methods, and any Runge–Kutta method with a lower diagonal Butcher tableau is explicit. A loose rule of thumb dictates that stiff differential equations require the use of implicit schemes, whereas non-stiff problems can be solved more efficiently with explicit schemes.
The so-called general linear methods (GLMs) are a generalization of the above two large classes of methods.
=== Euler method ===
From any point on a curve, you can find an approximation of a nearby point on the curve by moving a short distance along a line tangent to the curve.
Starting with the differential equation (1), we replace the derivative y′ by the finite difference approximation
$$y'(t)\approx {\frac {y(t+h)-y(t)}{h}},\qquad (2)$$
which when re-arranged yields the following formula
$$y(t+h)\approx y(t)+hy'(t)$$
and using (1) gives:
$$y(t+h)\approx y(t)+hf(t,y(t)).\qquad (3)$$
This formula is usually applied in the following way. We choose a step size h, and we construct the sequence
$$t_{0},\;t_{1}=t_{0}+h,\;t_{2}=t_{0}+2h,\;\dots$$
We denote by $y_{n}$ a numerical estimate of the exact solution $y(t_{n})$. Motivated by (3), we compute these estimates by the following recursive scheme
$$y_{n+1}=y_{n}+hf(t_{n},y_{n}).\qquad (4)$$
This is the Euler method (or forward Euler method, in contrast with the backward Euler method, to be described below). The method is named after Leonhard Euler who described it in 1768.
The Euler method is an example of an explicit method. This means that the new value yn+1 is defined in terms of things that are already known, like yn.
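To make the recursion concrete, here is a minimal sketch of the method in Python; the function name and the test problem y′ = −y are illustrative choices, not part of any standard library.

```python
import numpy as np

def euler(f, y0, t0, t_end, h):
    """Forward Euler: advance y' = f(t, y) from t0 to t_end with step h."""
    ts = np.arange(t0, t_end + h, h)
    ys = np.empty((len(ts), np.size(y0)))
    ys[0] = y0
    for n in range(len(ts) - 1):
        # y_{n+1} = y_n + h * f(t_n, y_n), i.e. scheme (4)
        ys[n + 1] = ys[n] + h * f(ts[n], ys[n])
    return ts, ys

# Example: y' = -y, y(0) = 1, exact solution exp(-t).
ts, ys = euler(lambda t, y: -y, y0=1.0, t0=0.0, t_end=1.0, h=0.1)
print(ys[-1], np.exp(-1.0))  # numerical estimate vs exact value at t = 1
```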
=== Backward Euler method ===
If, instead of (2), we use the approximation
$$y'(t)\approx {\frac {y(t)-y(t-h)}{h}},\qquad (5)$$
we get the backward Euler method:
$$y_{n+1}=y_{n}+hf(t_{n+1},y_{n+1}).\qquad (6)$$
The backward Euler method is an implicit method, meaning that we have to solve an equation to find yn+1. One often uses fixed-point iteration or (some modification of) the Newton–Raphson method to achieve this.
It costs more time to solve this equation than explicit methods; this cost must be taken into consideration when one selects the method to use. The advantage of implicit methods such as (6) is that they are usually more stable for solving a stiff equation, meaning that a larger step size h can be used.
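As an illustration, the following Python sketch implements backward Euler for a scalar equation, solving the implicit equation (6) at each step with a few Newton iterations whose derivative is approximated by finite differences; the names and the stiff test problem are our own assumptions, not a specific library's API.

```python
import math

def backward_euler(f, y0, t0, t_end, h, newton_iters=8):
    """Backward Euler for a scalar ODE y' = f(t, y).

    Each step solves g(z) = z - y_n - h*f(t_{n+1}, z) = 0 for z = y_{n+1}
    with a few Newton iterations (derivative approximated by finite differences).
    """
    t, y = t0, y0
    while t < t_end - 1e-12:
        t1, z = t + h, y                     # initial Newton guess: previous value
        for _ in range(newton_iters):
            g = z - y - h * f(t1, z)
            eps = 1e-8
            dg = 1.0 - h * (f(t1, z + eps) - f(t1, z)) / eps  # g'(z) by differences
            z -= g / dg
        t, y = t1, z
    return t, y

# Stiff example: y' = -50*(y - cos(t)). Backward Euler remains stable with h = 0.1.
print(backward_euler(lambda t, y: -50.0 * (y - math.cos(t)), 1.0, 0.0, 1.0, 0.1))
```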
=== First-order exponential integrator method ===
Exponential integrators describe a large class of integrators that have recently seen a lot of development. They date back to at least the 1960s.
In place of (1), we assume the differential equation is either of the form
$$y'(t)=-Ay+{\mathcal {N}}(y),\qquad (7)$$
or it has been locally linearized about a background state to produce a linear term $-Ay$ and a nonlinear term ${\mathcal {N}}(y)$.
Exponential integrators are constructed by multiplying (7) by $e^{At}$, and exactly integrating the result over a time interval $[t_{n},t_{n+1}=t_{n}+h]$:
$$y_{n+1}=e^{-Ah}y_{n}+\int _{0}^{h}e^{-(h-\tau )A}{\mathcal {N}}\left(y\left(t_{n}+\tau \right)\right)\,d\tau .$$
This integral equation is exact, but in general the integral cannot be evaluated in closed form, so it does not by itself define a numerical method.
The first-order exponential integrator can be realized by holding ${\mathcal {N}}(y(t_{n}+\tau ))$ constant over the full interval:
$$y_{n+1}=e^{-Ah}y_{n}+A^{-1}\left(1-e^{-Ah}\right){\mathcal {N}}(y(t_{n})).$$
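A minimal Python sketch of this first-order scheme (often called exponential Euler), assuming the linear part A is an invertible matrix so that the integral above equals A⁻¹(I − e^{−Ah}); the test problem and function names are illustrative.

```python
import numpy as np
from scipy.linalg import expm, solve

def exponential_euler(A, N, y0, h, steps):
    """First-order exponential integrator for y' = -A y + N(y).

    Holding N constant over each step gives
        y_{n+1} = e^{-Ah} y_n + A^{-1} (I - e^{-Ah}) N(y_n),
    assuming A is invertible (a common simplification of the general scheme).
    """
    E = expm(-A * h)                          # matrix exponential e^{-Ah}
    I = np.eye(A.shape[0])
    y = np.asarray(y0, dtype=float)
    for _ in range(steps):
        y = E @ y + solve(A, (I - E) @ N(y))  # A^{-1}(I - E) N(y_n) via a linear solve
    return y

A = np.array([[50.0, 0.0], [0.0, 2.0]])      # stiff linear part (illustrative)
N = lambda y: np.array([np.sin(y[1]), 0.1])  # mild nonlinearity (illustrative)
print(exponential_euler(A, N, [1.0, 0.0], h=0.1, steps=10))
```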
=== Generalizations ===
The Euler method is often not accurate enough. In more precise terms, it only has order one (the concept of order is explained below). This caused mathematicians to look for higher-order methods.
One possibility is to use not only the previously computed value yn to determine yn+1, but to make the solution depend on more past values. This yields a so-called multistep method. Perhaps the simplest is the leapfrog method, which is second order and (roughly speaking) relies on two time values.
Almost all practical multistep methods fall within the family of linear multistep methods, which have the form
$${\begin{aligned}&\alpha _{k}y_{n+k}+\alpha _{k-1}y_{n+k-1}+\cdots +\alpha _{0}y_{n}\\&\quad =h\left[\beta _{k}f(t_{n+k},y_{n+k})+\beta _{k-1}f(t_{n+k-1},y_{n+k-1})+\cdots +\beta _{0}f(t_{n},y_{n})\right].\end{aligned}}$$
Another possibility is to use more points in the interval $[t_{n},t_{n+1}]$. This leads to the family of Runge–Kutta methods, named after Carl Runge and Martin Kutta. One of their fourth-order methods is especially popular.
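For concreteness, here is a sketch of one step of the classical fourth-order Runge–Kutta method in Python; the function name and test problem are illustrative.

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: y' = -y, one step from y(0) = 1 with h = 0.1.
print(rk4_step(lambda t, y: -y, 0.0, 1.0, 0.1))  # close to exp(-0.1) ≈ 0.904837
```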
=== Advanced features ===
A good implementation of one of these methods for solving an ODE entails more than the time-stepping formula.
It is often inefficient to use the same step size all the time, so variable step-size methods have been developed. Usually, the step size is chosen such that the (local) error per step is below some tolerance level. This means that the methods must also compute an error indicator, an estimate of the local error.
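A crude sketch of this idea in Python, using step doubling with forward Euler: one full step is compared against two half steps, and their difference serves as the local error indicator. All names and the control constants are illustrative assumptions, not a production controller.

```python
def adaptive_euler(f, y0, t0, t_end, h0, tol):
    """Adaptive forward Euler via step doubling.

    One full step is compared with two half steps; their difference is a
    local error indicator used to accept/reject and rescale the step size.
    """
    t, y, h = t0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        full = y + h * f(t, y)                          # one step of size h
        half = y + (h / 2) * f(t, y)
        two_half = half + (h / 2) * f(t + h / 2, half)  # two steps of size h/2
        err = abs(two_half - full)                      # local error indicator
        if err <= tol:
            t, y = t + h, two_half                      # accept the better value
        # rescale h; the local error of Euler is O(h^2), hence the square root
        h *= min(2.0, max(0.2, 0.9 * (tol / (err + 1e-16)) ** 0.5))
    return t, y

print(adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0, h0=0.1, tol=1e-4))
```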
An extension of this idea is to choose dynamically between different methods of different orders (this is called a variable order method). Methods based on Richardson extrapolation, such as the Bulirsch–Stoer algorithm, are often used to construct various methods of different orders.
Other desirable features include:
dense output: cheap numerical approximations for the whole integration interval, and not only at the points t0, t1, t2, ...
event location: finding the times where, say, a particular function vanishes. This typically requires the use of a root-finding algorithm.
support for parallel computing.
when used for integrating with respect to time, time reversibility
=== Alternative methods ===
Many methods do not fall within the framework discussed here. Some classes of alternative methods are:
multiderivative methods, which use not only the function f but also its derivatives. This class includes Hermite–Obreschkoff methods and Fehlberg methods, as well as methods like the Parker–Sochacki method or Bychkov–Scherbakov method, which compute the coefficients of the Taylor series of the solution y recursively.
methods for second order ODEs. We said that all higher-order ODEs can be transformed to first-order ODEs of the form (1). While this is certainly true, it may not be the best way to proceed. In particular, Nyström methods work directly with second-order equations.
geometric integration methods are especially designed for special classes of ODEs (for example, symplectic integrators for the solution of Hamiltonian equations). They take care that the numerical solution respects the underlying structure or geometry of these classes.
Quantized state systems methods are a family of ODE integration methods based on the idea of state quantization. They are efficient when simulating sparse systems with frequent discontinuities.
=== Parallel-in-time methods ===
Some IVPs require integration at such high temporal resolution and/or over such long time intervals that classical serial time-stepping methods become computationally infeasible to run in real-time (e.g. IVPs in numerical weather prediction, plasma modelling, and molecular dynamics). Parallel-in-time (PinT) methods have been developed in response to these issues in order to reduce simulation runtimes through the use of parallel computing.
Early PinT methods (the earliest being proposed in the 1960s) were initially overlooked by researchers due to the fact that the parallel computing architectures that they required were not yet widely available. With more computing power available, interest was renewed in the early 2000s with the development of Parareal, a flexible, easy-to-use PinT algorithm that is suitable for solving a wide variety of IVPs. The advent of exascale computing has meant that PinT algorithms are attracting increasing research attention and are being developed in such a way that they can harness the world's most powerful supercomputers. The most popular methods as of 2023 include Parareal, PFASST, ParaDiag, and MGRIT.
== Analysis ==
Numerical analysis is not only the design of numerical methods, but also their analysis. Three central concepts in this analysis are:
convergence: whether the method approximates the solution,
order: how well it approximates the solution, and
stability: whether errors are damped out.
=== Convergence ===
A numerical method is said to be convergent if the numerical solution approaches the exact solution as the step size h goes to 0. More precisely, we require that for every ODE (1) with a Lipschitz function f and every t* > 0,
$$\lim _{h\to 0^{+}}\max _{n=0,1,\dots ,\lfloor t^{*}/h\rfloor }\left\|y_{n,h}-y(t_{n})\right\|=0.$$
All the methods mentioned above are convergent.
=== Consistency and order ===
Suppose the numerical method is
$$y_{n+k}=\Psi (t_{n+k};y_{n},y_{n+1},\dots ,y_{n+k-1};h).$$
The local (truncation) error of the method is the error committed by one step of the method. That is, it is the difference between the result given by the method, assuming that no error was made in earlier steps, and the exact solution:
$$\delta _{n+k}^{h}=\Psi \left(t_{n+k};y(t_{n}),y(t_{n+1}),\dots ,y(t_{n+k-1});h\right)-y(t_{n+k}).$$
The method is said to be consistent if
$$\lim _{h\to 0}{\frac {\delta _{n+k}^{h}}{h}}=0.$$
The method has order $p$ if
$$\delta _{n+k}^{h}=O(h^{p+1})\quad {\text{as }}h\to 0.$$
Hence a method is consistent if it has an order greater than 0. The (forward) Euler method (4) and the backward Euler method (6) introduced above both have order 1, so they are consistent. Most methods being used in practice attain higher order. Consistency is a necessary condition for convergence, but not sufficient; for a method to be convergent, it must be both consistent and zero-stable.
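One can check the order empirically: for a first-order method such as forward Euler, halving the step size should roughly halve the global error. A small Python sketch (the test problem y′ = −y is an illustrative choice):

```python
import numpy as np

def global_error_euler(h):
    """Global error at t = 1 of forward Euler applied to y' = -y, y(0) = 1."""
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y += h * (-y)
    return abs(y - np.exp(-1.0))

# Halving h should roughly halve the global error for a first-order method.
for h in (0.1, 0.05, 0.025):
    print(h, global_error_euler(h))
```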
A related concept is the global (truncation) error, the error sustained in all the steps one needs to reach a fixed time $t$. Explicitly, the global error at time $t$ is $y_{N}-y(t)$ where $N=(t-t_{0})/h$. The global error of a $p$th order one-step method is $O(h^{p})$; in particular, such a method is convergent. This statement is not necessarily true for multi-step methods.
=== Stability and stiffness ===
For some differential equations, standard methods (such as the Euler method, explicit Runge–Kutta methods, or multistep methods, for example Adams–Bashforth methods) exhibit instability in the solutions, though other methods may produce stable solutions. This "difficult behaviour" in the equation (which may not necessarily be complex itself) is described as stiffness, and is often caused by the presence of different time scales in the underlying problem. For example, a collision in a mechanical system such as an impact oscillator typically occurs on a much smaller time scale than the time for the motion of objects; this discrepancy makes for very "sharp turns" in the curves of the state parameters.
Stiff problems are ubiquitous in chemical kinetics, control theory, solid mechanics, weather forecasting, biology, plasma physics, and electronics. One way to overcome stiffness is to extend the notion of differential equation to that of differential inclusion, which allows for and models non-smoothness.
== History ==
Below is a timeline of some important developments in this field.
1768 - Leonhard Euler publishes his method.
1824 - Augustin Louis Cauchy proves convergence of the Euler method. In this proof, Cauchy uses the implicit Euler method.
1855 - First mention of the multistep methods of John Couch Adams in a letter written by Francis Bashforth.
1895 - Carl Runge publishes the first Runge–Kutta method.
1901 - Martin Kutta describes the popular fourth-order Runge–Kutta method.
1910 - Lewis Fry Richardson announces his extrapolation method, Richardson extrapolation.
1952 - Charles F. Curtiss and Joseph Oakland Hirschfelder coin the term stiff equations.
1963 - Germund Dahlquist introduces A-stability of integration methods.
== Numerical solutions to second-order one-dimensional boundary value problems ==
Boundary value problems (BVPs) are usually solved numerically by solving an approximately equivalent matrix problem obtained by discretizing the original BVP. The most commonly used method for numerically solving BVPs in one dimension is called the Finite Difference Method. This method takes advantage of linear combinations of point values to construct finite difference coefficients that describe derivatives of the function. For example, the second-order central difference approximation to the first derivative is given by:
$${\frac {u_{i+1}-u_{i-1}}{2h}}=u'(x_{i})+{\mathcal {O}}(h^{2}),$$
and the second-order central difference for the second derivative is given by:
$${\frac {u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}}=u''(x_{i})+{\mathcal {O}}(h^{2}).$$
In both of these formulae, $h=x_{i}-x_{i-1}$ is the distance between neighbouring x values on the discretized domain. One then constructs a linear system that can then be solved by standard matrix methods. For example, suppose the equation to be solved is:
$${\frac {d^{2}u}{dx^{2}}}-u=0,\qquad u(0)=0,\qquad u(1)=1.$$
The next step would be to discretize the problem and use linear derivative approximations such as
$$u''_{i}={\frac {u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}}$$
and solve the resulting system of linear equations. This would lead to equations such as:
$${\frac {u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}}-u_{i}=0,\quad \forall \,i=1,2,3,\dots ,n-1.$$
At first glance, this system of equations appears problematic because the equation seems to involve no terms that are not multiplied by variables, but in fact this is false. At i = 1 and n − 1 there is a term involving the boundary values $u(0)=u_{0}$ and $u(1)=u_{n}$, and since these two values are known, one can simply substitute them into this equation and as a result obtain a non-homogeneous system of linear equations that has non-trivial solutions.
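Putting the pieces together, here is a Python sketch that discretizes this boundary value problem, assembles the resulting tridiagonal linear system, and compares against the exact solution sinh(x)/sinh(1); the grid size is an arbitrary choice.

```python
import numpy as np

# Finite-difference solution of u'' - u = 0, u(0) = 0, u(1) = 1.
n = 50
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Interior equations: (u_{i+1} - 2u_i + u_{i-1})/h^2 - u_i = 0, i = 1..n-1.
A = np.zeros((n - 1, n - 1))
b = np.zeros(n - 1)
for i in range(n - 1):
    A[i, i] = -2.0 / h**2 - 1.0
    if i > 0:
        A[i, i - 1] = 1.0 / h**2
    if i < n - 2:
        A[i, i + 1] = 1.0 / h**2
b[-1] = -1.0 / h**2   # known boundary value u(1) = 1 moved to the right-hand side
# u(0) = 0 contributes nothing to b.

u_inner = np.linalg.solve(A, b)
u = np.concatenate(([0.0], u_inner, [1.0]))
exact = np.sinh(x) / np.sinh(1.0)   # exact solution of the BVP
print(np.max(np.abs(u - exact)))    # O(h^2) error
```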
== See also ==
Courant–Friedrichs–Lewy condition
Energy drift
General linear methods
List of numerical analysis topics#Numerical methods for ordinary differential equations
Reversible reference system propagation algorithm
Modelica Language and OpenModelica software
== Notes ==
== References ==
Bradie, Brian (2006). A Friendly Introduction to Numerical Analysis. Upper Saddle River, New Jersey: Pearson Prentice Hall. ISBN 978-0-13-013054-9.
J. C. Butcher, Numerical methods for ordinary differential equations, ISBN 0-471-96758-0
Hairer, E.; Nørsett, S. P.; Wanner, G. (1993). Solving Ordinary Differential Equations. I. Nonstiff Problems. Springer Series in Computational Mathematics. Vol. 8 (2nd ed.). Springer-Verlag, Berlin. ISBN 3-540-56670-8. MR 1227985.
Ernst Hairer and Gerhard Wanner, Solving ordinary differential equations II: Stiff and differential-algebraic problems, second edition, Springer Verlag, Berlin, 1996. ISBN 3-540-60452-9. (This two-volume monograph systematically covers all aspects of the field.)
Hochbruck, Marlis; Ostermann, Alexander (May 2010). "Exponential integrators". Acta Numerica. 19: 209–286. Bibcode:2010AcNum..19..209H. CiteSeerX 10.1.1.187.6794. doi:10.1017/S0962492910000048. S2CID 4841957.
Arieh Iserles, A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, 1996. ISBN 0-521-55376-8 (hardback), ISBN 0-521-55655-4 (paperback). (Textbook, targeting advanced undergraduate and postgraduate students in mathematics, which also discusses numerical partial differential equations.)
John Denholm Lambert, Numerical Methods for Ordinary Differential Systems, John Wiley & Sons, Chichester, 1991. ISBN 0-471-92990-5. (Textbook, slightly more demanding than the book by Iserles.)
== External links ==
Joseph W. Rudmin, Application of the Parker–Sochacki Method to Celestial Mechanics Archived 2016-05-16 at the Portuguese Web Archive, 1998.
Dominique Tournès, L'intégration approchée des équations différentielles ordinaires (1671–1914), doctoral thesis, Université Paris 7 - Denis Diderot, June 1996. Reprinted: Villeneuve d'Ascq: Presses universitaires du Septentrion, 1997, 468 p. (Extensive online material on ODE numerical analysis history; for English-language material on the history of ODE numerical analysis, see, for example, the paper books by Chabert and Goldstine quoted by him.)
Pchelintsev, A.N. (2020). "An accurate numerical method and algorithm for constructing solutions of chaotic systems". Journal of Applied Nonlinear Dynamics. 9 (2): 207–221. arXiv:2011.10664. doi:10.5890/JAND.2020.06.004. S2CID 225853788.
kv on GitHub (C++ library with rigorous ODE solvers)
INTLAB (A library made by MATLAB/GNU Octave which includes rigorous ODE solvers) | Wikipedia/Numerical_ordinary_differential_equations |
Electrostatics is a branch of physics that studies slow-moving or stationary electric charges.
Since classical times, it has been known that some materials, such as amber, attract lightweight particles after rubbing. The Greek word ḗlektron (ἤλεκτρον), meaning 'amber', was thus the root of the word electricity. Electrostatic phenomena arise from the forces that electric charges exert on each other. Such forces are described by Coulomb's law.
There are many examples of electrostatic phenomena, from those as simple as the attraction of plastic wrap to one's hand after it is removed from a package, to the apparently spontaneous explosion of grain silos, the damage of electronic components during manufacturing, and photocopier and laser printer operation.
The electrostatic model accurately predicts electrical phenomena in "classical" cases where the velocities are low and the system is macroscopic so no quantum effects are involved. It also plays a role in quantum mechanics, where additional terms also need to be included.
== Coulomb's law ==
Coulomb's law states that:
The magnitude of the electrostatic force of attraction or repulsion between two point charges is directly proportional to the product of the magnitudes of charges and inversely proportional to the square of the distance between them.
The force is along the straight line joining them. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive.
If $r$ is the distance (in meters) between two charges, then the force between two point charges $Q$ and $q$ is:
$$F={1 \over 4\pi \varepsilon _{0}}{|Qq| \over r^{2}},$$
where ε0 = 8.8541878188(14)×10⁻¹² F⋅m⁻¹ is the vacuum permittivity.
The SI unit of ε0 is equivalently A²⋅s⁴⋅kg⁻¹⋅m⁻³ or C²⋅N⁻¹⋅m⁻² or F⋅m⁻¹.
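As a direct transcription of this formula, a small Python sketch; the charge values in the example are arbitrary.

```python
from math import pi

EPS0 = 8.8541878188e-12   # vacuum permittivity, F/m

def coulomb_force(Q, q, r):
    """|F| in newtons for charges Q, q (coulombs) separated by r (meters)."""
    return abs(Q * q) / (4 * pi * EPS0 * r**2)

# Two 1 µC charges 10 cm apart:
print(coulomb_force(1e-6, 1e-6, 0.1))   # ≈ 0.899 N
```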
== Electric field ==
The electric field, $\mathbf {E}$, in units of newtons per coulomb or volts per meter, is a vector field that can be defined everywhere, except at the location of point charges (where it diverges to infinity). It is defined as the electrostatic force $\mathbf {F}$ on a hypothetical small test charge at the point due to Coulomb's law, divided by the charge $q$:
$$\mathbf {E} ={\mathbf {F} \over q}$$
Electric field lines are useful for visualizing the electric field. Field lines begin on positive charge and terminate on negative charge. They are parallel to the direction of the electric field at each point, and the density of these field lines is a measure of the magnitude of the electric field at any given point.
A collection of $n$ particles of charge $q_{i}$, located at points $\mathbf {r} _{i}$ (called source points) generates the electric field at $\mathbf {r}$ (called the field point) of:
$$\mathbf {E} (\mathbf {r} )={1 \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{{\hat {\mathbf {r-r_{i}} }} \over {|\mathbf {r-r_{i}} |}^{2}}={1 \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{\mathbf {r-r_{i}} \over {|\mathbf {r-r_{i}} |}^{3}},$$
where $\mathbf {r} -\mathbf {r} _{i}$ is the displacement vector from a source point $\mathbf {r} _{i}$ to the field point $\mathbf {r}$, and ${\hat {\mathbf {r-r_{i}} }}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {\mathbf {r-r_{i}} }{|\mathbf {r-r_{i}} |}}$ is the unit vector of the displacement vector that indicates the direction of the field due to the source at point $\mathbf {r} _{i}$. For a single point charge, $q$, at the origin, the magnitude of this electric field is $E=q/4\pi \varepsilon _{0}r^{2}$
and points away from that charge if it is positive. The fact that the force (and hence the field) can be calculated by summing over all the contributions due to individual source particles is an example of the superposition principle. The electric field produced by a distribution of charges is given by the volume charge density
$\rho (\mathbf {r} )$ and can be obtained by converting this sum into a triple integral:
$$\mathbf {E} (\mathbf {r} )={\frac {1}{4\pi \varepsilon _{0}}}\iiint \,\rho (\mathbf {r} '){\mathbf {r-r'} \over {|\mathbf {r-r'} |}^{3}}\,\mathrm {d} ^{3}\mathbf {r} '$$
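The superposition sum above translates directly into code. A minimal Python sketch; the dipole configuration is an arbitrary example.

```python
import numpy as np

EPS0 = 8.8541878188e-12  # vacuum permittivity, F/m

def e_field(field_point, charges):
    """Electric field at `field_point` from a list of (q_i, r_i) point charges,
    by direct superposition of the sum above."""
    r = np.asarray(field_point, dtype=float)
    E = np.zeros(3)
    for q, ri in charges:
        d = r - np.asarray(ri, dtype=float)     # displacement r - r_i
        E += q * d / np.linalg.norm(d) ** 3     # q_i (r - r_i)/|r - r_i|^3
    return E / (4 * np.pi * EPS0)

# A dipole: +1 nC at (0, 0, 0.01) and -1 nC at (0, 0, -0.01), field on the axis.
print(e_field([0.0, 0.0, 0.1], [(1e-9, (0, 0, 0.01)), (-1e-9, (0, 0, -0.01))]))
```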
=== Gauss's law ===
Gauss's law states that "the total electric flux through any closed surface in free space of any shape drawn in an electric field is proportional to the total electric charge enclosed by the surface." Many numerical problems can be solved by considering a Gaussian surface around a body. Mathematically, Gauss's law takes the form of an integral equation:
$$\Phi _{E}=\oint _{S}\mathbf {E} \cdot \mathrm {d} \mathbf {A} ={Q_{\text{enclosed}} \over \varepsilon _{0}}=\int _{V}{\rho \over \varepsilon _{0}}\,\mathrm {d} ^{3}r,$$
where $\mathrm {d} ^{3}r=\mathrm {d} x\ \mathrm {d} y\ \mathrm {d} z$ is a volume element. If the charge is distributed over a surface or along a line, replace $\rho \,\mathrm {d} ^{3}r$ by $\sigma \,\mathrm {d} A$ or $\lambda \,\mathrm {d} \ell$. The divergence theorem allows Gauss's Law to be written in differential form:
$$\nabla \cdot \mathbf {E} ={\rho \over \varepsilon _{0}},$$
where $\nabla \cdot$ is the divergence operator.
=== Poisson and Laplace equations ===
The definition of electrostatic potential, combined with the differential form of Gauss's law (above), provides a relationship between the potential Φ and the charge density ρ:
$${\nabla }^{2}\phi =-{\rho \over \varepsilon _{0}}.$$
This relationship is a form of Poisson's equation. In the absence of unpaired electric charge, the equation becomes Laplace's equation:
$${\nabla }^{2}\phi =0.$$
== Electrostatic approximation ==
If the electric field in a system can be assumed to result from static charges, that is, a system that exhibits no significant time-varying magnetic fields, the system is justifiably analyzed using only the principles of electrostatics. This is called the "electrostatic approximation".
The validity of the electrostatic approximation rests on the assumption that the electric field is irrotational, or nearly so:
$$\nabla \times \mathbf {E} \approx 0.$$
From Faraday's law, this assumption implies the absence or near-absence of time-varying magnetic fields:
$${\partial \mathbf {B} \over \partial t}\approx 0.$$
In other words, electrostatics does not require the absence of magnetic fields or electric currents. Rather, if magnetic fields or electric currents do exist, they must not change with time, or in the worst-case, they must change with time only very slowly. In some problems, both electrostatics and magnetostatics may be required for accurate predictions, but the coupling between the two can still be ignored. Electrostatics and magnetostatics can both be seen as non-relativistic Galilean limits for electromagnetism. In addition, conventional electrostatics ignores quantum effects, which have to be added for a complete description.
=== Electrostatic potential ===
As the electric field is irrotational, it is possible to express the electric field as the gradient of a scalar function, $\phi$, called the electrostatic potential (also known as the voltage). An electric field, $\mathbf {E}$, points from regions of high electric potential to regions of low electric potential, expressed mathematically as
$$\mathbf {E} =-\nabla \phi .$$
The gradient theorem can be used to establish that the electrostatic potential is the amount of work per unit charge required to move a charge from point $a$ to point $b$ with the following line integral:
$$-\int _{a}^{b}{\mathbf {E} \cdot \mathrm {d} \mathbf {\ell } }=\phi (\mathbf {b} )-\phi (\mathbf {a} ).$$
From these equations, we see that the electric potential is constant in any region for which the electric field vanishes (such as occurs inside a conducting object).
=== Electrostatic energy ===
A test particle's potential energy, $U_{\mathrm {E} }^{\text{single}}$, can be calculated from a line integral of the work, $q_{n}\mathbf {E} \cdot \mathrm {d} \mathbf {\ell }$. We integrate from a point at infinity, and assume a collection of $N$ particles of charge $Q_{n}$ are already situated at the points $\mathbf {r} _{i}$. This potential energy (in joules) is:
$$U_{\mathrm {E} }^{\text{single}}=q\phi (\mathbf {r} )={\frac {q}{4\pi \varepsilon _{0}}}\sum _{i=1}^{N}{\frac {Q_{i}}{\left\|\mathbf {R} _{i}\right\|}}$$
where $\mathbf {R} _{i}=\mathbf {r} -\mathbf {r} _{i}$ is the displacement from each charge $Q_{i}$ to the test charge $q$, which is situated at the point $\mathbf {r}$, and $\phi (\mathbf {r} )$ is the electric potential that would be at $\mathbf {r}$ if the test charge were not present. If only two charges are present, the potential energy is $Q_{1}Q_{2}/(4\pi \varepsilon _{0}r)$.
The total electric potential energy due to a collection of N charges is calculated by assembling these particles one at a time:
$$U_{\mathrm {E} }^{\text{total}}={\frac {1}{4\pi \varepsilon _{0}}}\sum _{j=1}^{N}Q_{j}\sum _{i=1}^{j-1}{\frac {Q_{i}}{r_{ij}}}={\frac {1}{2}}\sum _{i=1}^{N}Q_{i}\phi _{i},$$
where the following sum from j = 1 to N excludes i = j:
$$\phi _{i}={\frac {1}{4\pi \varepsilon _{0}}}\sum _{\stackrel {j=1}{j\neq i}}^{N}{\frac {Q_{j}}{r_{ij}}}.$$
This electric potential, $\phi _{i}$, is what would be measured at $\mathbf {r} _{i}$ if the charge $Q_{i}$ were missing. This formula obviously excludes the (infinite) energy that would be required to assemble each point charge from a disperse cloud of charge. The sum over charges can be converted into an integral over charge density using the prescription $\sum (\cdots )\rightarrow \int (\cdots )\rho \,\mathrm {d} ^{3}r$:
$$U_{\mathrm {E} }^{\text{total}}={\frac {1}{2}}\int \rho (\mathbf {r} )\phi (\mathbf {r} )\,\mathrm {d} ^{3}r={\frac {\varepsilon _{0}}{2}}\int \left|{\mathbf {E} }\right|^{2}\,\mathrm {d} ^{3}r,$$
This second expression for electrostatic energy uses the fact that the electric field is the negative gradient of the electric potential, as well as vector calculus identities in a way that resembles integration by parts. These two integrals for electric field energy seem to indicate two mutually exclusive formulas for electrostatic energy density, namely ${\tfrac {1}{2}}\rho \phi$ and ${\tfrac {1}{2}}\varepsilon _{0}E^{2}$; they yield equal values for the total electrostatic energy only if both are integrated over all space.
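The pairwise assembly argument above can be transcribed directly. A minimal Python sketch; the triangular configuration is an arbitrary example.

```python
import numpy as np
from itertools import combinations

EPS0 = 8.8541878188e-12  # vacuum permittivity, F/m

def total_energy(charges):
    """Total electrostatic energy of point charges, summing
    Q_i Q_j / (4*pi*eps0*r_ij) over each unordered pair, as in the formula above."""
    U = 0.0
    for (qi, ri), (qj, rj) in combinations(charges, 2):
        r = np.linalg.norm(np.asarray(ri, float) - np.asarray(rj, float))
        U += qi * qj / (4 * np.pi * EPS0 * r)
    return U

# Three 1 nC charges at the corners of a 1 cm equilateral triangle:
pts = [(1e-9, (0, 0, 0)), (1e-9, (0.01, 0, 0)), (1e-9, (0.005, 0.00866, 0))]
print(total_energy(pts))  # positive: work is required to assemble like charges
```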
=== Electrostatic pressure ===
On a conductor, a surface charge will experience a force in the presence of an electric field. This force is the average of the discontinuous electric field at the surface charge. This average in terms of the field just outside the surface amounts to:
$$P={\frac {\varepsilon _{0}}{2}}E^{2}.$$
This pressure tends to draw the conductor into the field, regardless of the sign of the surface charge.
== See also ==
Electromagnetism – Fundamental interaction between charged particles
Electrostatic generator, machines that create static electricity.
Electrostatic induction, separation of charges due to electric fields.
Permittivity and relative permittivity, the electric polarizability of materials.
Quantization of charge, the charge units carried by electrons or protons.
Static electricity, stationary charge accumulated on a material.
Triboelectric effect, separation of charges due to sliding or contact.
== References ==
== Further reading ==
Hermann A. Haus; James R. Melcher (1989). Electromagnetic Fields and Energy. Englewood Cliffs, NJ: Prentice-Hall. ISBN 0-13-249020-X.
Halliday, David; Robert Resnick; Kenneth S. Krane (1992). Physics. New York: John Wiley & Sons. ISBN 0-471-80457-6.
Griffiths, David J. (1999). Introduction to Electrodynamics. Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-805326-X.
== External links ==
Media related to Electrostatics at Wikimedia Commons
The Feynman Lectures on Physics Vol. II Ch. 4: Electrostatics
Introduction to Electrostatics: Point charges can be treated as a distribution using the Dirac delta function
Learning materials related to Electrostatics at Wikiversity | Wikipedia/Electrostatics |
In category theory, a branch of mathematics, the abstract notion of a limit captures the essential properties of universal constructions such as products, pullbacks and inverse limits. The dual notion of a colimit generalizes constructions such as disjoint unions, direct sums, coproducts, pushouts and direct limits.
Limits and colimits, like the strongly related notions of universal properties and adjoint functors, exist at a high level of abstraction. In order to understand them, it is helpful to first study the specific examples these concepts are meant to generalize.
== Definition ==
Limits and colimits in a category $C$ are defined by means of diagrams in $C$. Formally, a diagram of shape $J$ in $C$ is a functor from $J$ to $C$:
$$F:J\to C.$$
The category $J$ is thought of as an index category, and the diagram $F$ is thought of as indexing a collection of objects and morphisms in $C$ patterned on $J$.
One is most often interested in the case where the category $J$ is a small or even finite category. A diagram is said to be small or finite whenever $J$ is.
=== Limits ===
Let $F:J\to C$ be a diagram of shape $J$ in a category $C$. A cone to $F$ is an object $N$ of $C$ together with a family $\psi _{X}:N\to F(X)$ of morphisms indexed by the objects $X$ of $J$, such that for every morphism $f:X\to Y$ in $J$, we have $F(f)\circ \psi _{X}=\psi _{Y}$.
A limit of the diagram $F:J\to C$ is a cone $(L,\phi )$ to $F$ such that for every cone $(N,\psi )$ to $F$ there exists a unique morphism $u:N\to L$ such that $\phi _{X}\circ u=\psi _{X}$ for all $X$ in $J$.
One says that the cone $(N,\psi )$ factors through the cone $(L,\phi )$ with the unique factorization $u$. The morphism $u$ is sometimes called the mediating morphism.
Limits are also referred to as universal cones, since they are characterized by a universal property (see below for more information). As with every universal property, the above definition describes a balanced state of generality: The limit object $L$ has to be general enough to allow any cone to factor through it; on the other hand, $L$ has to be sufficiently specific, so that only one such factorization is possible for every cone.
Limits may also be characterized as terminal objects in the category of cones to F.
It is possible that a diagram does not have a limit at all. However, if a diagram does have a limit then this limit is essentially unique: it is unique up to a unique isomorphism. For this reason one often speaks of the limit of F.
=== Colimits ===
The dual notions of limits and cones are colimits and co-cones. Although it is straightforward to obtain the definitions of these by inverting all morphisms in the above definitions, we will explicitly state them here:
A co-cone of a diagram $F:J\to C$ is an object $N$ of $C$ together with a family of morphisms $\psi _{X}:F(X)\to N$ for every object $X$ of $J$, such that for every morphism $f:X\to Y$ in $J$, we have $\psi _{Y}\circ F(f)=\psi _{X}$.
A colimit of a diagram $F:J\to C$ is a co-cone $(L,\phi )$ of $F$ such that for any other co-cone $(N,\psi )$ of $F$ there exists a unique morphism $u:L\to N$ such that $u\circ \phi _{X}=\psi _{X}$ for all $X$ in $J$.
Colimits are also referred to as universal co-cones. They can be characterized as initial objects in the category of co-cones from $F$.
As with limits, if a diagram $F$ has a colimit then this colimit is unique up to a unique isomorphism.
=== Variations ===
Limits and colimits can also be defined for collections of objects and morphisms without the use of diagrams. The definitions are the same (note that in definitions above we never needed to use composition of morphisms in $J$). This variation, however, adds no new information. Any collection of objects and morphisms defines a (possibly large) directed graph $G$. If we let $J$ be the free category generated by $G$, there is a universal diagram $F:J\to C$ whose image contains $G$. The limit (or colimit) of this diagram is the same as the limit (or colimit) of the original collection of objects and morphisms.
Weak limit and weak colimits are defined like limits and colimits, except that the uniqueness property of the mediating morphism is dropped.
== Examples ==
=== Limits ===
The definition of limits is general enough to subsume several constructions useful in practical settings. In the following we will consider the limit (L, φ) of a diagram F : J → C.
Terminal objects. If J is the empty category there is only one diagram of shape J: the empty one (similar to the empty function in set theory). A cone to the empty diagram is essentially just an object of C. The limit of F is then any object through which every other object uniquely factors, i.e., to which every object of C has exactly one morphism. This is just the definition of a terminal object.
Products. If J is a discrete category then a diagram F is essentially nothing but a family of objects of C, indexed by J. The limit L of F is called the product of these objects. The cone φ consists of a family of morphisms φX : L → F(X) called the projections of the product. In the category of sets, for instance, the products are given by Cartesian products and the projections are just the natural projections onto the various factors (see the sketch following these examples).
Powers. A special case of a product is when the diagram F is a constant functor to an object X of C. The limit of this diagram is called the Jth power of X and denoted XJ.
Equalizers. If J is a category with two objects and two parallel morphisms from one object to the other, then a diagram of shape J is a pair of parallel morphisms in C. The limit L of such a diagram is called an equalizer of those morphisms.
Kernels. A kernel is a special case of an equalizer where one of the morphisms is a zero morphism.
Pullbacks. Let F be a diagram that picks out three objects X, Y, and Z in C, where the only non-identity morphisms are f : X → Z and g : Y → Z. The limit L of F is called a pullback or a fiber product. It can nicely be visualized as a commutative square:
Inverse limits. Let J be a directed set (considered as a small category by adding arrows i → j if and only if i ≥ j) and let F : Jop → C be a diagram. The limit of F is called an inverse limit or projective limit.
If J = 1, the category with a single object and morphism, then a diagram of shape J is essentially just an object X of C. A cone to an object X is just a morphism with codomain X. A morphism f : Y → X is a limit of the diagram X if and only if f is an isomorphism. More generally, if J is any category with an initial object i, then any diagram of shape J has a limit, namely any object isomorphic to F(i). Such an isomorphism uniquely determines a universal cone to F.
Topological limits. Limits of functions are a special case of limits of filters, which are related to categorical limits as follows. Given a topological space X, denote by F the set of filters on X, x ∈ X a point, V(x) ∈ F the neighborhood filter of x, A ∈ F a particular filter and $F_{x,A}=\{G\in F\mid V(x)\cup A\subset G\}$ the set of filters finer than A and that converge to x. The filters F are given a small and thin category structure by adding an arrow A → B if and only if A ⊆ B. The injection $I_{x,A}:F_{x,A}\to F$ becomes a functor and the following equivalence holds: x is a topological limit of A if and only if A is a categorical limit of $I_{x,A}$.
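To make the universal property of products concrete in the category of sets (as promised in the Products example above), here is a small Python sketch; all the sets and functions are arbitrary illustrations. The mediating morphism into a binary product X × Y is forced to be u(n) = (f(n), g(n)).

```python
from itertools import product

X, Y = {1, 2}, {"a", "b"}
P = set(product(X, Y))                  # the product object X × Y in Set
proj_X = lambda p: p[0]                 # projection φ_X : P → X
proj_Y = lambda p: p[1]                 # projection φ_Y : P → Y

# Any cone (N, f : N → X, g : N → Y) factors uniquely through P:
N = {"n1", "n2"}
f = {"n1": 1, "n2": 2}.__getitem__
g = {"n1": "a", "n2": "a"}.__getitem__
u = lambda n: (f(n), g(n))              # the unique mediating morphism u : N → P

# The cone commutes: proj_X ∘ u = f and proj_Y ∘ u = g.
assert all(proj_X(u(n)) == f(n) and proj_Y(u(n)) == g(n) for n in N)
print([u(n) for n in sorted(N)])
```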
=== Colimits ===
Examples of colimits are given by the dual versions of the examples above:
Initial objects are colimits of empty diagrams.
Coproducts are colimits of diagrams indexed by discrete categories.
Copowers are colimits of constant diagrams from discrete categories.
Coequalizers are colimits of a parallel pair of morphisms.
Cokernels are coequalizers of a morphism and a parallel zero morphism.
Pushouts are colimits of a pair of morphisms with common domain.
Direct limits are colimits of diagrams indexed by directed sets.
== Properties ==
=== Existence of limits ===
A given diagram F : J → C may or may not have a limit (or colimit) in C. Indeed, there may not even be a cone to F, let alone a universal cone.
A category C is said to have limits of shape J if every diagram of shape J has a limit in C. Specifically, a category C is said to
have products if it has limits of shape J for every small discrete category J (it need not have large products),
have equalizers if it has limits of shape $\bullet \rightrightarrows \bullet$ (i.e. every parallel pair of morphisms has an equalizer),
have pullbacks if it has limits of shape $\bullet \rightarrow \bullet \leftarrow \bullet$ (i.e. every pair of morphisms with common codomain has a pullback).
A complete category is a category that has all small limits (i.e. all limits of shape J for every small category J).
One can also make the dual definitions. A category has colimits of shape J if every diagram of shape J has a colimit in C. A cocomplete category is one that has all small colimits.
The existence theorem for limits states that if a category C has equalizers and all products indexed by the classes Ob(J) and Hom(J), then C has all limits of shape J (§V.2, Thm. 1). In this case, the limit of a diagram F : J → C can be constructed as the equalizer of the two morphisms (§V.2, Thm. 2)
$$s,t:\prod _{i\in \operatorname {Ob} (J)}F(i)\rightrightarrows \prod _{f\in \operatorname {Hom} (J)}F(\operatorname {cod} (f))$$
given (in component form) by
$${\begin{aligned}s&={\bigl (}F(f)\circ \pi _{\operatorname {dom} (f)}{\bigr )}_{f\in \operatorname {Hom} (J)}\\t&={\bigl (}\pi _{\operatorname {cod} (f)}{\bigr )}_{f\in \operatorname {Hom} (J)}.\end{aligned}}$$
There is a dual existence theorem for colimits in terms of coequalizers and coproducts. Both of these theorems give sufficient conditions for the existence of all (co)limits of shape J.
=== Universal property ===
Limits and colimits are important special cases of universal constructions.
Let C be a category and let J be a small index category. The functor category CJ may be thought of as the category of all diagrams of shape J in C. The diagonal functor $\Delta :{\mathcal {C}}\to {\mathcal {C}}^{\mathcal {J}}$ is the functor that maps each object N in C to the constant functor Δ(N) : J → C to N. That is, Δ(N)(X) = N for each object X in J and Δ(N)(f) = idN for each morphism f in J.
Given a diagram F: J → C (thought of as an object in CJ), a natural transformation ψ : Δ(N) → F (which is just a morphism in the category CJ) is the same thing as a cone from N to F. To see this, first note that Δ(N)(X) = N for all X implies that the components of ψ are morphisms ψX : N → F(X), which all share the domain N. Moreover, the requirement that the cone's diagrams commute is true simply because this ψ is a natural transformation. (Dually, a natural transformation ψ : F → Δ(N) is the same thing as a co-cone from F to N.)
Therefore, the definitions of limits and colimits can then be restated in the form:
A limit of F is a universal morphism from Δ to F.
A colimit of F is a universal morphism from F to Δ.
=== Adjunctions ===
Like all universal constructions, the formation of limits and colimits is functorial in nature. In other words, if every diagram of shape J has a limit in C (for J small) there exists a limit functor $\lim :{\mathcal {C}}^{\mathcal {J}}\to {\mathcal {C}}$ which assigns each diagram its limit and each natural transformation η : F → G the unique morphism lim η : lim F → lim G commuting with the corresponding universal cones. This functor is right adjoint to the diagonal functor Δ : C → CJ.
This adjunction gives a bijection between the set of all morphisms from N to lim F and the set of all cones from N to F
$$\operatorname {Hom} (N,\lim F)\cong \operatorname {Cone} (N,F)$$
which is natural in the variables N and F. The counit of this adjunction is simply the universal cone from lim F to F. If the index category J is connected (and nonempty) then the unit of the adjunction is an isomorphism so that lim is a left inverse of Δ. This fails if J is not connected. For example, if J is a discrete category, the components of the unit are the diagonal morphisms δ : N → NJ.
Dually, if every diagram of shape J has a colimit in C (for J small) there exists a colimit functor $\operatorname {colim} :{\mathcal {C}}^{\mathcal {J}}\to {\mathcal {C}}$
which assigns each diagram its colimit. This functor is left adjoint to the diagonal functor Δ : C → CJ, and one has a natural isomorphism
$$\operatorname {Hom} (\operatorname {colim} F,N)\cong \operatorname {Cocone} (F,N).$$
The unit of this adjunction is the universal cocone from F to colim F. If J is connected (and nonempty) then the counit is an isomorphism, so that colim is a left inverse of Δ.
Note that both the limit and the colimit functors are covariant functors.
=== As representations of functors ===
One can use Hom functors to relate limits and colimits in a category C to limits in Set, the category of sets. This follows, in part, from the fact the covariant Hom functor Hom(N, –) : C → Set preserves all limits in C. By duality, the contravariant Hom functor must take colimits to limits.
If a diagram F : J → C has a limit in C, denoted by lim F, there is a canonical isomorphism
$$\operatorname {Hom} (N,\lim F)\cong \lim \operatorname {Hom} (N,F-)$$
which is natural in the variable N. Here the functor Hom(N, F–) is the composition of the Hom functor Hom(N, –) with F. This isomorphism is the unique one which respects the limiting cones.
One can use the above relationship to define the limit of F in C. The first step is to observe that the limit of the functor Hom(N, F–) can be identified with the set of all cones from N to F:
$$\lim \operatorname {Hom} (N,F-)=\operatorname {Cone} (N,F).$$
The limiting cone is given by the family of maps πX : Cone(N, F) → Hom(N, FX) where πX(ψ) = ψX. If one is given an object L of C together with a natural isomorphism Φ : Hom(L, –) → Cone(–, F), the object L will be a limit of F with the limiting cone given by ΦL(idL). In fancy language, this amounts to saying that a limit of F is a representation of the functor Cone(–, F) : C → Set.
Dually, if a diagram F : J → C has a colimit in C, denoted colim F, there is a unique canonical isomorphism
$$\operatorname {Hom} (\operatorname {colim} F,N)\cong \lim \operatorname {Hom} (F-,N)$$
which is natural in the variable N and respects the colimiting cones. Identifying the limit of Hom(F–, N) with the set Cocone(F, N), this relationship can be used to define the colimit of the diagram F as a representation of the functor Cocone(F, –).
=== Interchange of limits and colimits of sets ===
Let I be a finite category and J be a small filtered category. For any bifunctor $F:I\times J\to \mathbf {Set}$,
there is a natural isomorphism
$$\operatorname {colim} \limits _{J}\lim _{I}F(i,j)\rightarrow \lim _{I}\operatorname {colim} \limits _{J}F(i,j).$$
In words, filtered colimits in Set commute with finite limits. It also holds that small limits commute with small limits and, dually, small colimits commute with small colimits.
== Functors and limits ==
If F : J → C is a diagram in C and G : C → D is a functor then by composition (recall that a diagram is just a functor) one obtains a diagram GF : J → D. A natural question is then:
“How are the limits of GF related to those of F?”
=== Preservation of limits ===
A functor G : C → D induces a map from Cone(F) to Cone(GF): if Ψ is a cone from N to F then GΨ is a cone from GN to GF. The functor G is said to preserve the limits of F if (GL, Gφ) is a limit of GF whenever (L, φ) is a limit of F. (Note that if the limit of F does not exist, then G vacuously preserves the limits of F.)
A functor G is said to preserve all limits of shape J if it preserves the limits of all diagrams F : J → C. For example, one can say that G preserves products, equalizers, pullbacks, etc. A continuous functor is one that preserves all small limits.
One can make analogous definitions for colimits. For instance, a functor G preserves the colimits of F if G(L, φ) is a colimit of GF whenever (L, φ) is a colimit of F. A cocontinuous functor is one that preserves all small colimits.
If C is a complete category, then, by the above existence theorem for limits, a functor G : C → D is continuous if and only if it preserves (small) products and equalizers. Dually, G is cocontinuous if and only if it preserves (small) coproducts and coequalizers.
An important property of adjoint functors is that every right adjoint functor is continuous and every left adjoint functor is cocontinuous. Since adjoint functors exist in abundance, this gives numerous examples of continuous and cocontinuous functors.
For a given diagram F : J → C and functor G : C → D, if both F and GF have specified limits there is a unique canonical morphism
τ
F
:
G
lim
F
→
lim
G
F
{\displaystyle \tau _{F}:G\lim F\to \lim GF}
which respects the corresponding limit cones. The functor G preserves the limits of F if and only if this map is an isomorphism. If the categories C and D have all limits of shape J then lim is a functor and the morphisms τF form the components of a natural transformation
$$\tau :G\lim \to \lim G^{J}.$$
The functor G preserves all limits of shape J if and only if τ is a natural isomorphism. In this sense, the functor G can be said to commute with limits (up to a canonical natural isomorphism).
Preservation of limits and colimits is a concept that only applies to covariant functors. For contravariant functors the corresponding notions would be a functor that takes colimits to limits, or one that takes limits to colimits.
=== Lifting of limits ===
A functor G : C → D is said to lift limits for a diagram F : J → C if whenever (L, φ) is a limit of GF there exists a limit (L′, φ′) of F such that G(L′, φ′) = (L, φ). A functor G lifts limits of shape J if it lifts limits for all diagrams of shape J. One can therefore talk about lifting products, equalizers, pullbacks, etc. Finally, one says that G lifts limits if it lifts all limits. There are dual definitions for the lifting of colimits.
A functor G lifts limits uniquely for a diagram F if there is a unique preimage cone (L′, φ′) such that (L′, φ′) is a limit of F and G(L′, φ′) = (L, φ). One can show that G lifts limits uniquely if and only if it lifts limits and is amnestic.
Lifting of limits is clearly related to preservation of limits. If G lifts limits for a diagram F and GF has a limit, then F also has a limit and G preserves the limits of F. It follows that:
If G lifts limits of all shape J and D has all limits of shape J, then C also has all limits of shape J and G preserves these limits.
If G lifts all small limits and D is complete, then C is also complete and G is continuous.
The dual statements for colimits are equally valid.
=== Creation and reflection of limits ===
Let F : J → C be a diagram. A functor G : C → D is said to
create limits for F if whenever (L, φ) is a limit of GF there exists a unique cone (L′, φ′) to F such that G(L′, φ′) = (L, φ), and furthermore, this cone is a limit of F.
reflect limits for F if each cone to F whose image under G is a limit of GF is already a limit of F.
Dually, one can define creation and reflection of colimits.
The following statements are easily seen to be equivalent:
The functor G creates limits.
The functor G lifts limits uniquely and reflects limits.
There are examples of functors which lift limits uniquely but neither create nor reflect them.
=== Examples ===
Every representable functor C → Set preserves limits (but not necessarily colimits). In particular, for any object A of C, this is true of the covariant Hom functor Hom(A,–) : C → Set.
The forgetful functor U : Grp → Set creates (and preserves) all small limits and filtered colimits; however, U does not preserve coproducts. This situation is typical of algebraic forgetful functors.
The free functor F : Set → Grp (which assigns to every set S the free group over S) is left adjoint to forgetful functor U and is, therefore, cocontinuous. This explains why the free product of two free groups G and H is the free group generated by the disjoint union of the generators of G and H.
The inclusion functor Ab → Grp creates limits but does not preserve coproducts (the coproduct of two abelian groups being the direct sum).
The forgetful functor Top → Set lifts limits and colimits uniquely but creates neither.
Let Metc be the category of metric spaces with continuous functions for morphisms. The forgetful functor Metc → Set lifts finite limits but does not lift them uniquely.
== A note on terminology ==
Older terminology referred to limits as "inverse limits" or "projective limits", and to colimits as "direct limits" or "inductive limits". This has been the source of a lot of confusion.
There are several ways to remember the modern terminology. First of all,
cokernels,
coproducts,
coequalizers, and
codomains
are types of colimits, whereas
kernels,
products,
equalizers, and
domains
are types of limits. Second, the prefix "co" implies "first variable of the $\operatorname {Hom}$". Terms like "cohomology" and "cofibration" all have a slightly stronger association with the first variable, i.e., the contravariant variable, of the $\operatorname {Hom}$ bifunctor.
== See also ==
Cartesian closed category – Type of category in category theory
Limits and colimits in an ∞-category
== References ==
== Further reading ==
Adámek, Jiří; Horst Herrlich; George E. Strecker (1990). Abstract and Concrete Categories (PDF). John Wiley & Sons. ISBN 0-471-60922-6.
Mac Lane, Saunders (1998). Categories for the Working Mathematician. Graduate Texts in Mathematics. Vol. 5 (2nd ed.). Springer-Verlag. ISBN 0-387-98403-8. Zbl 0906.18001.
Borceux, Francis (1994). "Limits". Handbook of categorical algebra. Encyclopedia of mathematics and its applications 50-51, 53 [i.e. 52]. Vol. 1. Cambridge University Press. ISBN 0-521-44178-1.
== External links ==
Interactive Web page which generates examples of limits and colimits in the category of finite sets. Written by Jocelyn Paine.
Limit at the nLab | Wikipedia/Limit_(category_theory) |
In mathematics, specifically in order theory and functional analysis, if $C$ is a cone at the origin in a topological vector space $X$ such that $0\in C$ and if ${\mathcal {U}}$ is the neighborhood filter at the origin, then $C$ is called normal if ${\mathcal {U}}=\left[{\mathcal {U}}\right]_{C}$, where $\left[{\mathcal {U}}\right]_{C}:=\left\{[U]_{C}:U\in {\mathcal {U}}\right\}$ and where for any subset $S\subseteq X$, $[S]_{C}:=(S+C)\cap (S-C)$ is the $C$-saturation of $S$.
Normal cones play an important role in the theory of ordered topological vector spaces and topological vector lattices.
== Characterizations ==
If $C$ is a cone in a TVS $X$ then for any subset $S\subseteq X$ let $[S]_{C}:=\left(S+C\right)\cap \left(S-C\right)$ be the $C$-saturated hull of $S$, and for any collection ${\mathcal {S}}$ of subsets of $X$ let $\left[{\mathcal {S}}\right]_{C}:=\left\{\left[S\right]_{C}:S\in {\mathcal {S}}\right\}$.
If $C$ is a cone in a TVS $X$ then $C$ is normal if ${\mathcal {U}}=\left[{\mathcal {U}}\right]_{C}$, where ${\mathcal {U}}$ is the neighborhood filter at the origin.
If $\mathcal{T}$ is a collection of subsets of $X$ and if $\mathcal{F}$ is a subset of $\mathcal{T}$, then $\mathcal{F}$ is a fundamental subfamily of $\mathcal{T}$ if every $T \in \mathcal{T}$ is contained as a subset of some element of $\mathcal{F}$.
If $\mathcal{G}$ is a family of subsets of a TVS $X$ then a cone $C$ in $X$ is called a $\mathcal{G}$-cone if $\{\overline{[G]_C} : G \in \mathcal{G}\}$ is a fundamental subfamily of $\mathcal{G}$, and $C$ is a strict $\mathcal{G}$-cone if $\{[G]_C : G \in \mathcal{G}\}$ is a fundamental subfamily of $\mathcal{G}$.
Let $\mathcal{B}$ denote the family of all bounded subsets of $X$.
If $C$ is a cone in a TVS $X$ (over the real or complex numbers), then the following are equivalent:
$C$ is a normal cone.
For every filter $\mathcal{F}$ in $X$, if $\lim \mathcal{F} = 0$ then $\lim [\mathcal{F}]_C = 0$.
There exists a neighborhood base $\mathcal{G}$ in $X$ such that $B \in \mathcal{G}$ implies $[B \cap C]_C \subseteq B$.
and if $X$ is a vector space over the reals then we may add to this list:
There exists a neighborhood base at the origin consisting of convex, balanced, $C$-saturated sets.
There exists a generating family $\mathcal{P}$ of semi-norms on $X$ such that $p(x) \leq p(x + y)$ for all $x, y \in C$ and $p \in \mathcal{P}$.
and if $X$ is a locally convex space and if the dual cone of $C$ in the continuous dual space $X'$ is denoted by $C'$, then we may add to this list:
For any equicontinuous subset $S \subseteq X'$, there exists an equicontinuous $B \subseteq C'$ such that $S \subseteq B - B$.
The topology of $X$ is the topology of uniform convergence on the equicontinuous subsets of $C'$.
and if $X$ is an infrabarreled locally convex space and if $\mathcal{B}'$ is the family of all strongly bounded subsets of $X'$ then we may add to this list:
The topology of $X$ is the topology of uniform convergence on the strongly bounded subsets of $C'$.
$C'$ is a $\mathcal{B}'$-cone in $X'$; this means that the family $\{\overline{[B']_C} : B' \in \mathcal{B}'\}$ is a fundamental subfamily of $\mathcal{B}'$.
$C'$ is a strict $\mathcal{B}'$-cone in $X'$; this means that the family $\{[B']_C : B' \in \mathcal{B}'\}$ is a fundamental subfamily of $\mathcal{B}'$.
and if $X$ is an ordered locally convex TVS over the reals whose positive cone is $C$, then we may add to this list:
There exists a Hausdorff locally compact topological space $S$ such that $X$ is isomorphic (as an ordered TVS) with a subspace of $R(S)$, where $R(S)$ is the space of all real-valued continuous functions on $S$ under the topology of compact convergence.
If $X$ is a locally convex TVS, $C$ is a cone in $X$ with dual cone $C' \subseteq X'$, and $\mathcal{G}$ is a saturated family of weakly bounded subsets of $X'$, then:
if $C'$ is a $\mathcal{G}$-cone then $C$ is a normal cone for the $\mathcal{G}$-topology on $X$;
if $C$ is a normal cone for a $\mathcal{G}$-topology on $X$ consistent with $\langle X, X' \rangle$ then $C'$ is a strict $\mathcal{G}$-cone in $X'$.
If $X$ is a Banach space, $C$ is a closed cone in $X$, and $\mathcal{B}'$ is the family of all bounded subsets of $X'_b$, then the dual cone $C'$ is normal in $X'_b$ if and only if $C$ is a strict $\mathcal{B}$-cone.
If $X$ is a Banach space and $C$ is a cone in $X$ then the following are equivalent:
$C$ is a $\mathcal{B}$-cone in $X$;
$X = \overline{C} - \overline{C}$;
$\overline{C}$ is a strict $\mathcal{B}$-cone in $X$.
=== Ordered topological vector spaces ===
Suppose $L$ is an ordered normed space; that is, $L$ is a normed vector space and we define $x \geq y$ whenever $x - y$ lies in the cone $L_+$. The following statements are equivalent:
The cone $L_+$ is normal;
The normed space $L$ admits an equivalent monotone norm;
There exists a constant $c > 0$ such that $a \leq x \leq b$ implies $\lVert x \rVert \leq c \max\{\lVert a \rVert, \lVert b \rVert\}$;
The full hull $[U] = (U + L_+) \cap (U - L_+)$ of the closed unit ball $U$ of $L$ is norm bounded;
There is a constant $c > 0$ such that $0 \leq x \leq y$ implies $\lVert x \rVert \leq c \lVert y \rVert$.
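For a concrete instance of the last condition (an illustration of ours, not from the source), take $\mathbb{R}^n$ with the Euclidean norm and the coordinatewise order, whose positive cone is $[0, \infty)^n$; then one may take $c = 1$. Indeed, $0 \leq x \leq y$ means $0 \leq x_i \leq y_i$ for every coordinate, so
\[
\lVert x \rVert = \Bigl(\sum_{i=1}^{n} x_i^2\Bigr)^{1/2} \leq \Bigl(\sum_{i=1}^{n} y_i^2\Bigr)^{1/2} = \lVert y \rVert,
\]
showing that the standard cone of $\mathbb{R}^n$ is normal.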
== Properties ==
If $X$ is a Hausdorff TVS then every normal cone in $X$ is a proper cone.
If $X$ is a normable space and if $C$ is a normal cone in $X$ then $X' = C' - C'$.
Suppose that the positive cone of an ordered locally convex TVS $X$ is weakly normal in $X$ and that $Y$ is an ordered locally convex TVS with positive cone $D$. If $Y = D - D$ then $H - H$ is dense in $L_s(X; Y)$, where $H$ is the canonical positive cone of $L(X; Y)$ and $L_s(X; Y)$ is the space $L(X; Y)$ with the topology of simple convergence.
If $\mathcal{G}$ is a family of bounded subsets of $X$, then there are apparently no simple conditions guaranteeing that $H$ is a $\mathcal{T}$-cone in $L_{\mathcal{G}}(X; Y)$, even for the most common types of families $\mathcal{T}$ of bounded subsets of $L_{\mathcal{G}}(X; Y)$ (except for very special cases).
== Sufficient conditions ==
If the topology on $X$ is locally convex then the closure of a normal cone is a normal cone.
Suppose that $\{X_\alpha : \alpha \in A\}$ is a family of locally convex TVSs and that $C_\alpha$ is a cone in $X_\alpha$. If $X := \bigoplus_\alpha X_\alpha$ is the locally convex direct sum then the cone $C := \bigoplus_\alpha C_\alpha$ is a normal cone in $X$ if and only if each $C_\alpha$ is normal in $X_\alpha$.
If $C$ is a cone in a locally convex TVS $X$ and if $C'$ is the dual cone of $C$, then $X' = C' - C'$ if and only if $C$ is weakly normal.
Every normal cone in a locally convex TVS is weakly normal.
In a normed space, a cone is normal if and only if it is weakly normal.
If $X$ and $Y$ are ordered locally convex TVSs and if $\mathcal{G}$ is a family of bounded subsets of $X$, then if the positive cone of $X$ is a $\mathcal{G}$-cone in $X$ and the positive cone of $Y$ is a normal cone in $Y$, then the positive cone of $L_{\mathcal{G}}(X; Y)$ is a normal cone for the $\mathcal{G}$-topology on $L(X; Y)$.
== See also ==
Cone-saturated
Topological vector lattice
Vector lattice – Partially ordered vector space, ordered as a lattice
== References ==
== Bibliography ==
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. | Wikipedia/Normal_cone_(functional_analysis) |
In mathematics, the upper topology on a partially ordered set $X$ is the coarsest topology in which the closure of a singleton $\{a\}$ is the order section $a] = \{x \leq a\}$ for each $a \in X$.
If $\leq$ is a partial order, the upper topology is the least order-consistent topology in which all open sets are up-sets. However, not all up-sets must necessarily be open sets. The lower topology induced by the preorder is defined similarly in terms of the down-sets. The preorder inducing the upper topology is its specialization preorder, but the specialization preorder of the lower topology is opposite to the inducing preorder.
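On a finite poset the upper topology can be enumerated directly. The following Python sketch (an illustration of ours; all names are invented) generates the closed sets from the principal down-sets and returns the open sets as their complements:

from itertools import combinations

def upper_topology_opens(elements, leq):
    # Closed sets are generated by the principal down-sets a] = {x : x <= a};
    # on a finite set it suffices to close under pairwise union/intersection.
    X = frozenset(elements)
    down = {frozenset(x for x in X if leq(x, a)) for a in X}
    closed = {frozenset(), X} | down
    changed = True
    while changed:
        changed = False
        for A, B in combinations(list(closed), 2):
            for C in (A | B, A & B):
                if C not in closed:
                    closed.add(C)
                    changed = True
    return {X - C for C in closed}

# Divisibility order on {1, 2, 3, 6}; every resulting open set is an up-set.
opens = upper_topology_opens({1, 2, 3, 6}, lambda x, y: y % x == 0)
print(sorted(sorted(s) for s in opens))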
The real upper topology is most naturally defined on the upper-extended real line $(-\infty, +\infty] = \mathbb{R} \cup \{+\infty\}$ by the system $\{(a, +\infty] : a \in \mathbb{R} \cup \{\pm\infty\}\}$ of open sets.
of open sets. Similarly, the real lower topology
{
[
−
∞
,
a
)
:
a
∈
R
∪
{
±
∞
}
}
{\displaystyle \{[-\infty ,a):a\in \mathbb {R} \cup \{\pm \infty \}\}}
is naturally defined on the lower real line
[
−
∞
,
+
∞
)
=
R
∪
{
−
∞
}
.
{\displaystyle [-\infty ,+\infty )=\mathbb {R} \cup \{-\infty \}.}
A real function on a topological space is upper semi-continuous if and only if it is lower-continuous, i.e. is continuous with respect to the lower topology on the lower-extended line $[-\infty, +\infty)$. Similarly, a function into the upper real line is lower semi-continuous if and only if it is upper-continuous, i.e. is continuous with respect to the upper topology on $(-\infty, +\infty]$.
== See also ==
List of topologies – List of concrete topologies and topological spaces
== References ==
Gerhard Gierz; K.H. Hofmann; K. Keimel; J. D. Lawson; M. Mislove; D. S. Scott (2003). Continuous Lattices and Domains. Cambridge University Press. p. 510. ISBN 0-521-80338-1.
Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153. p.101
Knapp, Anthony W. (2005). Basic Real Analysis. Birkhhauser. p. 481. ISBN 0-8176-3250-6. | Wikipedia/Upper_topology |
In graph theory and order theory, a comparability graph is an undirected graph that connects pairs of elements that are comparable to each other in a partial order. Comparability graphs have also been called transitively orientable graphs, partially orderable graphs, containment graphs, and divisor graphs.
An incomparability graph is an undirected graph that connects pairs of elements that are not comparable to each other in a partial order.
== Definitions and characterization ==
For any strict partially ordered set (S,<), the comparability graph of (S, <) is the graph (S, ⊥) of which the vertices are the elements of S and the edges are those pairs {u, v} of elements such that u < v. That is, for a partially ordered set, take the directed acyclic graph, apply transitive closure, and remove orientation.
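To make the construction concrete, here is a small Python sketch of ours (not from the source; names are invented) that produces the comparability graph of a finite strict order from any acyclic arc relation generating it, via transitive closure:

from itertools import product

def comparability_graph(elements, arcs):
    # `arcs` is any acyclic relation (e.g. the covering relation of a poset);
    # its transitive closure is the strict order, and forgetting orientation
    # gives the undirected edge set.
    reach = set(arcs)
    # Floyd-Warshall-style transitive closure (k is the outermost variable).
    for k, u, v in product(elements, repeat=3):
        if (u, k) in reach and (k, v) in reach:
            reach.add((u, v))
    # Forget orientation: {u, v} is an edge iff u < v or v < u.
    return {frozenset((u, v)) for (u, v) in reach if u != v}

# The divisibility order on {1, 2, 3, 4, 6}, given by its covering arcs:
edges = comparability_graph(
    [1, 2, 3, 4, 6],
    [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6)],
)
print(sorted(tuple(sorted(e)) for e in edges))  # a divisor graph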
Equivalently, a comparability graph is a graph that has a transitive orientation, an assignment of directions to the edges of the graph (i.e. an orientation of the graph) such that the adjacency relation of the resulting directed graph is transitive: whenever there exist directed edges (x,y) and (y,z), there must exist an edge (x,z).
One can represent any finite partial order as a family of sets, such that x < y in the partial order whenever the set corresponding to x is a subset of the set corresponding to y. In this way, comparability graphs can be shown to be equivalent to containment graphs of set families; that is, a graph with a vertex for each set in the family and an edge between two sets whenever one is a subset of the other.
Alternatively, one can represent the partial order by a family of integers, such that x < y whenever the integer corresponding to x is a divisor of the integer corresponding to y. Because of this construction, comparability graphs have also been called divisor graphs.
Comparability graphs can be characterized as the graphs such that, for every generalized cycle (see below) of odd length, one can find an edge (x,y) connecting two vertices that are at distance two in the cycle. Such an edge is called a triangular chord. In this context, a generalized cycle is defined to be a closed walk that uses each edge of the graph at most once in each direction. Comparability graphs can also be characterized by a list of forbidden induced subgraphs.
== Relation to other graph families ==
Every complete graph is a comparability graph, the comparability graph of a total order. All acyclic orientations of a complete graph are transitive. Every bipartite graph is also a comparability graph. Orienting the edges of a bipartite graph from one side of the bipartition to the other results in a transitive orientation, corresponding to a partial order of height two. As Seymour (2006) observes, every comparability graph that is neither complete nor bipartite has a skew partition.
The complement of any interval graph is a comparability graph. The comparability relation is called an interval order. Interval graphs are exactly the graphs that are chordal and that have comparability graph complements.
A permutation graph is a containment graph on a set of intervals. Therefore, permutation graphs are another subclass of comparability graphs.
The trivially perfect graphs are the comparability graphs of rooted trees.
Cographs can be characterized as the comparability graphs of series-parallel partial orders; thus, cographs are also comparability graphs.
Threshold graphs are another special kind of comparability graph.
Every comparability graph is perfect. The perfection of comparability graphs is Mirsky's theorem, and the perfection of their complements is Dilworth's theorem; these facts, together with the perfect graph theorem can be used to prove Dilworth's theorem from Mirsky's theorem or vice versa. More specifically, comparability graphs are perfectly orderable graphs, a subclass of perfect graphs: a greedy coloring algorithm for a topological ordering of a transitive orientation of the graph will optimally color them.
The complement of every comparability graph is a string graph.
== Algorithms ==
A transitive orientation of a graph, if it exists, can be found in linear time. However, the algorithm for doing so will assign orientations to the edges of any graph, so to complete the task of testing whether a graph is a comparability graph, one must test whether the resulting orientation is transitive, a problem provably equivalent in complexity to matrix multiplication.
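The verification step can be phrased through Boolean matrix multiplication, which is where the equivalence in complexity comes from. A brute-force sketch of ours (numpy assumed available), testing whether every two-step path is shadowed by a direct edge:

import numpy as np

def is_transitive(adj):
    # adj: boolean adjacency matrix of a directed graph. The orientation is
    # transitive iff (x,y),(y,z) in E implies (x,z) in E, i.e. adj @ adj
    # never reaches a pair that adj misses.
    adj = np.asarray(adj, dtype=bool)
    two_step = (adj.astype(int) @ adj.astype(int)) > 0
    return bool(np.all(~two_step | adj))

# A transitive orientation of the 4-cycle a-b-c-d (a bipartite graph):
# orient every edge from {a, c} to {b, d}.
a, b, c, d = range(4)
m = np.zeros((4, 4), dtype=bool)
for u, v in [(a, b), (a, d), (c, b), (c, d)]:
    m[u, v] = True
print(is_transitive(m))  # True: there are no two-step paths at all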
Because comparability graphs are perfect, many problems that are hard on more general classes of graphs, including graph coloring and the independent set problem, can be solved in polynomial time for comparability graphs.
== See also ==
Bound graph, a different graph defined from a partial order
== Notes ==
== References == | Wikipedia/Comparability_graph |
Zorn's lemma, also known as the Kuratowski–Zorn lemma, is a proposition of set theory. It states that a partially ordered set containing upper bounds for every chain (that is, every totally ordered subset) necessarily contains at least one maximal element.
The lemma was proved (assuming the axiom of choice) by Kazimierz Kuratowski in 1922 and independently by Max Zorn in 1935. It occurs in the proofs of several theorems of crucial importance, for instance the Hahn–Banach theorem in functional analysis, the theorem that every vector space has a basis, Tychonoff's theorem in topology stating that every product of compact spaces is compact, and the theorems in abstract algebra that in a ring with identity every proper ideal is contained in a maximal ideal and that every field has an algebraic closure.
Zorn's lemma is equivalent to the well-ordering theorem and also to the axiom of choice, in the sense that within ZF (Zermelo–Fraenkel set theory without the axiom of choice) any one of the three is sufficient to prove the other two. An earlier formulation of Zorn's lemma is the Hausdorff maximal principle which states that every totally ordered subset of a given partially ordered set is contained in a maximal totally ordered subset of that partially ordered set.
== Motivation ==
To prove the existence of a mathematical object that can be viewed as a maximal element in some partially ordered set in some way, one can try proving the existence of such an object by assuming there is no maximal element and using transfinite induction and the assumptions of the situation to get a contradiction. Zorn's lemma tidies up the conditions a situation needs to satisfy in order for such an argument to work and enables mathematicians to not have to repeat the transfinite induction argument by hand each time, but just check the conditions of Zorn's lemma.
If you are building a mathematical object in stages and find that (i) you have not finished even after infinitely many stages, and (ii) there seems to be nothing to stop you continuing to build, then Zorn’s lemma may well be able to help you.
== Statement of the lemma ==
Preliminary notions:
A set P equipped with a binary relation ≤ that is reflexive (x ≤ x for every x), antisymmetric (if both x ≤ y and y ≤ x hold, then x = y), and transitive (if x ≤ y and y ≤ z then x ≤ z) is said to be (partially) ordered by ≤. Given two elements x and y of P with x ≤ y, y is said to be greater than or equal to x. The word "partial" is meant to indicate that not every pair of elements of a partially ordered set is required to be comparable under the order relation, that is, in a partially ordered set P with order relation ≤ there may be elements x and y with neither x ≤ y nor y ≤ x. An ordered set in which every pair of elements is comparable is called totally ordered.
Every subset S of a partially ordered set P can itself be seen as partially ordered by restricting the order relation inherited from P to S. A subset S of a partially ordered set P is called a chain (in P) if it is totally ordered in the inherited order.
An element m of a partially ordered set P with order relation ≤ is maximal (with respect to ≤) if there is no other element of P greater than m, that is, there is no s in P with s ≠ m and m ≤ s. Depending on the order relation, a partially ordered set may have any number of maximal elements. However, a totally ordered set can have at most one maximal element.
Given a subset S of a partially ordered set P, an element u of P is an upper bound of S if it is greater than or equal to every element of S. Here, S is not required to be a chain, and u is required to be comparable to every element of S but need not itself be an element of S.
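These notions are directly computable for finite orders. As a small sketch of ours (names invented for illustration), the following Python finds the maximal elements of the divisibility order on {1, ..., 10}, a poset with several maximal elements but no greatest element:

def is_maximal(m, elements, leq):
    # m is maximal iff there is no s != m with m <= s.
    return not any(s != m and leq(m, s) for s in elements)

divides = lambda x, y: y % x == 0   # the partial order: x <= y iff x divides y
elems = list(range(1, 11))
print([m for m in elems if is_maximal(m, elems, divides)])
# -> [6, 7, 8, 9, 10]: all maximal, though none is greater than all the others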
Zorn's lemma can then be stated as:
Lemma — Suppose that the partially ordered set $P$ (1) is nonempty and (2) has the property that every chain in $P$ has an upper bound in $P$. Then $P$ contains at least one maximal element.
In fact, property (1) is redundant, since property (2) says, in particular, that the empty chain has an upper bound in $P$, implying that $P$ is nonempty. However, in practice, one often checks (1) and then verifies (2) only for nonempty chains, since the case of the empty chain is taken care of by (1).
In the terminology of Bourbaki, a partially ordered set is called inductive if each chain has an upper bound in the set (in particular, the set is then nonempty). Then the lemma can be stated as:
Lemma — Every inductive partially ordered set has a maximal element.
For some applications, the following variant may be useful:
Lemma — If $P$ is an inductive partially ordered set and $a \in P$, then $P$ has a maximal element that is greater than or equal to $a$.
Indeed, let $Q = \{x \in P \mid x \geq a\}$ with the partial ordering inherited from $P$. Then, for a chain in $Q$, an upper bound of it in $P$ lies in $Q$, and so $Q$ satisfies the hypothesis of Zorn's lemma; a maximal element of $Q$ is then a maximal element of $P$ as well.
== Example applications ==
=== Every vector space has a basis ===
Zorn's lemma can be used to show that every vector space V has a basis.
If V = {0}, then the empty set is a basis for V. Now, suppose that V ≠ {0}. Let P be the set consisting of all linearly independent subsets of V. Since V is not the zero vector space, there exists a nonzero element v of V, so P contains the linearly independent subset {v}. Furthermore, P is partially ordered by set inclusion (see inclusion order). Finding a maximal linearly independent subset of V is the same as finding a maximal element in P.
To apply Zorn's lemma, take a chain T in P (that is, T is a subset of P that is totally ordered). If T is the empty set, then {v} is an upper bound for T in P. Suppose then that T is non-empty. We need to show that T has an upper bound, that is, there exists a linearly independent subset B of V containing all the members of T.
Take B to be the union of all the sets in T. We wish to show that B is an upper bound for T in P. To do this, it suffices to show that B is a linearly independent subset of V.
Suppose otherwise, that B is not linearly independent. Then there exist vectors v1, v2, ..., vk ∈ B and scalars a1, a2, ..., ak, not all zero, such that $a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_k \mathbf{v}_k = \mathbf{0}$.
Since B is the union of all the sets in T, there are some sets S1, S2, ..., Sk ∈ T such that vi ∈ Si for every i = 1, 2, ..., k. As T is totally ordered, one of the sets S1, S2, ..., Sk must contain the others, so there is some set Si that contains all of v1, v2, ..., vk. This tells us there is a linearly dependent set of vectors in Si, contradicting that Si is linearly independent (because it is a member of P).
The hypothesis of Zorn's lemma has been checked, and thus there is a maximal element in P, in other words a maximal linearly independent subset B of V.
Finally, we show that B is indeed a basis of V. It suffices to show that B is a spanning set of V. Suppose for the sake of contradiction that B is not spanning. Then there exists some v ∈ V not contained in the span of B. This says that B ∪ {v} is a linearly independent subset of V that is larger than B, contradicting the maximality of B. Therefore, B is a spanning set of V, and thus a basis of V.
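In the finite-dimensional case no maximality principle is needed: a greedy pass over a finite set of vectors already produces a maximal linearly independent subset, mirroring the structure of the argument above. A minimal Python sketch of ours, using exact rational arithmetic:

from fractions import Fraction

def maximal_independent(vectors):
    # Greedily extract a maximal linearly independent subset. We keep a
    # row-echelon copy of the span found so far; a new vector is kept iff
    # it does not reduce to zero against that basis.
    basis, echelon = [], []
    for v in vectors:
        w = [Fraction(x) for x in v]
        for r in echelon:                 # eliminate against earlier pivots
            p = next(i for i, x in enumerate(r) if x != 0)
            if w[p] != 0:
                f = w[p] / r[p]
                w = [wi - f * ri for wi, ri in zip(w, r)]
        if any(x != 0 for x in w):        # independent of the current span
            basis.append(v)
            echelon.append(w)
    return basis

vs = [(1, 0, 1), (2, 0, 2), (0, 1, 1), (1, 1, 2), (0, 0, 1)]
print(maximal_independent(vs))  # [(1, 0, 1), (0, 1, 1), (0, 0, 1)]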
=== Every nontrivial ring with unity contains a maximal ideal ===
Zorn's lemma can be used to show that every nontrivial ring R with unity contains a maximal ideal.
Let P be the set consisting of all proper ideals in R (that is, all ideals in R except R itself). Since R is non-trivial, the set P contains the trivial ideal {0}. Furthermore, P is partially ordered by set inclusion. Finding a maximal ideal in R is the same as finding a maximal element in P.
To apply Zorn's lemma, take a chain T in P. If T is empty, then the trivial ideal {0} is an upper bound for T in P. Assume then that T is non-empty. It is necessary to show that T has an upper bound, that is, there exists an ideal I ⊆ R containing all the members of T but still smaller than R (otherwise it would not be a proper ideal, so it is not in P).
Take I to be the union of all the ideals in T. We wish to show that I is an upper bound for T in P. We will first show that I is an ideal of R. For I to be an ideal, it must satisfy three conditions:
I is a nonempty subset of R,
For every x, y ∈ I, the sum x + y is in I,
For every r ∈ R and every x ∈ I, the product rx is in I.
#1 - I is a nonempty subset of R.
Because T contains at least one element, and that element contains at least 0, the union I contains at least 0 and is not empty. Every element of T is a subset of R, so the union I only consists of elements in R.
#2 - For every x, y ∈ I, the sum x + y is in I.
Suppose x and y are elements of I. Then there exist two ideals J, K ∈ T such that x is an element of J and y is an element of K. Since T is totally ordered, we know that J ⊆ K or K ⊆ J. Without loss of generality, assume the first case. Both x and y are members of the ideal K, therefore their sum x + y is a member of K, which shows that x + y is a member of I.
#3 - For every r ∈ R and every x ∈ I, the product rx is in I.
Suppose x is an element of I. Then there exists an ideal J ∈ T such that x is in J. If r ∈ R, then rx is an element of J and hence an element of I. Thus, I is an ideal in R.
Now, we show that I is a proper ideal. An ideal is equal to R if and only if it contains 1. (It is clear that if it is R then it contains 1; on the other hand, if it contains 1 and r is an arbitrary element of R, then r1 = r is an element of the ideal, and so the ideal is equal to R.) So, if I were equal to R, then it would contain 1, and that means one of the members of T would contain 1 and would thus be equal to R – but R is explicitly excluded from P.
The hypothesis of Zorn's lemma has been checked, and thus there is a maximal element in P, in other words a maximal ideal in R.
== Proof sketch ==
A sketch of the proof of Zorn's lemma follows, assuming the axiom of choice. Suppose the lemma is false. Then there exists a partially ordered set, or poset, P such that every totally ordered subset has an upper bound, and for every element in P there is another element bigger than it. For every totally ordered subset T we may then define a bigger element b(T), because T has an upper bound, and that upper bound has a bigger element. To actually define the function b, we need to employ the axiom of choice (explicitly: let $B(T) = \{b \in P : \forall t \in T,\ b \geq t\}$, the set of upper bounds of $T$; the axiom of choice furnishes a function $b$ with $b(T) \in B(T)$).
Using the function b, we are going to define elements a0 < a1 < a2 < a3 < ... < aω < aω+1 <…, in P. This uncountable sequence is really long: the indices are not just the natural numbers, but all ordinals. In fact, the sequence is too long for the set P; there are too many ordinals (a proper class), more than there are elements in any set (in other words, given any set of ordinals, there exists a larger ordinal), and the set P will be exhausted before long and then we will run into the desired contradiction.
The ai are defined by transfinite recursion: we pick a0 in P arbitrarily (this is possible, since P contains an upper bound for the empty set and is thus not empty) and for any other ordinal w we set aw = b({av : v < w}). Because the av are totally ordered, this is a well-founded definition.
The above proof can be formulated without explicitly referring to ordinals by considering the initial segments {av : v < w} as subsets of P. Such sets can be easily characterized as well-ordered chains S ⊆ P where each x ∈ S satisfies x = b({y ∈ S : y < x}). Contradiction is reached by noting that we can always find a "next" initial segment either by taking the union of all such S (corresponding to the limit ordinal case) or by appending b(S) to the "last" S (corresponding to the successor ordinal case).
This proof shows that actually a slightly stronger version of Zorn's lemma is true:
Alternatively, one can use the same proof for the Hausdorff maximal principle. This is the proof given for example in Halmos' Naive Set Theory or in § Proof below.
Finally, the Bourbaki–Witt theorem can also be used to give a proof.
== Proof ==
The basic idea of the proof is to reduce the proof to proving the following weak form of Zorn's lemma:
(Note that, strictly speaking, (1) is redundant since (2) implies the empty set is in $F$.) Note that the above is a weak form of Zorn's lemma, since Zorn's lemma says in particular that any set of subsets satisfying the above (1) and (2) has a maximal element ((3) is not needed). The point is that, conversely, Zorn's lemma follows from this weak form. Indeed, let $F$ be the set of all chains in $P$. Then it satisfies all of the above properties (it is nonempty since the empty subset is a chain). Thus, by the above weak form, we find a maximal element $C$ in $F$; that is, a maximal chain in $P$. By the hypothesis of Zorn's lemma, $C$ has an upper bound $x$ in $P$. This $x$ is a maximal element: if $y \geq x$, then $\widetilde{C} = C \cup \{y\}$ is a chain containing $C$, so $\widetilde{C} = C$ by maximality, which gives $y \in C$ and hence $y \leq x$; thus $y = x$.
The proof of the weak form is given in Hausdorff maximal principle#Proof. Indeed, the existence of a maximal chain is exactly the assertion of the Hausdorff maximal principle.
The same proof also shows the following equivalent variant of Zorn's lemma:
Lemma — A partially ordered set in which every chain has a least upper bound contains a maximal element.
Indeed, trivially, Zorn's lemma implies the above lemma. Conversely, the above lemma implies the aforementioned weak form of Zorn's lemma, since a union gives a least upper bound.
== Zorn's lemma implies the axiom of choice ==
A proof that Zorn's lemma implies the axiom of choice illustrates a typical application of Zorn's lemma. (The structure of the proof is exactly the same as the one for the Hahn–Banach theorem.)
Given a set $X$ of nonempty sets and its union $U := \bigcup X$ (which exists by the axiom of union), we want to show there is a function $f : X \to U$ such that $f(S) \in S$ for each $S \in X$. To that end, consider the set $P = \{f : X' \to U \mid X' \subset X,\ f(S) \in S \text{ for each } S \in X'\}$.
It is partially ordered by extension; i.e., $f \leq g$ if and only if $f$ is the restriction of $g$. If $f_i : X_i \to U$ is a chain in $P$, then we can define the function $f$ on the union $X' = \bigcup_i X_i$ by setting $f(x) = f_i(x)$ when $x \in X_i$. This is well-defined since if $i < j$, then $f_i$ is the restriction of $f_j$. The function $f$ is also an element of $P$ and is a common extension of all the $f_i$'s. Thus, we have shown that each chain in $P$ has an upper bound in $P$. Hence, by Zorn's lemma, there is a maximal element $f$ in $P$, defined on some $X' \subset X$. We want to show $X' = X$. Suppose otherwise; then there is a set $S \in X - X'$. As $S$ is nonempty, it contains an element $s$. We can then extend $f$ to a function $g$ by setting $g|_{X'} = f$ and $g(S) = s$. (Note this step does not need the axiom of choice.) The function $g$ is in $P$ and $f < g$, contradicting the maximality of $f$. $\square$
Essentially the same proof also shows that Zorn's lemma implies the well-ordering theorem: take $P$ to be the set of all well-ordered subsets of a given set $X$, and then show that a maximal element of $P$ is $X$.
== History ==
The Hausdorff maximal principle is an early statement similar to Zorn's lemma.
Kazimierz Kuratowski proved in 1922 a version of the lemma close to its modern formulation (it applies to sets ordered by inclusion and closed under unions of well-ordered chains). Essentially the same formulation (weakened by using arbitrary chains, not just well-ordered) was independently given by Max Zorn in 1935, who proposed it as a new axiom of set theory replacing the well-ordering theorem, exhibited some of its applications in algebra, and promised to show its equivalence with the axiom of choice in another paper, which never appeared.
The name "Zorn's lemma" appears to be due to John Tukey, who used it in his book Convergence and Uniformity in Topology in 1940. Bourbaki's Théorie des Ensembles of 1939 refers to a similar maximal principle as "le théorème de Zorn". The name "Kuratowski–Zorn lemma" prevails in Poland and Russia.
== Equivalent forms of Zorn's lemma ==
Zorn's lemma is equivalent (in ZF) to three main results:
Hausdorff maximal principle
Axiom of choice
Well-ordering theorem.
A well-known joke alluding to this equivalency (which may defy human intuition) is attributed to Jerry Bona:
"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?"
Zorn's lemma is also equivalent to the strong completeness theorem of first-order logic.
Moreover, Zorn's lemma (or one of its equivalent forms) implies some major results in other mathematical areas. For example,
Banach's extension theorem which is used to prove one of the most fundamental results in functional analysis, the Hahn–Banach theorem
Every vector space has a basis, a result from linear algebra (to which it is equivalent). In particular, the real numbers, as a vector space over the rational numbers, possess a Hamel basis.
Every commutative unital ring has a maximal ideal, a result from ring theory known as Krull's theorem, to which Zorn's lemma is equivalent
Tychonoff's theorem in topology (to which it is also equivalent)
Every proper filter is contained in an ultrafilter, a result that yields the completeness theorem of first-order logic
In this sense, Zorn's lemma is a powerful tool, applicable to many areas of mathematics.
=== Analogs under weakenings of the axiom of choice ===
A weakened form of Zorn's lemma can be proven from ZF + DC (Zermelo–Fraenkel set theory with the axiom of choice replaced by the axiom of dependent choice). If a partially ordered set has no maximal element, then every element has a strictly greater one, so the strict order relation is entire; the axiom of dependent choice then constructs a countable strictly increasing chain. As a result, any partially ordered set with exclusively finite chains must have a maximal element.
More generally, strengthening the axiom of dependent choice to higher ordinals allows us to generalize the statement in the previous paragraph to higher cardinalities. In the limit where we allow arbitrarily large ordinals, we recover the proof of the full Zorn's lemma using the axiom of choice in the preceding section.
== In popular culture ==
The 1970 film Zorns Lemma is named after the lemma.
The lemma was referenced on The Simpsons in the episode "Bart's New Friend".
== See also ==
Antichain – Subset of incomparable elements
Chain-complete partial order – a partially ordered set in which every chain has a least upper bound
Szpilrajn extension theorem – Mathematical result on order relations
Tarski finiteness – Mathematical set containing a finite number of elements
Teichmüller–Tukey lemma (sometimes named Tukey's lemma)
Bourbaki–Witt theorem – a choiceless fixed-point theorem that, combined with choice, can be used to prove Zorn's lemma
== Notes ==
== References ==
Bourbaki, N (1970). Théorie des Ensembles. Hermann.
Campbell, Paul J. (February 1978). "The Origin of 'Zorn's Lemma'". Historia Mathematica. 5 (1): 77–89. doi:10.1016/0315-0860(78)90136-2.
Halmos, Paul (1960). Naive Set Theory. Princeton, New Jersey: D. Van Nostrand Company.
Ciesielski, Krzysztof (1997). Set Theory for the Working Mathematician. Cambridge University Press. ISBN 978-0-521-59465-3.
Jech, Thomas (2008) [1973]. The Axiom of Choice. Mineola, New York: Dover Publications. ISBN 978-0-486-46624-8.
Moore, Gregory H. (2013) [1982]. Zermelo's axiom of choice: Its origins, development & influence. Dover Publications. ISBN 978-0-486-48841-7.
== Further reading ==
The Zorn Identity at the n-category cafe.
== External links ==
"Zorn lemma", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Zorn's Lemma at ProvenMath contains a formal proof down to the finest detail of the equivalence of the axiom of choice and Zorn's Lemma.
Zorn's Lemma at Metamath is another formal proof. (Unicode version for recent browsers.) | Wikipedia/Zorn's_Lemma |
In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.
== In calculus and analysis ==
In calculus, a function $f$ defined on a subset of the real numbers with real values is called monotonic if it is either entirely non-decreasing, or entirely non-increasing. That is, as per Fig. 1, a function that increases monotonically does not exclusively have to increase, it simply must not decrease.
A function is termed monotonically increasing (also increasing or non-decreasing) if for all $x$ and $y$ such that $x \leq y$ one has $f(x) \leq f(y)$, so $f$ preserves the order (see Figure 1). Likewise, a function is called monotonically decreasing (also decreasing or non-increasing) if, whenever $x \leq y$, then $f(x) \geq f(y)$, so it reverses the order (see Figure 2).
If the order $\leq$ in the definition of monotonicity is replaced by the strict order $<$, one obtains a stronger requirement. A function with this property is called strictly increasing (also increasing). Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing (also decreasing). A function with either property is called strictly monotone. Functions that are strictly monotone are one-to-one (because for $x$ not equal to $y$, either $x < y$ or $x > y$ and so, by monotonicity, either $f(x) < f(y)$ or $f(x) > f(y)$, thus $f(x) \neq f(y)$).
To avoid ambiguity, the terms weakly monotone, weakly increasing and weakly decreasing are often used to refer to non-strict monotonicity.
The terms "non-decreasing" and "non-increasing" should not be confused with the (much weaker) negative qualifications "not decreasing" and "not increasing". For example, the non-monotonic function shown in figure 3 first falls, then rises, then falls again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.
A function $f$ is said to be absolutely monotonic over an interval $(a, b)$ if the derivatives of all orders of $f$ are nonnegative or all nonpositive at all points on the interval.
=== Inverse of function ===
All strictly monotonic functions are invertible because they are guaranteed to have a one-to-one mapping from their range to their domain.
However, functions that are only weakly monotone are not invertible because they are constant on some interval (and therefore are not one-to-one).
A function may be strictly monotonic over a limited range of values and thus have an inverse on that range even though it is not strictly monotonic everywhere. For example, if $y = g(x)$ is strictly increasing on the range $[a, b]$, then it has an inverse $x = h(y)$ on the range $[g(a), g(b)]$.
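Numerically, such an inverse can be evaluated by bisection, using only the strict monotonicity of $g$. A small Python sketch of ours (names invented):

def inverse(g, a, b, y, tol=1e-12):
    # Solve g(x) = y for x in [a, b], assuming g is strictly increasing.
    # Bisection: the sign of g(mid) - y tells which half contains x.
    if not (g(a) <= y <= g(b)):
        raise ValueError("y outside [g(a), g(b)]")
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# g(x) = x**3 is strictly increasing on [0, 2]; its inverse at y = 8 is 2.
print(inverse(lambda x: x**3, 0.0, 2.0, 8.0))  # ~2.0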
The term monotonic is sometimes used in place of strictly monotonic, so a source may state that all monotonic functions are invertible when they really mean that all strictly monotonic functions are invertible.
=== Monotonic transformation ===
The term monotonic transformation (or monotone transformation) may also cause confusion because it refers to a transformation by a strictly increasing function. This is the case in economics with respect to the ordinal properties of a utility function being preserved across a monotonic transform (see also monotone preferences). In this context, the term "monotonic transformation" refers to a positive monotonic transformation and is intended to distinguish it from a "negative monotonic transformation," which reverses the order of the numbers.
=== Some basic applications and results ===
The following properties are true for a monotonic function $f \colon \mathbb{R} \to \mathbb{R}$:
$f$ has limits from the right and from the left at every point of its domain;
$f$ has a limit at positive or negative infinity ($\pm\infty$) of either a real number, $\infty$, or $-\infty$;
$f$ can only have jump discontinuities;
$f$ can only have countably many discontinuities in its domain. The discontinuities, however, do not necessarily consist of isolated points and may even be dense in an interval $(a, b)$. For example, for any summable sequence $(a_i)$ of positive numbers and any enumeration $(q_i)$ of the rational numbers, the monotonically increasing function $f(x) = \sum_{q_i \leq x} a_i$ is continuous exactly at every irrational number (cf. picture). It is the cumulative distribution function of the discrete measure on the rational numbers, where $a_i$ is the weight of $q_i$.
If $f$ is differentiable at $x^* \in \mathbb{R}$ and $f'(x^*) > 0$, then there is a non-degenerate interval $I$ such that $x^* \in I$ and $f$ is increasing on $I$. As a partial converse, if $f$ is differentiable and increasing on an interval $I$, then its derivative is non-negative at every point of $I$ (it may still vanish at isolated points, as $f(x) = x^3$ shows at $x = 0$).
These properties are the reason why monotonic functions are useful in technical work in analysis. Other important properties of these functions include:
if $f$ is a monotonic function defined on an interval $I$, then $f$ is differentiable almost everywhere on $I$; i.e. the set of numbers $x$ in $I$ such that $f$ is not differentiable in $x$ has Lebesgue measure zero. In addition, this result cannot be improved to countable: see Cantor function.
if this set is countable, then $f$ is absolutely continuous
if $f$ is a monotonic function defined on an interval $[a, b]$, then $f$ is Riemann integrable.
An important application of monotonic functions is in probability theory. If $X$ is a random variable, its cumulative distribution function $F_X(x) = \operatorname{Prob}(X \leq x)$ is a monotonically increasing function.
A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreasing.
When $f$ is a strictly monotonic function, then $f$ is injective on its domain, and if $T$ is the range of $f$, then there is an inverse function on $T$ for $f$. In contrast, each constant function is monotonic, but not injective, and hence cannot have an inverse.
The graphic shows six monotonic functions. Their simplest forms are shown in the plot area and the expressions used to create them are shown on the y-axis.
== In topology ==
A map $f : X \to Y$ is said to be monotone if each of its fibers is connected; that is, for each element $y \in Y$, the (possibly empty) set $f^{-1}(y)$ is a connected subspace of $X$.
== In functional analysis ==
In functional analysis on a topological vector space $X$, a (possibly non-linear) operator $T : X \to X^*$ is said to be a monotone operator if $(Tu - Tv, u - v) \geq 0$ for all $u, v \in X$.
Kachurovskii's theorem shows that convex functions on Banach spaces have monotonic operators as their derivatives.
A subset $G$ of $X \times X^*$ is said to be a monotone set if for every pair $[u_1, w_1]$ and $[u_2, w_2]$ in $G$, $(w_1 - w_2, u_1 - u_2) \geq 0$. $G$ is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion. The graph of a monotone operator $G(T)$ is a monotone set. A monotone operator is said to be maximal monotone if its graph is a maximal monotone set.
== In order theory ==
Order theory deals with arbitrary partially ordered sets and preordered sets as a generalization of real numbers. The above definition of monotonicity is relevant in these cases as well. However, the terms "increasing" and "decreasing" are avoided, since their conventional pictorial representation does not apply to orders that are not total. Furthermore, the strict relations $<$ and $>$ are of little use in many non-total orders and hence no additional terminology is introduced for them.
Letting $\leq$ denote the partial order relation of any partially ordered set, a monotone function, also called isotone, or order-preserving, satisfies the property $x \leq y \implies f(x) \leq f(y)$ for all $x$ and $y$ in its domain. The composite of two monotone mappings is also monotone.
The dual notion is often called antitone, anti-monotone, or order-reversing. Hence, an antitone function $f$ satisfies the property $x \leq y \implies f(y) \leq f(x)$ for all $x$ and $y$ in its domain.
A constant function is both monotone and antitone; conversely, if f is both monotone and antitone, and if the domain of f is a lattice, then f must be constant.
Monotone functions are central in order theory. They appear in most articles on the subject and examples from special applications are found in these places. Some notable special monotone functions are order embeddings (functions for which $x \leq y$ if and only if $f(x) \leq f(y)$) and order isomorphisms (surjective order embeddings).
== In the context of search algorithms ==
In the context of search algorithms, monotonicity (also called consistency) is a condition applied to heuristic functions. A heuristic $h(n)$ is monotonic if, for every node $n$ and every successor $n'$ of $n$ generated by any action $a$, the estimated cost of reaching the goal from $n$ is no greater than the step cost of getting to $n'$ plus the estimated cost of reaching the goal from $n'$: $h(n) \leq c(n, a, n') + h(n')$.
This is a form of triangle inequality, with n, n', and the goal Gn closest to n. Because every monotonic heuristic is also admissible, monotonicity is a stricter requirement than admissibility. Some heuristic algorithms such as A* can be proven optimal provided that the heuristic they use is monotonic.
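Checking the condition over an explicit finite search graph is direct; the following Python sketch of ours (edge data made up for illustration) tests every (n, n') pair produced by the available actions:

def is_consistent(h, edges):
    # edges: iterable of (n, n2, cost) triples, one per available action.
    # h is monotonic (consistent) iff h(n) <= cost + h(n2) on every edge.
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)

# Tiny example: goal G with h(G) = 0 and straight-line-style estimates.
h = {"A": 4, "B": 2, "C": 3, "G": 0}
edges = [("A", "B", 2), ("A", "C", 1), ("B", "G", 2), ("C", "G", 3)]
print(is_consistent(h, edges))  # True: h never drops faster than the step cost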
== In Boolean functions ==
In Boolean algebra, a monotonic function is one such that for all ai and bi in {0,1}, if a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn (i.e. the Cartesian product {0, 1}n is ordered coordinatewise), then f(a1, ..., an) ≤ f(b1, ..., bn). In other words, a Boolean function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false. Graphically, this means that an n-ary Boolean function is monotonic when its representation as an n-cube labelled with truth values has no upward edge from true to false. (This labelled Hasse diagram is the dual of the function's labelled Venn diagram, which is the more common representation for n ≤ 3.)
The monotonic Boolean functions are precisely those that can be defined by an expression combining the inputs (which may appear more than once) using only the operators and and or (in particular not is forbidden). For instance "at least two of a, b, c hold" is a monotonic function of a, b, c, since it can be written for instance as ((a and b) or (a and c) or (b and c)).
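For small n the switching condition can be tested exhaustively. A Python sketch of ours, applied to the majority example above:

from itertools import product

def is_monotone(f, n):
    # f: function of n bits. Monotone iff flipping any single input from
    # 0 to 1 never switches the output from true to false.
    for bits in product((0, 1), repeat=n):
        for i in range(n):
            if bits[i] == 0:
                flipped = bits[:i] + (1,) + bits[i + 1:]
                if f(*bits) and not f(*flipped):
                    return False
    return True

maj = lambda a, b, c: (a and b) or (a and c) or (b and c)
print(is_monotone(maj, 3))                       # True
print(is_monotone(lambda a, b: a and not b, 2))  # False: it uses "not"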
The number of such functions on n variables is known as the Dedekind number of n.
SAT solving, generally an NP-hard task, can be achieved efficiently when all involved functions and predicates are monotonic and Boolean.
== See also ==
Monotone cubic interpolation
Pseudo-monotone operator
Spearman's rank correlation coefficient - measure of monotonicity in a set of data
Total monotonicity
Cyclical monotonicity
Operator monotone function
Monotone set function
Absolutely and completely monotonic functions and sequences
== Notes ==
== Bibliography ==
Bartle, Robert G. (1976). The elements of real analysis (second ed.).
Grätzer, George (1971). Lattice theory: first concepts and distributive lattices. W. H. Freeman. ISBN 0-7167-0442-0.
Pemberton, Malcolm; Rau, Nicholas (2001). Mathematics for economists: an introductory textbook. Manchester University Press. ISBN 0-7190-3341-1.
Renardy, Michael & Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. p. 356. ISBN 0-387-00444-0.
Riesz, Frigyes & Béla Szőkefalvi-Nagy (1990). Functional Analysis. Courier Dover Publications. ISBN 978-0-486-66289-3.
Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 978-0-13-604259-4.
Simon, Carl P.; Blume, Lawrence (April 1994). Mathematics for Economists (first ed.). Norton. ISBN 978-0-393-95733-4. (Definition 9.31)
== External links ==
"Monotone function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Convergence of a Monotonic Sequence by Anik Debnath and Thomas Roxlo (The Harker School), Wolfram Demonstrations Project.
Weisstein, Eric W. "Monotonic Function". MathWorld. | Wikipedia/Monotonic_function |
In mathematics, given two partially ordered sets P and Q, a function f: P → Q between them is Scott-continuous (named after the mathematician Dana Scott) if it preserves all directed suprema. That is, for every directed subset D of P with supremum in P, its image has a supremum in Q, and that supremum is the image of the supremum of D, i.e.
$\sqcup f[D] = f(\sqcup D)$, where $\sqcup$ is the directed join. When $Q$ is the poset of truth values, i.e. Sierpiński space, then Scott-continuous functions are characteristic functions of open sets, and thus Sierpiński space is the classifying space for open sets.
A subset O of a partially ordered set P is called Scott-open if it is an upper set and if it is inaccessible by directed joins, i.e. if all directed sets D with supremum in O have non-empty intersection with O. The Scott-open subsets of a partially ordered set P form a topology on P, the Scott topology. A function between partially ordered sets is Scott-continuous if and only if it is continuous with respect to the Scott topology.
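On a finite poset every directed set contains its own supremum, so inaccessibility by directed joins holds automatically and the Scott-open sets are exactly the upper sets. A Python sketch of ours of that reduced test (names invented):

def is_scott_open(O, elements, leq):
    # Finite case: Scott-open reduces to being an upper set, i.e. O
    # contains, with each x, every y with x <= y.
    return all(y in O for x in O for y in elements if leq(x, y))

elems = {1, 2, 3, 6}
divides = lambda x, y: y % x == 0
print(is_scott_open({2, 3, 6}, elems, divides))  # True: an upper set
print(is_scott_open({1, 2}, elems, divides))     # False: 6 is missing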
The Scott topology was first defined by Dana Scott for complete lattices and later defined for arbitrary partially ordered sets.
Scott-continuous functions are used in the study of models for lambda calculi and the denotational semantics of computer programs.
== Properties ==
A Scott-continuous function is always monotonic: if $a \leq_P b$ for elements $a, b \in P$, then $f(a) \leq_Q f(b)$. (Indeed, $\{a, b\}$ is a directed set with supremum $b$, so $f(b)$ must be the supremum of $\{f(a), f(b)\}$.)
A subset of a directed complete partial order is closed with respect to the Scott topology induced by the partial order if and only if it is a lower set and closed under suprema of directed subsets.
A directed complete partial order (dcpo) with the Scott topology is always a Kolmogorov space (i.e., it satisfies the T0 separation axiom). However, a dcpo with the Scott topology is a Hausdorff space if and only if the order is trivial. The Scott-open sets form a complete lattice when ordered by inclusion.
For any Kolmogorov space, the topology induces an order relation on that space, the specialization order: x ≤ y if and only if every open neighbourhood of x is also an open neighbourhood of y. The order relation of a dcpo D can be reconstructed from the Scott-open sets as the specialization order induced by the Scott topology. However, a dcpo equipped with the Scott topology need not be sober: the specialization order induced by the topology of a sober space makes that space into a dcpo, but the Scott topology derived from this order is finer than the original topology.
== Examples ==
The open sets in a given topological space when ordered by inclusion form a lattice on which the Scott topology can be defined. A subset X of a topological space T is compact with respect to the topology on T (in the sense that every open cover of X contains a finite subcover of X) if and only if the set of open neighbourhoods of X is open with respect to the Scott topology.
For CPO, the cartesian closed category of dcpo's, two particularly notable examples of Scott-continuous functions are curry and apply.
Nuel Belnap used Scott continuity to extend logical connectives to a four-valued logic.
== See also ==
Alexandrov topology
Upper topology
== Footnotes ==
== References ==
"Scott Topology". PlanetMath. | Wikipedia/Scott_topology |
In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set.
There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements.
Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied.
A generalization defines an order on an n-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered.
== Definition ==
The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols.
The formal notion starts with a finite set A, often called the alphabet, which is totally ordered. That is, for any two symbols a and b in A that are not the same symbol, either a < b or b < a.
The words of A are the finite sequences of symbols from A, including words of length 1 containing a single symbol, words of length 2 with 2 symbols, and so on, even including the empty sequence ε with no symbols at all. The lexicographical order on the set of all these finite words orders the words as follows:
Given two different words of the same length, say a = a1a2...ak and b = b1b2...bk, the order of the two words depends on the alphabetic order of the symbols in the first place i where the two words differ (counting from the beginning of the words): a < b if and only if ai < bi in the underlying order of the alphabet A.
If two words have different lengths, the usual lexicographical order pads the shorter one with "blanks" (a special symbol that is treated as smaller than every element of A) at the end until the words are the same length, and then the words are compared as in the previous case.
However, in combinatorics, another convention is frequently used for the second case, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called shortlex order.
In lexicographical order, the word "Thomas" appears before "Thompson" because they first differ at the fifth letter ('a' and 'p'), and letter 'a' comes before the letter 'p' in the alphabet. Because it is the first difference, in this case the 5th letter is the "most significant difference" for alphabetical ordering.
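Python's built-in string comparison already implements this rule, so the "Thomas" / "Thompson" example can be checked directly; the explicit function below is a minimal sketch of the same-length comparison rule, with the shorter-is-smaller prefix convention standing in for the "blank" padding described below.

```python
# Built-in comparison: decided at the 5th letter, where 'a' < 'p'.
assert "Thomas" < "Thompson"

def lex_less(a, b):
    # Compare symbol by symbol; the first difference decides.
    for x, y in zip(a, b):
        if x != y:
            return x < y
    # No difference on the common prefix: the shorter word is smaller,
    # which agrees with padding the shorter word with a least "blank".
    return len(a) < len(b)

assert lex_less("Thomas", "Thompson")
```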
An important property of the lexicographical order is that for each n, the set of words of length n is well-ordered by the lexicographical order (provided the alphabet is finite); that is, every decreasing sequence of words of length n is finite (or equivalently, every non-empty subset has a least element). It is not true that the set of all finite words is well-ordered; for example, the infinite set of words {b, ab, aab, aaab, ... } has no lexicographically earliest element.
== Numeral systems and dates ==
The lexicographical order is used not only in dictionaries, but also commonly for numbers and dates.
One of the drawbacks of the Roman numeral system is that it is not always immediately obvious which of two numbers is the smaller. On the other hand, with the positional notation of the Hindu–Arabic numeral system, comparing numbers is easy, because the natural order on natural numbers is the same as the variant shortlex of the lexicographic order. In fact, with positional notation, a natural number is represented by a sequence of numerical digits, and a natural number is larger than another one if either it has more digits (ignoring leading zeroes) or the number of digits is the same and the first (most significant) digit which differs is larger.
For real numbers written in decimal notation, a slightly different variant of the lexicographical order is used: the parts on the left of the decimal point are compared as before; if they are equal, the parts at the right of the decimal point are compared with the lexicographical order. The padding 'blank' in this context is a trailing "0" digit.
When negative numbers are also considered, one has to reverse the order for comparing negative numbers. This is not usually a problem for humans, but it may be for computers (testing the sign takes some time). This is one of the reasons for adopting two's complement representation for representing signed integers in computers.
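To make the computer-side issue concrete, here is a small Python sketch (the 32-bit width and the helper name `sortable` are choices for the example): byte-wise lexicographic comparison of two's-complement encodings disagrees with numeric order for negative values, and flipping the sign bit is one standard remedy.

```python
# -1 encodes as 0xFFFFFFFF, so it compares lexicographically *above* +1.
a = (-1).to_bytes(4, "big", signed=True)
b = (1).to_bytes(4, "big", signed=True)
assert a > b  # lexicographic order of the bytes, although -1 < 1 numerically

def sortable(n):
    # Biasing by 2^31 flips the sign bit, making byte order match numeric order.
    return ((n + (1 << 31)) & 0xFFFFFFFF).to_bytes(4, "big")

assert sortable(-1) < sortable(1)
```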
Another example of a non-dictionary use of lexicographical ordering appears in the ISO 8601 standard for dates, which expresses a date as YYYY-MM-DD. This formatting scheme has the advantage that the lexicographical order on sequences of characters that represent dates coincides with the chronological order: an earlier CE date is smaller in the lexicographical order than a later date up to year 9999. This date ordering makes computerized sorting of dates easier by avoiding the need for a separate sorting algorithm.
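This is easy to check in Python, where a plain string sort of ISO 8601 dates yields chronological order:

```python
dates = ["2021-09-30", "2021-10-01", "1999-12-31", "2021-09-07"]
# Lexicographic string sort coincides with chronological order, because
# ISO 8601 puts fixed-width fields in most-significant-first order.
assert sorted(dates) == ["1999-12-31", "2021-09-07", "2021-09-30", "2021-10-01"]
```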
== Monoid of words ==
The monoid of words over an alphabet A is the free monoid over A. That is, the elements of the monoid are the finite sequences (words) of elements of A (including the empty sequence, of length 0), and the operation (multiplication) is the concatenation of words. A word u is a prefix (or 'truncation') of another word v if there exists a word w such that v = uw. By this definition, the empty word (ε) is a prefix of every word, and every word is a prefix of itself (with w = ε); care must be taken if these cases are to be excluded.
With this terminology, the above definition of the lexicographical order becomes more concise: Given a partially or totally ordered set A, and two words a and b over A such that b is non-empty, then one has a < b under lexicographical order, if at least one of the following conditions is satisfied:
a is a prefix of b
there exist words u, v, w (possibly empty) and elements x and y of A such that
x < y
a = uxv
b = uyw
Notice that, due to the prefix condition in this definition, ε < b for all b ≠ ε, where ε is the empty word.
If < is a total order on A, then so is the lexicographic order on the words of A. However, in general this is not a well-order, even if the alphabet A is well-ordered. For instance, if A = {a, b}, the language {a^n b | n ≥ 0} has no least element in the lexicographical order: ... < aab < ab < b.
Since many applications require well orders, a variant of the lexicographical orders is often used. This well-order, sometimes called shortlex or quasi-lexicographical order, consists in considering first the lengths of the words (if length(a) < length(b), then a < b), and, if the lengths are equal, using the lexicographical order. If the order on A is a well-order, the same is true for the shortlex order.
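A minimal Python sketch of shortlex as a sort key (the key function is an illustration, not a standard library facility): words are ordered first by length, then lexicographically.

```python
def shortlex_key(word):
    # Length first, then the lexicographic order on equal lengths.
    return (len(word), word)

words = ["b", "ab", "aab", "ba", "a"]
assert sorted(words, key=shortlex_key) == ["a", "b", "ab", "ba", "aab"]
```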
== Cartesian products ==
The lexicographical order defines an order on an n-ary Cartesian product of ordered sets, which is a total order when all these sets are themselves totally ordered. An element of a Cartesian product E1 × ⋯ × En is a sequence whose ith element belongs to Ei for every i. As evaluating the lexicographical order of sequences compares only elements which have the same rank in the sequences, the lexicographical order extends to Cartesian products of ordered sets.
Specifically, given two partially ordered sets A and B, the lexicographical order on the Cartesian product A × B is defined as (a, b) ≤ (a′, b′) if and only if a < a′ or (a = a′ and b ≤ b′).
The result is a partial order. If A and B are each totally ordered, then the result is a total order as well. The lexicographical order of two totally ordered sets is thus a linear extension of their product order.
One can define similarly the lexicographic order on the Cartesian product of an infinite family of ordered sets, if the family is indexed by the natural numbers, or more generally by a well-ordered set. This generalized lexicographical order is a total order if each factor set is totally ordered.
Unlike the finite case, an infinite product of well-orders is not necessarily well-ordered by the lexicographical order. For instance, the set of countably infinite binary sequences (by definition, the set of functions from the natural numbers to {0, 1}, also known as the Cantor space {0, 1}^ω) is not well-ordered; the subset of sequences that have precisely one 1 (that is, {100000..., 010000..., 001000..., ...}) does not have a least element under the lexicographical order induced by 0 < 1, because 100000... > 010000... > 001000... > ... is an infinite descending chain. Similarly, the infinite lexicographic product is not Noetherian either, because 011111... < 101111... < 110111... < ... is an infinite ascending chain.
== Functions over a well-ordered set ==
The functions from a well-ordered set X to a totally ordered set Y may be identified with sequences indexed by X of elements of Y. They can thus be ordered by the lexicographical order, and for two such functions f and g, the lexicographical order is thus determined by their values for the smallest x such that f(x) ≠ g(x). If Y is also well-ordered and X is finite, then the resulting order is a well-order. As shown above, if X is infinite this is not the case.
== Finite subsets ==
In combinatorics, one often has to enumerate, and therefore to order, the finite subsets of a given set S. For this, one usually chooses an order on S. Then, sorting a subset of S is equivalent to converting it into an increasing sequence. The lexicographic order on the resulting sequences thus induces an order on the subsets, which is also called the lexicographical order.
In this context, one generally prefers to sort the subsets first by cardinality, as in the shortlex order. Therefore, in the following, we consider only orders on subsets of fixed cardinality.
For example, using the natural order of the integers, the lexicographical ordering on the subsets of three elements of S = {1, 2, 3, 4, 5, 6} is
123 < 124 < 125 < 126 < 134 < 135 < 136 < 145 < 146 < 156 <
234 < 235 < 236 < 245 < 246 < 256 < 345 < 346 < 356 < 456.
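In Python, itertools.combinations emits the k-element subsets of a sorted pool in exactly this lexicographic order, so the listing above can be reproduced directly:

```python
from itertools import combinations

subsets = list(combinations([1, 2, 3, 4, 5, 6], 3))
assert len(subsets) == 20                 # C(6, 3) = 20 subsets in total
assert subsets[0] == (1, 2, 3)            # the listing starts at 123
assert subsets[-1] == (4, 5, 6)           # and ends at 456
```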
For ordering finite subsets of a given cardinality of the natural numbers, the colexicographical order (see below) is often more convenient, because all initial segments are finite, and thus the colexicographical order defines an order isomorphism between the natural numbers and the set of sets of n natural numbers. This is not the case for the lexicographical order, with which we have, for example, 12n < 134 for every n > 2.
== Group orders of Zn ==
Let Z^n be the free abelian group of rank n, whose elements are sequences of n integers and whose operation is addition. A group order on Z^n is a total order that is compatible with addition, that is, a < b if and only if a + c < b + c.
The lexicographical ordering is a group order on Z^n. The lexicographical ordering may also be used to characterize all group orders on Z^n. In fact, n linear forms with real coefficients define a map from Z^n into R^n, which is injective if the forms are linearly independent (it may also be injective if the forms are dependent; see below). The lexicographic order on the image of this map induces a group order on Z^n. Robbiano's theorem states that every group order may be obtained in this way.
More precisely, given a group order on Z^n, there exist an integer s ≤ n and s linear forms with real coefficients such that the induced map φ from Z^n into R^s has the following properties:
φ is injective;
the resulting isomorphism from Z^n to the image of φ is an order isomorphism when the image is equipped with the lexicographical order on R^s.
== Colexicographic order ==
The colexicographic or colex order is a variant of the lexicographical order that is obtained by reading finite sequences from the right to the left instead of reading them from the left to the right. More precisely, whereas the lexicographical order between two sequences is defined by
a1a2...ak <lex b1b2...bk if ai < bi for the first i where ai and bi differ,
the colexicographical order is defined by
a1a2...ak <colex b1b2...bk if ai < bi for the last i where ai and bi differ.
In general, the difference between the colexicographical order and the lexicographical order is not very significant. However, when considering increasing sequences, typically for coding subsets, the two orders differ significantly.
For example, for ordering the increasing sequences (or the sets) of two natural integers, the lexicographical order begins by
12 < 13 < 14 < 15 < ... < 23 < 24 < 25 < ... < 34 < 35 < ... < 45 < ...,
and the colexicographic order begins by
12 < 13 < 23 < 14 < 24 < 34 < 15 < 25 < 35 < 45 < ....
The main property of the colexicographical order for increasing sequences of a given length is that every initial segment is finite. In other words, the colexicographical order for increasing sequences of a given length induces an order isomorphism with the natural numbers, and allows enumerating these sequences. This is frequently used in combinatorics, for example in the proof of the Kruskal–Katona theorem.
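Since colex compares the last differing position first, increasing sequences can be colex-sorted in Python by using the reversed tuple as the sort key (an illustrative trick, not a library feature); this reproduces the listing above.

```python
from itertools import combinations

# Colex order on increasing pairs: sort by the reversed tuple, so the
# last (largest) entry is compared first.
pairs = combinations(range(1, 6), 2)
colex = sorted(pairs, key=lambda t: t[::-1])
assert colex[:6] == [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4)]
```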
== Monomials ==
When considering polynomials, the order of the terms does not matter in general, as the addition is commutative. However, some algorithms, such as polynomial long division, require the terms to be in a specific order. Many of the main algorithms for multivariate polynomials are related to Gröbner bases, a concept that requires the choice of a monomial order, that is, a total order which is compatible with the monoid structure of the monomials. Here "compatible" means that a < b implies ac < bc,
if the monoid operation is denoted multiplicatively. This compatibility implies that the product of a polynomial by a monomial does not change the order of the terms. For Gröbner bases, a further condition must be satisfied, namely that every non-constant monomial is greater than the monomial 1. However this condition is not needed for other related algorithms, such as the algorithms for the computation of the tangent cone.
As Gröbner bases are defined for polynomials in a fixed number of variables, it is common to identify monomials (for example x1x2^3x4x5^2) with their exponent vectors (here [1, 3, 0, 1, 2]). If n is the number of variables, every monomial order is thus the restriction to N^n of a monomial order of Z^n (see § Group orders of Zn above for a classification).
One of these admissible orders is the lexicographical order. It is, historically, the first to have been used for defining Gröbner bases, and is sometimes called pure lexicographical order for distinguishing it from other orders that are also related to a lexicographical order.
Another one consists in comparing first the total degrees, and then resolving the conflicts by using the lexicographical order. This order is not widely used, as either the lexicographical order or the degree reverse lexicographical order generally has better properties.
The degree reverse lexicographical order consists also in comparing first the total degrees, and, in case of equality of the total degrees, using the reverse of the colexicographical order. That is, given two exponent vectors, one has [a1, …, an] < [b1, …, bn] if either a1 + ⋯ + an < b1 + ⋯ + bn, or a1 + ⋯ + an = b1 + ⋯ + bn and ai > bi for the largest i for which ai ≠ bi.
For this ordering, the monomials of degree one have the same order as the corresponding indeterminates (this would not be the case if the reverse lexicographical order were used). For comparing monomials in two variables of the same total degree, this order is the same as the lexicographic order. This is not the case with more variables. For example, for exponent vectors of monomials of degree two in three variables, one has for the degree reverse lexicographic order:
[0, 0, 2] < [0, 1, 1] < [1, 0, 1] < [0, 2, 0] < [1, 1, 0] < [2, 0, 0]
For the lexicographical order, the same exponent vectors are ordered as
[0, 0, 2] < [0, 1, 1] < [0, 2, 0] < [1, 0, 1] < [1, 1, 0] < [2, 0, 0].
A useful property of the degree reverse lexicographical order is that a homogeneous polynomial is a multiple of the least indeterminate if and only if its leading monomial (its greatest monomial) is a multiple of this least indeterminate.
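As a sketch of how these orders can be realized on exponent vectors in Python (the key functions are illustrative helpers, with the first entry taken as the most significant variable): lex is the tuple order itself, while degrevlex compares total degree and then the negated, reversed tuple. Both reproduce the degree-two listings above.

```python
def lex_key(e):
    return e  # tuples already compare lexicographically in Python

def degrevlex_key(e):
    # Total degree first; ties are broken at the *last* differing exponent,
    # where the larger exponent makes the monomial *smaller*.
    return (sum(e), tuple(-x for x in reversed(e)))

deg2 = [(0,0,2), (0,1,1), (0,2,0), (1,0,1), (1,1,0), (2,0,0)]
assert sorted(deg2, key=degrevlex_key) == \
    [(0,0,2), (0,1,1), (1,0,1), (0,2,0), (1,1,0), (2,0,0)]
assert sorted(deg2, key=lex_key) == \
    [(0,0,2), (0,1,1), (0,2,0), (1,0,1), (1,1,0), (2,0,0)]
```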
== See also ==
Collation
Kleene–Brouwer order
Lexicographic preferences - an application of lexicographic order in economics.
Lexicographic optimization - an algorithmic problem of finding a lexicographically-maximal element.
Lexicographic order topology on the unit square
Lexicographic ordering in tensor abstract index notation
Lexicographically minimal string rotation
Leximin order
Long line (topology)
Lyndon word
Pre-order - the name of the lexicographical order (of bits) in a binary tree traversal
Star product, a different way of combining partial orders
Shortlex order
Orders on the Cartesian product of totally ordered sets
== References ==
== External links ==
Learning materials related to Lexicographic and colexicographic order at Wikiversity | Wikipedia/Lexicographic_order |
In mathematics and theoretical computer science, the Lawson topology, named after Jimmie D. Lawson, is a topology on partially ordered sets (posets) used in the study of domain theory. The lower topology on a poset P is generated by the subbasis consisting of all complements of principal filters on P. The Lawson topology on P is the smallest common refinement of the lower topology and the Scott topology on P.
== Properties ==
If P is a complete upper semilattice, the Lawson topology on P is always a complete T1 topology.
== See also ==
Formal ball
== References ==
G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. Mislove, D. S. Scott (2003), Continuous Lattices and Domains, Encyclopedia of Mathematics and its Applications, Cambridge University Press. ISBN 0-521-80338-1
== External links ==
"How Do Domains Model Topologies?," Paweł Waszkiewicz, Electronic Notes in Theoretical Computer Science 83 (2004) | Wikipedia/Lawson_topology |
In the mathematical area of order theory, there are various notions of the common concept of distributivity, applied to the formation of suprema and infima. Most of these apply to partially ordered sets that are at least lattices, but the concept can in fact reasonably be generalized to semilattices as well.
== Distributive lattices ==
Probably the most common type of distributivity is the one defined for lattices, where the formation of binary suprema and infima provide the total operations of join (∨) and meet (∧). Distributivity of these two operations is then expressed by requiring that the identity x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
hold for all elements x, y, and z. This distributivity law defines the class of distributive lattices. Note that this requirement can be rephrased by saying that binary meets preserve binary joins. The above statement is known to be equivalent to its order dual
x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)
such that one of these properties suffices to define distributivity for lattices. Typical examples of distributive lattices are totally ordered sets, Boolean algebras, and Heyting algebras. Every finite distributive lattice is isomorphic to a lattice of sets, ordered by inclusion (Birkhoff's representation theorem).
== Distributivity for semilattices ==
A semilattice is a partially ordered set with only one of the two lattice operations, either a meet- or a join-semilattice. Given that there is only one binary operation, distributivity obviously cannot be defined in the standard way. Nevertheless, because of the interaction of the single operation with the given order, the following definition of distributivity remains possible. A meet-semilattice is distributive, if for all a, b, and x:
If a ∧ b ≤ x then there exist a′ and b′ such that a ≤ a′, b ≤ b′ and x = a′ ∧ b′.
Distributive join-semilattices are defined dually: a join-semilattice is distributive, if for all a, b, and x:
If x ≤ a ∨ b then there exist a′ and b′ such that a′ ≤ a, b′ ≤ b and x = a′ ∨ b′.
In either case, a′ and b′ need not be unique.
These definitions are justified by the fact that given any lattice L, the following statements are all equivalent:
L is distributive as a meet-semilattice
L is distributive as a join-semilattice
L is a distributive lattice.
Thus any distributive meet-semilattice in which binary joins exist is a distributive lattice.
A join-semilattice is distributive if and only if the lattice of its ideals (under inclusion) is distributive.
This definition of distributivity allows generalizing some statements about distributive lattices to distributive semilattices.
== Distributivity laws for complete lattices ==
For a complete lattice, arbitrary subsets have both infima and suprema and thus infinitary meet and join operations are available. Several extended notions of distributivity can thus be described. For example, for the infinite distributive law, finite meets may distribute over arbitrary joins, i.e.
x ∧ ⋁S = ⋁{ x ∧ s | s ∈ S }
may hold for all elements x and all subsets S of the lattice. Complete lattices with this property are called frames, locales or complete Heyting algebras. They arise in connection with pointless topology and Stone duality. This distributive law is not equivalent to its dual statement
x ∨ ⋀S = ⋀{ x ∨ s | s ∈ S }
which defines the class of dual frames or complete co-Heyting algebras.
Now one can go even further and define orders where arbitrary joins distribute over arbitrary meets. Such structures are called completely distributive lattices. However, expressing this requires formulations that are a little more technical. Consider a doubly indexed family {xj,k | j in J, k in K(j)} of elements of a complete lattice, and let F be the set of choice functions f choosing for each index j of J some index f(j) in K(j). A complete lattice is completely distributive if for all such data the following statement holds:
⋀j∈J ⋁k∈K(j) xj,k = ⋁f∈F ⋀j∈J xj,f(j)
Complete distributivity is again a self-dual property, i.e. dualizing the above statement yields the same class of complete lattices. Completely distributive complete lattices (also called completely distributive lattices for short) are indeed highly special structures. See the article on completely distributive lattices.
== Distributive elements in arbitrary lattices ==
In an arbitrary lattice, an element x is called a distributive element if ∀y,z: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z).
An element x is called a dual distributive element if ∀y,z: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z).
In a distributive lattice, every element is of course both distributive and dual distributive.
In a non-distributive lattice, there may be elements that are distributive, but not dual distributive (and vice versa).
For example, in the depicted pentagon lattice N5, the element x is distributive, but not dual distributive, since x ∧ (y ∨ z) = x ∧ 1 = x ≠ z = 0 ∨ z = (x ∧ y) ∨ (x ∧ z).
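The pentagon computation can be verified mechanically. The sketch below (labels 0, z, x, y, 1 follow the example above; the meet/join helpers are generic bound searches written for this example, not a library API) confirms that x satisfies the distributive identity here while failing the dual one.

```python
E = "0zxy1"                      # N5: 0 < z < x < 1, with y incomparable to x, z
leq = {(a, b) for a in E for b in E
       if a == b or a == "0" or b == "1" or (a, b) == ("z", "x")}

def meet(a, b):                  # greatest lower bound in this finite lattice
    lower = [c for c in E if (c, a) in leq and (c, b) in leq]
    return next(c for c in lower if all((d, c) in leq for d in lower))

def join(a, b):                  # least upper bound
    upper = [c for c in E if (a, c) in leq and (b, c) in leq]
    return next(c for c in upper if all((c, d) in leq for d in upper))

assert join("x", meet("y", "z")) == meet(join("x", "y"), join("x", "z")) == "x"
assert meet("x", join("y", "z")) == "x"                # x ∧ (y ∨ z) = x
assert join(meet("x", "y"), meet("x", "z")) == "z"     # (x ∧ y) ∨ (x ∧ z) = z
```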
In an arbitrary lattice L, the following are equivalent:
x is a distributive element;
The map φ defined by φ(y) = x ∨ y is a lattice homomorphism from L to the upper closure ↑x = { y ∈ L: x ≤ y };
The binary relation Θx on L defined by y Θx z if x ∨ y = x ∨ z is a congruence relation, that is, an equivalence relation compatible with ∧ and ∨.
In an arbitrary lattice, if x1 and x2 are distributive elements, then so is x1 ∨ x2.
== Literature ==
Distributivity is a basic concept that is treated in any textbook on lattice and order theory. See the literature given for the articles on order theory and lattice theory. More specific literature includes:
G. N. Raney, Completely distributive complete lattices, Proceedings of the American Mathematical Society, 3: 677 - 680, 1952.
== References == | Wikipedia/Distributivity_(order_theory) |
In the mathematical area of order theory, completeness properties assert the existence of certain infima or suprema of a given partially ordered set (poset). The most familiar example is the completeness of the real numbers. A special use of the term refers to complete partial orders or complete lattices. However, many other interesting notions of completeness exist.
The motivation for considering completeness properties derives from the great importance of suprema (least upper bounds, joins, "
∨") and infima (greatest lower bounds, meets, "∧
") to the theory of partial orders. Finding a supremum means to single out one distinguished least element from the set of upper bounds. On the one hand, these special elements often embody certain concrete properties that are interesting for the given application (such as being the least common multiple of a set of numbers or the union of a collection of sets). On the other hand, the knowledge that certain types of subsets are guaranteed to have suprema or infima enables us to consider the evaluation of these elements as total operations on a partially ordered set. For this reason, posets with certain completeness properties can often be described as algebraic structures of a certain kind. In addition, studying the properties of the newly obtained operations yields further interesting subjects.
== Types of completeness properties ==
All completeness properties are described along a similar scheme: one describes a certain class of subsets of a partially ordered set that are required to have a supremum or required to have an infimum. Hence every completeness property has its dual, obtained by inverting the order-dependent definitions in the given statement. Some of the notions are usually not dualized while others may be self-dual (i.e. equivalent to their dual statements).
=== Least and greatest elements ===
The easiest example of a supremum is the empty one, i.e. the supremum of the empty set. By definition, this is the least element among all elements that are greater than each member of the empty set. But this is just the least element of the whole poset, if it has one, since the empty subset of a poset P is conventionally considered to be both bounded from above and from below, with every element of P being both an upper and lower bound of the empty subset. Other common names for the least element are bottom and zero (0). The dual notion, the empty lower bound, is the greatest element, top, or unit (1).
Posets that have a bottom are sometimes called pointed, while posets with a top are called unital or topped. An order that has both a least and a greatest element is bounded. However, this should not be confused with the notion of bounded completeness given below.
=== Finite completeness ===
Further simple completeness conditions arise from the consideration of all non-empty finite sets. An order in which all non-empty finite sets have both a supremum and an infimum is called a lattice. It suffices to require that all suprema and infima of two elements exist to obtain all non-empty finite ones; a straightforward induction argument shows that every finite non-empty supremum/infimum can be decomposed into a finite number of binary suprema/infima. Thus the central operations of lattices are binary suprema
∨ and infima ∧. It is in this context that the terms meet for ∧ and join for ∨ are most common.
A poset in which only non-empty finite suprema are known to exist is therefore called a join-semilattice. The dual notion is meet-semilattice.
=== Further completeness conditions ===
The strongest form of completeness is the existence of all suprema and all infima. The posets with this property are the complete lattices. However, using the given order, one can restrict to further classes of (possibly infinite) subsets, that do not yield this strong completeness at once.
If all directed subsets of a poset have a supremum, then the order is a directed-complete partial order (dcpo). These are especially important in domain theory. The seldom-considered dual notion to a dcpo is the filtered-complete poset. Dcpos with a least element ("pointed dcpos") are one of the possible meanings of the phrase complete partial order (cpo).
If every subset that has some upper bound has also a least upper bound, then the respective poset is called bounded complete. The term is used widely with this definition that focuses on suprema and there is no common name for the dual property. However, bounded completeness can be expressed in terms of other completeness conditions that are easily dualized (see below). Although concepts with the names "complete" and "bounded" were already defined, confusion is unlikely to occur since one would rarely speak of a "bounded complete poset" when meaning a "bounded cpo" (which is just a "cpo with greatest element"). Likewise, "bounded complete lattice" is almost unambiguous, since one would not state the boundedness property for complete lattices, where it is implied anyway. Also note that the empty set usually has upper bounds (if the poset is non-empty) and thus a bounded-complete poset has a least element.
One may also consider the subsets of a poset which are totally ordered, i.e. the chains. If all chains have a supremum, the order is called chain complete. Again, this concept is rarely needed in the dual form.
== Relationships between completeness properties ==
It was already observed that binary meets/joins yield all non-empty finite meets/joins. Likewise, many other (combinations) of the above conditions are equivalent.
The best-known example is the existence of all suprema, which is in fact equivalent to the existence of all infima. Indeed, for any subset X of a poset, one can consider its set of lower bounds B. The supremum of B is then equal to the infimum of X: since each element of X is an upper bound of B, sup B is smaller than all elements of X, i.e. sup B is in B. It is the greatest element of B and hence the infimum of X. In a dual way, the existence of all infima implies the existence of all suprema.
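This argument can be traced on a small concrete complete lattice, for instance (an illustrative choice) the divisors of 30 ordered by divisibility, where join is lcm and meet is gcd: the infimum of a subset X reappears as the supremum of its set of lower bounds.

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

divisors = [d for d in range(1, 31) if 30 % d == 0]
X = [6, 15]
B = [d for d in divisors if all(x % d == 0 for x in X)]  # lower bounds of X
assert reduce(lcm, B) == reduce(gcd, X) == 3             # sup B equals inf X
```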
Bounded completeness can also be characterized differently. By an argument similar to the above, one finds that the supremum of a set with upper bounds is the infimum of the set of upper bounds. Consequently, bounded completeness is equivalent to the existence of all non-empty infima.
A poset is a complete lattice if and only if it is a cpo and a join-semilattice. Indeed, for any subset X, the set of all finite suprema (joins) of X is directed and the supremum of this set (which exists by directed completeness) is equal to the supremum of X. Thus every set has a supremum and by the above observation we have a complete lattice. The other direction of the proof is trivial.
Assuming the axiom of choice, a poset is chain complete if and only if it is a dcpo.
== Completeness in terms of universal algebra ==
As explained above, the presence of certain completeness conditions allows one to regard the formation of certain suprema and infima as total operations of a partially ordered set. It turns out that in many cases it is possible to characterize completeness solely by considering appropriate algebraic structures in the sense of universal algebra, which are equipped with operations like ∨ or ∧
. By imposing additional conditions (in form of suitable identities) on these operations, one can then indeed derive the underlying partial order exclusively from such algebraic structures. Details on this characterization can be found in the articles on the "lattice-like" structures for which this is typically considered: see semilattice, lattice, Heyting algebra, and Boolean algebra. Note that the latter two structures extend the application of these principles beyond mere completeness requirements by introducing an additional operation of negation.
== Completeness in terms of adjunctions ==
Another interesting way to characterize completeness properties is provided through the concept of (monotone) Galois connections, i.e. adjunctions between partial orders. In fact this approach offers additional insights both into the nature of many completeness properties and into the importance of Galois connections for order theory. The general observation on which this reformulation of completeness is based is that the construction of certain suprema or infima provides left or right adjoint parts of suitable Galois connections.
Consider a partially ordered set (X, ≤). As a first simple example, let 1 = {*} be a specified one-element set with the only possible partial ordering. There is an obvious mapping j: X → 1 with j(x) = * for all x in X. X has a least element if and only if the function j has a lower adjoint j*: 1 → X. Indeed the definition for Galois connections yields that in this case j*(*) ≤ x if and only if * ≤ j(x), where the right hand side obviously holds for any x. Dually, the existence of an upper adjoint for j is equivalent to X having a greatest element.
Another simple mapping is the function q: X → X × X given by q(x) = (x, x). Naturally, the intended ordering relation for X × X is just the usual product order. q has a lower adjoint q* if and only if all binary joins in X exist. Conversely, the join operation
∨: X × X → X can always provide the (necessarily unique) lower adjoint for q. Dually, q allows for an upper adjoint if and only if X has all binary meets. Thus the meet operation ∧, if it exists, always is an upper adjoint. If both ∨ and ∧ exist and, in addition, ∧ is also a lower adjoint, then the poset X is a Heyting algebra—another important special class of partial orders.
Further completeness statements can be obtained by exploiting suitable completion procedures. For example, it is well known that the collection of all lower sets of a poset X, ordered by subset inclusion, yields a complete lattice D(X) (the downset-lattice). Furthermore, there is an obvious embedding e: X → D(X) that maps each element x of X to its principal ideal {y in X | y ≤ x}. A little reflection now shows that e has a lower adjoint if and only if X is a complete lattice. In fact, this lower adjoint will map any lower set of X to its supremum in X. Composing this lower adjoint with the function that maps any subset of X to its lower closure (again an adjunction for the inclusion of lower sets in the powerset), one obtains the usual supremum map from the powerset 2^X to X. As before, another important situation occurs whenever this supremum map is also an upper adjoint: in this case the complete lattice X is constructively completely distributive. See also the articles on complete distributivity and distributivity (order theory).
The considerations in this section suggest a reformulation of (parts of) order theory in terms of category theory, where properties are usually expressed by referring to the relationships (morphisms, more specifically: adjunctions) between objects, instead of considering their internal structure. For more detailed considerations of this relationship see the article on the categorical formulation of order theory.
== See also ==
Completely distributive lattice
Total order – Order whose elements are all comparable
== Notes ==
== References ==
G. Markowsky and B.K. Rosen. Bases for chain-complete posets IBM Journal of Research and Development. March 1976.
Stephen Bloom. Varieties of ordered algebras Journal of Computer and System Sciences. October 1976.
Michael Smyth. Power domains Journal of Computer and System Sciences. 1978.
Daniel Lehmann. On the algebra of order Journal of Computer and System Sciences. August 1980. | Wikipedia/Completeness_(order_theory) |
Domain theory is a branch of mathematics that studies special kinds of partially ordered sets (posets) commonly called domains. Consequently, domain theory can be considered as a branch of order theory. The field has major applications in computer science, where it is used to specify denotational semantics, especially for functional programming languages. Domain theory formalizes the intuitive ideas of approximation and convergence in a very general way and is closely related to topology.
== Motivation and intuition ==
The primary motivation for the study of domains, which was initiated by Dana Scott in the late 1960s, was the search for a denotational semantics of the lambda calculus. In this formalism, one considers "functions" specified by certain terms in the language. In a purely syntactic way, one can go from simple functions to functions that take other functions as their input arguments. Using again just the syntactic transformations available in this formalism, one can obtain so-called fixed-point combinators (the best-known of which is the Y combinator); these, by definition, have the property that f(Y(f)) = Y(f) for all functions f.
To formulate such a denotational semantics, one might first try to construct a model for the lambda calculus, in which a genuine (total) function is associated with each lambda term. Such a model would formalize a link between the lambda calculus as a purely syntactic system and the lambda calculus as a notational system for manipulating concrete mathematical functions. The combinator calculus is such a model. However, the elements of the combinator calculus are functions from functions to functions; in order for the elements of a model of the lambda calculus to be of arbitrary domain and range, they could not be true functions, only partial functions.
Scott got around this difficulty by formalizing a notion of "partial" or "incomplete" information to represent computations that have not yet returned a result. This was modeled by considering, for each domain of computation (e.g. the natural numbers), an additional element that represents an undefined output, i.e. the "result" of a computation that never ends. In addition, the domain of computation is equipped with an ordering relation, in which the "undefined result" is the least element.
The important step to finding a model for the lambda calculus is to consider only those functions (on such a partially ordered set) that are guaranteed to have least fixed points. The set of these functions, together with an appropriate ordering, is again a "domain" in the sense of the theory. But the restriction to a subset of all available functions has another great benefit: it is possible to obtain domains that contain their own function spaces, i.e. one gets functions that can be applied to themselves.
Beside these desirable properties, domain theory also allows for an appealing intuitive interpretation. As mentioned above, the domains of computation are always partially ordered. This ordering represents a hierarchy of information or knowledge. The higher an element is within the order, the more specific it is and the more information it contains. Lower elements represent incomplete knowledge or intermediate results.
Computation then is modeled by applying monotone functions repeatedly on elements of the domain in order to refine a result. Reaching a fixed point is equivalent to finishing a calculation. Domains provide a superior setting for these ideas since fixed points of monotone functions can be guaranteed to exist and, under additional restrictions, can be approximated from below.
== A guide to the formal definitions ==
In this section, the central concepts and definitions of domain theory will be introduced. The above intuition of domains being information orderings will be emphasized to motivate the mathematical formalization of the theory. The precise formal definitions are to be found in the dedicated articles for each concept. A list of general order-theoretic definitions, which include domain-theoretic notions as well, can be found in the order theory glossary. The most important concepts of domain theory will nonetheless be introduced below.
=== Directed sets as converging specifications ===
As mentioned before, domain theory deals with partially ordered sets to model a domain of computation. The goal is to interpret the elements of such an order as pieces of information or (partial) results of a computation, where elements that are higher in the order extend the information of the elements below them in a consistent way. From this simple intuition it is already clear that domains often do not have a greatest element, since this would mean that there is an element that contains the information of all other elements—a rather uninteresting situation.
A concept that plays an important role in the theory is that of a directed subset of a domain; a directed subset is a non-empty subset of the order in which any two elements have an upper bound that is an element of this subset. In view of our intuition about domains, this means that any two pieces of information within the directed subset are consistently extended by some other element in the subset. Hence we can view directed subsets as consistent specifications, i.e. as sets of partial results in which no two elements are contradictory. This interpretation can be compared with the notion of a convergent sequence in analysis, where each element is more specific than the preceding one. Indeed, in the theory of metric spaces, sequences play a role that is in many aspects analogous to the role of directed sets in domain theory.
Now, as in the case of sequences, we are interested in the limit of a directed set. According to what was said above, this would be an element that is the most general piece of information that extends the information of all elements of the directed set, i.e. the unique element that contains exactly the information that was present in the directed set, and nothing more. In the formalization of order theory, this is just the least upper bound of the directed set. As in the case of the limit of a sequence, the least upper bound of a directed set does not always exist.
Naturally, one has a special interest in those domains of computation in which all consistent specifications converge, i.e. in orders in which all directed sets have a least upper bound. This property defines the class of directed-complete partial orders, or dcpo for short. Indeed, most considerations of domain theory only consider orders that are at least directed complete.
From the underlying idea of partially specified results as representing incomplete knowledge, one derives another desirable property: the existence of a least element. Such an element models that state of no information—the place where most computations start. It also can be regarded as the output of a computation that does not return any result at all.
=== Computations and domains ===
Now that we have some basic formal descriptions of what a domain of computation should be, we can turn to the computations themselves. Clearly, these have to be functions, taking inputs from some computational domain and returning outputs in some (possibly different) domain. However, one would also expect that the output of a function will contain more information when the information content of the input is increased. Formally, this means that we want a function to be monotonic.
When dealing with dcpos, one might also want computations to be compatible with the formation of limits of a directed set. Formally, this means that, for some function f, the image f(D) of a directed set D (i.e. the set of the images of each element of D) is again directed and has as a least upper bound the image of the least upper bound of D. One could also say that f preserves directed suprema. Also note that, by considering directed sets of two elements, such a function also has to be monotonic. These properties give rise to the notion of a Scott-continuous function. Since this is usually unambiguous, one may also speak simply of continuous functions.
=== Approximation and finiteness ===
Domain theory is a purely qualitative approach to modeling the structure of information states. One can say that something contains more information, but the amount of additional information is not specified. Yet, there are some situations in which one wants to speak about elements that are in a sense much simpler (or much more incomplete) than a given state of information. For example, in the natural subset-inclusion ordering on some powerset, any infinite element (i.e. set) is much more "informative" than any of its finite subsets.
If one wants to model such a relationship, one may first want to consider the induced strict order < of a domain with order ≤. However, while this is a useful notion in the case of total orders, it does not tell us much in the case of partially ordered sets. Considering again inclusion-orders of sets, a set is already strictly smaller than another, possibly infinite, set if it contains just one less element. One would, however, hardly agree that this captures the notion of being "much simpler".
=== Way-below relation ===
A more elaborate approach leads to the definition of the so-called order of approximation, which is more suggestively also called the way-below relation. An element x is way below an element y, if, for every directed set D with supremum such that
y ⊑ sup D, there is some element d in D such that x ⊑ d. Then one also says that x approximates y and writes x ≪ y. This does imply that x ⊑ y,
since the singleton set {y} is directed. For an example, in an ordering of sets, an infinite set is way above any of its finite subsets. On the other hand, consider the directed set (in fact, the chain) of finite sets
{0}, {0, 1}, {0, 1, 2}, …
Since the supremum of this chain is the set of all natural numbers N, this shows that no infinite set is way below N.
However, being way below some element is a relative notion and does not reveal much about an element alone. For example, one would like to characterize finite sets in an order-theoretic way, but even infinite sets can be way below some other set. The special property of these finite elements x is that they are way below themselves, i.e.
x ≪ x.
An element with this property is also called compact. Yet, such elements do not have to be "finite" or "compact" in any other mathematical usage of the terms. The notation is nonetheless motivated by certain parallels to the respective notions in set theory and topology. The compact elements of a domain have the important special property that they cannot be obtained as a limit of a directed set in which they did not already occur.
Many other important results about the way-below relation support the claim that this definition is appropriate to capture many important aspects of a domain.
=== Bases of domains ===
The previous thoughts raise another question: is it possible to guarantee that all elements of a domain can be obtained as a limit of much simpler elements? This is quite relevant in practice, since we cannot compute infinite objects but we may still hope to approximate them arbitrarily closely.
More generally, we would like to restrict to a certain subset of elements as being sufficient for getting all other elements as least upper bounds. Hence, one defines a base of a poset P as being a subset B of P, such that, for each x in P, the set of elements in B that are way below x contains a directed set with supremum x. The poset P is a continuous poset if it has some base. Especially, P itself is a base in this situation. In many applications, one restricts to continuous (d)cpos as a main object of study.
Finally, an even stronger restriction on a partially ordered set is given by requiring the existence of a base of finite elements. Such a poset is called algebraic. From the viewpoint of denotational semantics, algebraic posets are particularly well-behaved, since they allow for the approximation of all elements even when restricting to finite ones. As remarked before, not every finite element is "finite" in a classical sense and it may well be that the finite elements constitute an uncountable set.
In some cases, however, the base for a poset is countable. In this case, one speaks of an ω-continuous poset. Accordingly, if the countable base consists entirely of finite elements, we obtain an order that is ω-algebraic.
=== Special types of domains ===
A simple special case of a domain is known as an elementary or flat domain. This consists of a set of incomparable elements, such as the integers, along with a single "bottom" element considered smaller than all other elements.
One can obtain a number of other interesting special classes of ordered structures that could be suitable as "domains". We already mentioned continuous posets and algebraic posets. More special versions of both are continuous and algebraic cpos. Adding even further completeness properties one obtains continuous lattices and algebraic lattices, which are just complete lattices with the respective properties. For the algebraic case, one finds broader classes of posets that are still worth studying: historically, the Scott domains were the first structures to be studied in domain theory. Still wider classes of domains are constituted by SFP-domains, L-domains, and bifinite domains.
All of these classes of orders can be cast into various categories of dcpos, using functions that are monotone, Scott-continuous, or even more specialized as morphisms. Finally, note that the term domain itself is not exact and thus is only used as an abbreviation when a formal definition has been given before or when the details are irrelevant.
== Important results ==
A poset D is a dcpo if and only if each chain in D has a supremum. (The 'if' direction relies on the axiom of choice.)
If f is a continuous function on a domain D then it has a least fixed point, given as the least upper bound of all finite iterations of f on the least element ⊥:
fix(f) = ⨆n∈N f^n(⊥).
This is the Kleene fixed-point theorem. The ⊔ symbol is the directed join.
== Generalizations ==
A continuity space is a generalization of metric spaces and posets that can be used to unify the notions of metric spaces and domains.
== See also ==
Category theory
Denotational semantics
Scott domain
Scott information system
Type theory
== Further reading ==
G. Gierz; K. H. Hofmann; K. Keimel; J. D. Lawson; M. Mislove; D. S. Scott (2003). "Continuous Lattices and Domains". Encyclopedia of Mathematics and its Applications. Vol. 93. Cambridge University Press. ISBN 0-521-80338-1.
Samson Abramsky, Achim Jung (1994). "Domain theory" (PDF). In S. Abramsky; D. M. Gabbay; T. S. E. Maibaum (eds.). Handbook of Logic in Computer Science. Vol. III. Oxford University Press. pp. 1–168. ISBN 0-19-853762-X. Retrieved 2007-10-13.
Alex Simpson (2001–2002). "Part III: Topological Spaces from a Computational Perspective". Mathematical Structures for Semantics. Archived from the original on 2005-04-27. Retrieved 2007-10-13.
D. S. Scott (1975). "Data types as lattices". In Müller, G.H.; Oberschelp, A.; Potthoff, K. (eds.). ISILC Logic Conference. Lecture Notes in Mathematics. Vol. 499. Springer-Verlag. pp. 579–651. doi:10.1007/BFb0079432. ISBN 978-3-540-07534-9.
Scott, Dana (1976). "Data Types as Lattices". SIAM Journal on Computing. 5 (3): 522–587. doi:10.1137/0205037.
Carl A. Gunter (1992). Semantics of Programming Languages. MIT Press. ISBN 9780262570954.
B. A. Davey; H. A. Priestley (2002). Introduction to Lattices and Order (2nd ed.). Cambridge University Press. ISBN 0-521-78451-4.
Carl Hewitt; Henry Baker (August 1977). "Actors and Continuous Functionals" (PDF). Proceedings of IFIP Working Conference on Formal Description of Programming Concepts. Archived (PDF) from the original on April 12, 2019.
V. Stoltenberg-Hansen; I. Lindstrom; E. R. Griffor (1994). Mathematical Theory of Domains. Cambridge University Press. ISBN 0-521-38344-7.
== External links ==
Introduction to Domain Theory by Graham Hutton, University of Nottingham | Wikipedia/Domain_theory |
In the mathematical area of order theory, every partially ordered set P gives rise to a dual (or opposite) partially ordered set which is often denoted by Pop or Pd. This dual order Pop is defined to be the same set, but with the inverse order, i.e. x ≤ y holds in Pop if and only if y ≤ x holds in P. It is easy to see that this construction, which can be depicted by flipping the Hasse diagram for P upside down, will indeed yield a partially ordered set. In a broader sense, two partially ordered sets are also said to be duals if they are dually isomorphic, i.e. if one poset is order isomorphic to the dual of the other.
The importance of this simple definition stems from the fact that every definition and theorem of order theory can readily be transferred to the dual order. Formally, this is captured by the Duality Principle for ordered sets:
If a given statement is valid for all partially ordered sets, then its dual statement, obtained by inverting the direction of all order relations and by dualizing all order theoretic definitions involved, is also valid for all partially ordered sets.
If a statement or definition is equivalent to its dual then it is said to be self-dual. Note that the consideration of dual orders is so fundamental that it often occurs implicitly when writing ≥ for the dual order of ≤ without giving any prior definition of this "new" symbol.
== Examples ==
Naturally, there are a great number of examples for concepts that are dual:
Greatest elements and least elements
Maximal elements and minimal elements
Least upper bounds (suprema, ∨) and greatest lower bounds (infima, ∧)
Upper sets and lower sets
Ideals and filters
Closure operators and kernel operators.
Examples of notions which are self-dual include:
Being a (complete) lattice
Monotonicity of functions
Distributivity of lattices, i.e. the lattices for which ∀x,y,z: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) holds are exactly those for which the dual statement ∀x,y,z: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z) holds
Being a Boolean algebra
Being an order isomorphism.
Since partial orders are antisymmetric, the only partial orders that coincide with their duals are the equality relations (such an order must be both symmetric and antisymmetric); the notion of partial order, however, is itself self-dual.
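To make dualization concrete, here is a minimal Python sketch on a hypothetical three-element poset, encoded as a set of pairs (x, y) meaning x ≤ y; the dual simply swaps each pair, and a least element of P becomes a greatest element of Pop.

P = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}  # hypothetical poset: 0 below 1 and 2

def dual(order):
    """The dual order P^op: x <= y in P^op iff y <= x in P."""
    return {(y, x) for (x, y) in order}

def least(elems, order):
    """Elements below every element, i.e. least elements (there is at most one)."""
    return [m for m in elems if all((m, x) in order for x in elems)]

elems = {0, 1, 2}
print(least(elems, P))        # [0]  -- 0 is the least element of P
print(least(elems, dual(P)))  # []   -- P^op has no least element; 0 is now greatest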
== See also ==
Converse relation
List of Boolean algebra topics
Transpose graph
Duality in category theory, of which duality in order theory is a special case
== References ==
Davey, B.A.; Priestley, H. A. (2002), Introduction to Lattices and Order (2nd ed.), Cambridge University Press, ISBN 978-0-521-78451-1
In general topology, an Alexandrov topology is a topology in which the intersection of an arbitrary family of open sets is open (while the definition of a topology only requires this for a finite family). Equivalently, an Alexandrov topology is one whose open sets are the upper sets for some preorder on the space.
Spaces with an Alexandrov topology are also known as Alexandrov-discrete spaces or finitely generated spaces. The latter name stems from the fact that their topology is uniquely determined by the family of all finite subspaces. This makes them a generalization of finite topological spaces.
Alexandrov-discrete spaces are named after the Russian topologist Pavel Alexandrov. They should not be confused with Alexandrov spaces from Riemannian geometry introduced by the Russian mathematician Aleksandr Danilovich Aleksandrov.
== Characterizations of Alexandrov topologies ==
Alexandrov topologies have numerous characterizations. In a topological space X, the following conditions are equivalent:
Open and closed set characterizations:
An arbitrary intersection of open sets is open.
An arbitrary union of closed sets is closed.
Neighbourhood characterizations:
Every point has a smallest neighbourhood.
The neighbourhood filter of every point is closed under arbitrary intersections.
Interior and closure algebraic characterizations:
The interior operator distributes over arbitrary intersections of subsets.
The closure operator distributes over arbitrary unions of subsets.
Preorder characterizations:
The topology is the finest topology among topologies on X with the same specialization preorder.
The open sets are precisely the upper sets for some preorder on X.
Finite generation and category theoretic characterizations:
The closure of a subset is the union of the closures of its finite subsets (and thus also the union of the closures of its singleton subsets).
The topology is coherent with the finite subspaces of X.
The inclusion maps of the finite subspaces of X form a final sink.
X is finitely generated, i.e., it is in the final hull of its finite spaces. (This means that there is a final sink f_i : X_i → X where each X_i is a finite topological space.)
== Correspondence with preordered sets ==
An Alexandrov topology is canonically associated to a preordered set by taking the open sets to be the upper sets. Conversely, the preordered set can be recovered from the Alexandrov topology as its specialization preorder. (We use the convention that the specialization preorder is defined by x ≤ y whenever x ∈ cl{y}, that is, when every open set that contains x also contains y, to match our convention that the open sets in the Alexandrov topology are the upper sets rather than the lower sets; the opposite convention also exists.)
The following dictionary holds between order-theoretic notions and topological notions:
Open sets are upper sets,
Closed sets are lower sets,
The interior of a subset S is the set of elements x ∈ S such that y ∈ S whenever x ≤ y.
The closure of a subset is its lower closure.
A map f : X → Y between two spaces with Alexandrov topologies is continuous if and only if it is order preserving as a function between the underlying preordered sets.
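The dictionary can be checked mechanically on small examples. Below is a minimal Python sketch on a hypothetical three-point preorder: it builds the Alexandrov topology as the family of upper sets and then recovers the specialization preorder (x ≤ y exactly when every open set containing x also contains y).

from itertools import combinations

X = {0, 1, 2}
le = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}  # hypothetical preorder: 0 below 1 and 2

def is_upper(s):
    """A set is upper iff it contains everything above each of its points."""
    return all(y in s for x in s for (a, y) in le if a == x)

subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]
opens = [s for s in subsets if is_upper(s)]  # the Alexandrov topology

# recover the specialization preorder from the topology
spec = {(x, y) for x in X for y in X
        if all(y in u for u in opens if x in u)}
print(spec == le)  # True: the preorder and the topology determine each other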
From the point of view of category theory, let Top denote the category of topological spaces consisting of topological spaces with continuous maps as morphisms. Let Alex denote its full subcategory consisting of Alexandrov-discrete spaces. Let PreOrd denote the category of preordered sets consisting of preordered sets with order preserving functions as morphisms. The correspondence above is an isomorphism of categories between Alex and PreOrd.
Furthermore, the functor A : PreOrd → Top that sends a preordered set to its associated Alexandrov-discrete space is fully faithful and left adjoint to the specialization preorder functor S : Top → PreOrd, making Alex a coreflective subcategory of Top. Moreover, the coreflection morphisms A(S(X)) → X, whose underlying maps are the identities (but with different topologies at the source and target), are bijective continuous maps, thus bimorphisms.
== Properties ==
A subspace of an Alexandrov-discrete space is Alexandrov-discrete. So is a quotient of an Alexandrov-discrete space (because inverse images are compatible with arbitrary unions and intersections).
The product of two Alexandrov-discrete spaces is Alexandrov-discrete.
More generally, the box product of an arbitrary number of Alexandrov-discrete spaces is Alexandrov-discrete.
Every Alexandrov topology is first countable (since every point has a smallest neighborhood).
Every Alexandrov topology is locally compact in the sense that every point has a local base of compact neighbourhoods, since the smallest neighbourhood of a point is always compact. Indeed, if U is the smallest (open) neighbourhood of a point x, then any open cover of U (with its subspace topology) contains a neighbourhood of x included in U. Such a neighbourhood is necessarily equal to U, so the open cover admits {U} as a finite subcover.
Every Alexandrov topology is locally path connected.
Considering the interior operator and closure operator to be modal operators on the power set Boolean algebra of an Alexandrov-discrete space, their construction is a special case of the construction of a modal algebra from a modal frame, i.e., from a set with a single binary relation. (The latter construction is itself a special case of a more general construction of a complex algebra from a relational structure, i.e., a set with relations defined on it.) The class of modal algebras that we obtain in the case of a preordered set is the class of interior algebras, the algebraic abstractions of topological spaces.
== History ==
Alexandrov spaces were first introduced in 1937 by P. S. Alexandrov under the name discrete spaces, where he provided the characterizations in terms of sets and neighbourhoods. The name discrete spaces later came to be used for topological spaces in which every subset is open, and the original concept lay forgotten in the topological literature. On the other hand, Alexandrov spaces played a relevant role in Øystein Ore's pioneering studies on closure systems and their relationships with lattice theory and topology.
With the advancement of categorical topology in the 1980s, Alexandrov spaces were rediscovered when the concept of finite generation was applied to general topology and the name finitely generated spaces was adopted for them. Alexandrov spaces were also rediscovered around the same time in the context of topologies resulting from denotational semantics and domain theory in computer science.
In 1966 Michael C. McCord and A. K. Steiner each independently observed an equivalence between partially ordered sets and spaces that were precisely the T0 versions of the spaces that Alexandrov had introduced. P. T. Johnstone referred to such topologies as Alexandrov topologies. F. G. Arenas independently proposed this name for the general version of these topologies. McCord also showed that these spaces are weak homotopy equivalent to the order complex of the corresponding partially ordered set. Steiner demonstrated that the equivalence is a contravariant lattice isomorphism preserving arbitrary meets and joins as well as complementation.
It was also a well-known result in the field of modal logic that an equivalence exists between finite topological spaces and preorders on finite sets (the finite modal frames for the modal logic S4). A. Grzegorczyk observed that this extended to an equivalence between what he referred to as totally distributive spaces and preorders. C. Naturman observed that these spaces were the Alexandrov-discrete spaces and extended the result to a category-theoretic equivalence between the category of Alexandrov-discrete spaces and (open) continuous maps, and the category of preorders and (bounded) monotone maps, providing the preorder characterizations as well as the interior and closure algebraic characterizations.
A systematic investigation of these spaces from the point of view of general topology, which had been neglected since the original paper by Alexandrov, was taken up by F. G. Arenas.
== See also ==
P-space, a space satisfying the weaker condition that countable intersections of open sets are open
== References ==
In mathematics, specifically in order theory and functional analysis, the order topology of an ordered vector space (X, ≤) is the finest locally convex topological vector space (TVS) topology on X for which every order interval is bounded, where an order interval in X is a set of the form [a, b] := {z ∈ X : a ≤ z and z ≤ b} where a and b belong to X.
The order topology is an important topology that is used frequently in the theory of ordered topological vector spaces because the topology stems directly from the algebraic and order theoretic properties of (X, ≤), rather than from some topology that X starts out having. This allows for establishing intimate connections between this topology and the algebraic and order theoretic properties of (X, ≤). For many ordered topological vector spaces that occur in analysis, their topologies are identical to the order topology.
== Definitions ==
The family of all locally convex topologies on X for which every order interval is bounded is non-empty (since it contains the coarsest possible topology on X) and the order topology is the upper bound of this family.
A subset of X is a neighborhood of the origin in the order topology if and only if it is convex and absorbs every order interval in X.
A neighborhood of the origin in the order topology is necessarily an absorbing set because [x, x] := {x} for all x ∈ X.
For every a ≥ 0, let X_a = ⋃_{n=1}^∞ n[−a, a] and endow X_a with its order topology (which makes it into a normable space). The set of all X_a's is directed under inclusion, and if X_a ⊆ X_b then the natural inclusion of X_a into X_b is continuous.
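As a concrete illustration (a standard example, not taken from the sources cited here), take X = ℝ² with the coordinatewise order and a = (1, 1). Then
[−a, a] = {(z₁, z₂) ∈ ℝ² : −1 ≤ z₁ ≤ 1, −1 ≤ z₂ ≤ 1} and X_a = ⋃_{n≥1} n[−a, a] = ℝ²,
so X_a is all of X precisely because a is an order unit, and the Minkowski functional of [−a, a] (here the max-norm ‖z‖_∞) is the norm that makes X_a normable.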
If X is a regularly ordered vector space over the reals and if H is any subset of the positive cone C of X that is cofinal in C (e.g. H could be C), then X with its order topology is the inductive limit of {X_a : a ≥ 0} (where the bonding maps are the natural inclusions).
The lattice structure can compensate in part for any lack of an order unit:
In particular, if (X, τ) is an ordered Fréchet lattice over the real numbers then τ is the order topology on X if and only if the positive cone of X is a normal cone in (X, τ).
If X is a regularly ordered vector lattice then the order topology is the finest locally convex TVS topology on X making X into a locally convex vector lattice. If in addition X is order complete then X with the order topology is a barreled space and every band decomposition of X is a topological direct sum for this topology.
In particular, if the order of a vector lattice X is regular then the order topology is generated by the family of all lattice seminorms on X.
== Properties ==
Throughout, (X, ≤) will be an ordered vector space and τ_≤ will denote the order topology on X.
The dual of (X, τ_≤) is the order bound dual X_b of X.
If X_b separates points in X (such as if (X, ≤) is regular) then (X, τ_≤) is a bornological locally convex TVS.
Each positive linear operator between two ordered vector spaces is continuous for the respective order topologies.
Each order unit of an ordered TVS is interior to the positive cone for the order topology.
If the order of an ordered vector space X is a regular order and if each positive sequence of type ℓ¹ in X is order summable, then X endowed with its order topology is a barreled space.
If the order of an ordered vector space X is a regular order and if for all x ≥ 0 and y ≥ 0 the identity [0, x] + [0, y] = [0, x + y] holds, then the positive cone of X is a normal cone in X when X is endowed with the order topology. In particular, the continuous dual space of X with the order topology will be the order dual X^+.
If (X, ≤) is an Archimedean ordered vector space over the real numbers having an order unit, and τ_≤ denotes the order topology on X, then (X, τ_≤) is an ordered TVS that is normable, τ_≤ is the finest locally convex TVS topology on X such that the positive cone is normal, and the following are equivalent:
(X, τ_≤) is complete.
Each positive sequence of type ℓ¹ in X is order summable.
In particular, if (X, ≤) is an Archimedean ordered vector space having an order unit then the order ≤ is a regular order and X_b = X^+.
If X is a Banach space and an ordered vector space with an order unit then the topology of X is identical to the order topology if and only if the positive cone of X is a normal cone in X.
A vector lattice homomorphism from X into Y is a topological homomorphism when X and Y are given their respective order topologies.
== Relation to subspaces, quotients, and products ==
If M is a solid vector subspace of a vector lattice X, then the order topology of X/M is the quotient of the order topology on X.
== Examples ==
The order topology of a finite product of ordered vector spaces (this product having its canonical order) is identical to the product topology of the topological product of the constituent ordered vector spaces (when each is given its order topology).
== See also ==
Generalised metric – Metric geometry
Order topology – Certain topology in mathematics
Ordered topological vector space
Ordered vector space – Vector space with a partial order
Vector lattice – Partially ordered vector space, ordered as a lattice
== References ==
== Bibliography ==
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
In mathematics, particularly in functional analysis, the closed graph theorem is a result connecting the continuity of a linear operator to a topological property of its graph. Precisely, the theorem states that a linear operator between two Banach spaces is continuous if and only if the graph of the operator is closed (such an operator is called a closed linear operator; see also closed graph property).
An important question in functional analysis is whether a given linear operator is continuous (or bounded). The closed graph theorem gives one answer to that question.
== Explanation ==
Let T : X → Y be a linear operator between Banach spaces (or more generally Fréchet spaces). Then the continuity of T means that Tx_i → Tx for each convergent sequence x_i → x. On the other hand, the closedness of the graph of T means that for each convergent sequence x_i → x such that Tx_i → y, we have y = Tx. Hence, the closed graph theorem says that in order to check the continuity of T, one can show Tx_i → Tx under the additional assumption that Tx_i is convergent.
In fact, for the graph of T to be closed, it is enough that if x_i → 0 and Tx_i → y, then y = 0. Indeed, assuming that condition holds, if (x_i, Tx_i) → (x, y), then x_i − x → 0 and T(x_i − x) → y − Tx. Thus, y = Tx; i.e., (x, y) is in the graph of T.
Note that to check the closedness of a graph, it is not even necessary to use the norm topology: if the graph of T is closed in some topology coarser than the norm topology, then it is closed in the norm topology. In practice, this works as follows: T is some operator on some function space. One shows T is continuous with respect to the distribution topology; thus, the graph is closed in that topology, which implies closedness in the norm topology, and then T is bounded by the closed graph theorem (when the theorem applies). See § Example for an explicit example.
== Statement ==
The usual proof of the closed graph theorem employs the open mapping theorem. It simply uses a general recipe of obtaining the closed graph theorem from the open mapping theorem; see closed graph theorem § Relation to the open mapping theorem. (This deduction is formal and does not itself use linearity; linearity is needed only to appeal to the open mapping theorem, which relies on it.)
In fact, the open mapping theorem can in turn be deduced from the closed graph theorem as follows. As noted in Open mapping theorem (functional analysis) § Statement and proof, it is enough to prove the open mapping theorem for a continuous linear operator that is bijective (not just surjective). Let T be such an operator. Then by continuity, the graph Γ_T of T is closed. Then Γ_T ≃ Γ_{T⁻¹} under (x, y) ↦ (y, x). Hence, by the closed graph theorem, T⁻¹ is continuous; i.e., T is an open mapping.
Since the closed graph theorem is equivalent to the open mapping theorem, one knows that the theorem fails without the completeness assumption. But more concretely, an operator with closed graph that is not bounded (see unbounded operator) exists and thus serves as a counterexample.
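A standard instance (a classical example, not spelled out in the surrounding text) is the differentiation operator
D : C¹([0,1]) ⊂ C([0,1]) → C([0,1]), Df = f′,
where both spaces carry the sup norm. Its graph is closed: if f_n → f and f_n′ → g uniformly, then f is differentiable with f′ = g. Yet D is unbounded, since ‖xⁿ‖_∞ = 1 while ‖D(xⁿ)‖_∞ = n. There is no contradiction, because the domain C¹([0,1]) is not complete in the sup norm.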
== Example ==
The Hausdorff–Young inequality says that the Fourier transformation ˆ· : L^p(ℝ^n) → L^{p′}(ℝ^n) is a well-defined bounded operator with operator norm one when 1/p + 1/p′ = 1. This result is usually proved using the Riesz–Thorin interpolation theorem and is highly nontrivial. The closed graph theorem can be used to prove a soft version of this result; i.e., that the Fourier transformation is a bounded operator, with unknown operator norm.
Here is how the argument would go. Let T denote the Fourier transformation. First we show T : L^p → Z is a continuous linear operator for Z = the space of tempered distributions on ℝ^n. Second, we note that T maps the space of Schwartz functions to itself (in short, because smoothness and rapid decay transform to rapid decay and smoothness, respectively). This implies that the graph of T is contained in L^p × L^{p′} and T : L^p → L^{p′} is defined but with unknown bounds. Since T : L^p → Z is continuous, the graph of T : L^p → L^{p′} is closed in the distribution topology, and thus in the norm topology. Finally, by the closed graph theorem, T : L^p → L^{p′} is a bounded operator.
== Generalization ==
=== Complete metrizable codomain ===
The closed graph theorem can be generalized from Banach spaces to more abstract topological vector spaces in the following ways.
==== Between F-spaces ====
There are versions that do not require Y to be locally convex.
This theorem can be restated and extended with conditions that can be used to determine whether a graph is closed.
=== Complete pseudometrizable codomain ===
Every metrizable topological space is pseudometrizable. A pseudometrizable space is metrizable if and only if it is Hausdorff.
=== Codomain not complete or (pseudo) metrizable ===
An even more general version of the closed graph theorem is the following.
== Borel graph theorem ==
The Borel graph theorem, proved by L. Schwartz, shows that the closed graph theorem is valid for linear maps defined on and valued in most spaces encountered in analysis.
Recall that a topological space is called a Polish space if it is a separable complete metrizable space and that a Souslin space is the continuous image of a Polish space. The weak dual of a separable Fréchet space and the strong dual of a separable Fréchet-Montel space are Souslin spaces. Also, the space of distributions and all Lp-spaces over open subsets of Euclidean space as well as many other spaces that occur in analysis are Souslin spaces.
The Borel graph theorem states that if u : X → Y is a linear map between two locally convex Hausdorff spaces, X is the inductive limit of an arbitrary family of Banach spaces, Y is a Souslin space, and the graph of u is a Borel set in X × Y, then u is continuous.
An improvement upon this theorem, proved by A. Martineau, uses K-analytic spaces.
A topological space X is called a K_{σδ} space if it is the countable intersection of countable unions of compact sets.
A Hausdorff topological space Y is called K-analytic if it is the continuous image of a K_{σδ} space (that is, if there is a K_{σδ} space X and a continuous map of X onto Y).
Every compact set is K-analytic, so there are non-separable K-analytic spaces. Also, every Polish, Souslin, and reflexive Fréchet space is K-analytic, as is the weak dual of a Fréchet space.
The generalized Borel graph theorem replaces the hypothesis that the codomain is Souslin with the weaker hypothesis that it is K-analytic.
== Related results ==
If F : X → Y is a closed linear operator from a Hausdorff locally convex TVS X into a Hausdorff finite-dimensional TVS Y, then F is continuous.
== See also ==
Almost open linear map – Map that satisfies a condition similar to that of being an open map.
Barrelled space – Type of topological vector space
Closed graph – Graph of a map closed in the product space
Closed linear operator – Linear operator whose graph is closed
Densely defined operator – Function that is defined almost everywhere
Discontinuous linear map
Kakutani fixed-point theorem – Fixed-point theorem for set-valued functions
Open mapping theorem (functional analysis) – Condition for a linear operator to be open
Ursescu theorem – Generalization of closed graph, open mapping, and uniform boundedness theorem
Webbed space – Space where open mapping and closed graph theorems hold
== References ==
Notes
== Bibliography ==
Adasch, Norbert; Ernst, Bruno; Keim, Dieter (1978). Topological Vector Spaces: The Theory Without Convexity Conditions. Lecture Notes in Mathematics. Vol. 639. Berlin New York: Springer-Verlag. ISBN 978-3-540-08662-8. OCLC 297140003.
Banach, Stefan (1932). Théorie des Opérations Linéaires [Theory of Linear Operations] (PDF). Monografie Matematyczne (in French). Vol. 1. Warszawa: Subwencji Funduszu Kultury Narodowej. Zbl 0005.20901. Archived from the original (PDF) on 2014-01-11. Retrieved 2020-07-11.
Berberian, Sterling K. (1974). Lectures in Functional Analysis and Operator Theory. Graduate Texts in Mathematics. Vol. 15. New York: Springer. ISBN 978-0-387-90081-0. OCLC 878109401.
Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190.
Conway, John (1990). A course in functional analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908.
Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138.
Dolecki, Szymon; Mynard, Frédéric (2016). Convergence Foundations Of Topology. New Jersey: World Scientific Publishing Company. ISBN 978-981-4571-52-4. OCLC 945169917.
Dubinsky, Ed (1979). The Structure of Nuclear Fréchet Spaces. Lecture Notes in Mathematics. Vol. 720. Berlin New York: Springer-Verlag. ISBN 978-3-540-09504-0. OCLC 5126156.
Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098.
Husain, Taqdir; Khaleelulla, S. M. (1978). Barrelledness in Topological and Ordered Vector Spaces. Lecture Notes in Mathematics. Vol. 692. Berlin, New York, Heidelberg: Springer-Verlag. ISBN 978-3-540-09096-0. OCLC 4493665.
Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342.
Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704.
Kriegl, Andreas; Michor, Peter W. (1997). The Convenient Setting of Global Analysis (PDF). Mathematical Surveys and Monographs. Vol. 53. Providence, R.I: American Mathematical Society. ISBN 978-0-8218-0780-4. OCLC 37141279.
Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Robertson, Alex P.; Robertson, Wendy J. (1980). Topological Vector Spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge England: Cambridge University Press. ISBN 978-0-521-29882-7. OCLC 589250.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067.
Tao, Terence, 245B, Notes 9: The Baire category theorem and its Banach space consequences
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Vogt, Dietmar (2000), Lectures on Fréchet spaces (PDF)
Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
"Proof of closed graph theorem". PlanetMath. | Wikipedia/Closed_graph_theorem_(functional_analysis) |
In mathematics, specifically functional analysis, a Banach space is said to have the approximation property (AP) if every compact operator is a limit of finite-rank operators; the converse, that every limit of finite-rank operators is compact, is always true.
Every Hilbert space has this property. There are, however, Banach spaces which do not; Per Enflo published the first counterexample in a 1973 article. Much earlier work in this area was done by Grothendieck (1955).
Later many other counterexamples were found. The space L(H) of bounded operators on an infinite-dimensional Hilbert space H does not have the approximation property. The spaces ℓ^p for p ≠ 2 and c₀ (see Sequence space) have closed subspaces that do not have the approximation property.
== Definition ==
A locally convex topological vector space X is said to have the approximation property if the identity map can be approximated, uniformly on precompact sets, by continuous linear maps of finite rank.
For a locally convex space X, the following are equivalent:
X has the approximation property;
the closure of X′ ⊗ X in L_p(X, X) contains the identity map Id : X → X;
X′ ⊗ X is dense in L_p(X, X);
for every locally convex space Y, X′ ⊗ Y is dense in L_p(X, Y);
for every locally convex space Y, Y′ ⊗ X is dense in L_p(Y, X);
where L_p(X, Y) denotes the space of continuous linear operators from X to Y endowed with the topology of uniform convergence on pre-compact subsets of X.
If X is a Banach space this requirement becomes that for every compact set K ⊂ X and every ε > 0, there is an operator T : X → X of finite rank so that ‖Tx − x‖ ≤ ε for every x ∈ K.
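For intuition, here is a minimal numerical sketch in Python, using ℝ^N as a stand-in for ℓ² and a hypothetical two-element "compact set" K; the truncation projections P_n are finite-rank operators, and the script finds a rank that approximates the identity on K within the tolerance eps. All names and data are illustrative assumptions, not part of the definition above.

import numpy as np

N, eps = 1000, 1e-3
K = [np.array([2.0 ** -k for k in range(N)]),          # rapidly decaying vector
     np.array([1.0 / (k + 1) ** 2 for k in range(N)])]  # slowly decaying vector

def truncation_error(x, n):
    """l2 distance between x and its truncation P_n x (first n coordinates kept)."""
    return np.linalg.norm(x[n:])

n = next(n for n in range(1, N)
         if all(truncation_error(x, n) <= eps for x in K))
print(n)  # smallest rank n with ||P_n x - x|| <= eps for every x in K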
== Related definitions ==
Some other flavours of the AP are studied:
Let X be a Banach space and let 1 ≤ λ < ∞. We say that X has the λ-approximation property (λ-AP) if, for every compact set K ⊂ X and every ε > 0, there is an operator T : X → X of finite rank so that ‖Tx − x‖ ≤ ε for every x ∈ K, and ‖T‖ ≤ λ.
A Banach space is said to have the bounded approximation property (BAP) if it has the λ-AP for some λ.
A Banach space is said to have the metric approximation property (MAP) if it has the 1-AP.
A Banach space is said to have the compact approximation property (CAP) if, in the definition of the AP, operators of finite rank are replaced with compact operators.
== Examples ==
Every subspace of an arbitrary product of Hilbert spaces possesses the approximation property. In particular,
every Hilbert space has the approximation property.
every projective limit of Hilbert spaces, as well as any subspace of such a projective limit, possesses the approximation property.
every nuclear space possesses the approximation property.
Every separable Fréchet space that contains a Schauder basis possesses the approximation property.
Every space with a Schauder basis has the AP (we can use the projections associated to the basis as the T's in the definition); thus many spaces with the AP can be found, for example the ℓ^p spaces, or the symmetric Tsirelson space.
== References ==
== Bibliography ==
Bartle, R. G. (1977). "MR0402468 (53 #6288) (Review of Per Enflo's "A counterexample to the approximation problem in Banach spaces" Acta Mathematica 130 (1973), 309–317)". Mathematical Reviews. MR 0402468.
Enflo, P.: A counterexample to the approximation property in Banach spaces. Acta Math. 130, 309–317 (1973).
Grothendieck, A.: Produits tensoriels topologiques et espaces nucleaires. Memo. Amer. Math. Soc. 16 (1955).
Halmos, Paul R. (1978). "Schauder bases". American Mathematical Monthly. 85 (4): 256–257. doi:10.2307/2321165. JSTOR 2321165. MR 0488901.
Paul R. Halmos, "Has progress in mathematics slowed down?" Amer. Math. Monthly 97 (1990), no. 7, 561—588. MR1066321
William B. Johnson "Complementably universal separable Banach spaces" in Robert G. Bartle (ed.), 1980 Studies in functional analysis, Mathematical Association of America.
Kwapień, S. "On Enflo's example of a Banach space without the approximation property". Séminaire Goulaouic–Schwartz 1972—1973: Équations aux dérivées partielles et analyse fonctionnelle, Exp. No. 8, 9 pp. Centre de Math., École Polytech., Paris, 1973. MR407569
Lindenstrauss, J.; Tzafriri, L.: Classical Banach Spaces I, Sequence spaces, 1977.
Nedevski, P.; Trojanski, S. (1973). "P. Enflo solved in the negative Banach's problem on the existence of a basis for every separable Banach space". Fiz.-Mat. Spis. Bulgar. Akad. Nauk. 16 (49): 134–138. MR 0458132.
Pietsch, Albrecht (2007). History of Banach spaces and linear operators. Boston, MA: Birkhäuser Boston, Inc. pp. xxiv+855 pp. ISBN 978-0-8176-4367-6. MR 2300779.
Karen Saxe, Beginning Functional Analysis, Undergraduate Texts in Mathematics, 2002 Springer-Verlag, New York.
Schaefer, Helmut H.; Wolff, M.P. (1999). Topological Vector Spaces. GTM. Vol. 3. New York: Springer-Verlag. ISBN 9780387987262.
Singer, Ivan. Bases in Banach spaces. II. Editura Academiei Republicii Socialiste România, Bucharest; Springer-Verlag, Berlin-New York, 1981. viii+880 pp. ISBN 3-540-10394-5. MR610799
In mathematics, Choquet theory, named after Gustave Choquet, is an area of functional analysis and convex analysis concerned with measures which have support on the extreme points of a convex set C. Roughly speaking, every vector of C should appear as a weighted average of extreme points, a concept made more precise by generalizing the notion of weighted average from a convex combination to an integral taken over the set E of extreme points. Here C is a subset of a real vector space V, and the main thrust of the theory is to treat the cases where V is an infinite-dimensional (locally convex Hausdorff) topological vector space along lines similar to the finite-dimensional case. The main concerns of Gustave Choquet were in potential theory. Choquet theory has become a general paradigm, particularly for treating convex cones as determined by their extreme rays, and so for many different notions of positivity in mathematics.
The two ends of a line segment determine the points in between: in vector terms the segment from v to w consists of the λv + (1 − λ)w with 0 ≤ λ ≤ 1. The classical result of Hermann Minkowski says that in Euclidean space, a bounded, closed convex set C is the convex hull of its extreme point set E, so that any c in C is a (finite) convex combination of points e of E. Here E may be a finite or an infinite set. In vector terms, by assigning non-negative weights w(e), almost all of them 0, to the points e in E, we can represent any c in C as
c = ∑_{e∈E} w(e) e with ∑_{e∈E} w(e) = 1.
In any case the w(e) give a probability measure supported on a finite subset of E. For any affine function f on C, its value at the point c is
f(c) = ∫ f(e) dw(e).
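Minkowski's finite-dimensional statement can be computed directly. Here is a minimal Python sketch with hypothetical data: the extreme points are the vertices of a triangle (a compact convex set), and the weights representing an interior point are found by solving a small linear system.

import numpy as np

E = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # extreme points (vertices)
c = np.array([0.25, 0.5])                            # a point of the triangle C

# solve sum_i w_i e_i = c together with sum_i w_i = 1
A = np.vstack([E.T, np.ones(3)])
w = np.linalg.solve(A, np.append(c, 1.0))
print(w, w.sum())  # [0.25 0.25 0.5] 1.0 -- non-negative weights summing to 1

Because the triangle is a simplex, these weights are unique, in line with the uniqueness discussion below.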
In the infinite dimensional setting, one would like to make a similar statement.
== Choquet's theorem ==
Choquet's theorem states that for a compact convex subset C of a normed space V, given c in C there exists a probability measure w supported on the set E of extreme points of C such that, for any affine function f on C,
f(c) = ∫ f(e) dw(e).
In practice V will be a Banach space. The original Krein–Milman theorem follows from Choquet's result. Another corollary is the Riesz representation theorem for states on the continuous functions on a metrizable compact Hausdorff space.
More generally, for V a locally convex topological vector space, the Choquet–Bishop–de Leeuw theorem gives the same formal statement.
In addition to the existence of a probability measure supported on the extreme boundary that represents a given point c, one might also consider the uniqueness of such measures. It is easy to see that uniqueness does not hold even in the finite dimensional setting. One can take, for counterexamples, the convex set to be a cube or a ball in R3. Uniqueness does hold, however, when the convex set is a finite dimensional simplex. A finite dimensional simplex is a special case of a Choquet simplex. Any point in a Choquet simplex is represented by a unique probability measure on the extreme points.
== See also ==
Carathéodory's theorem – A point in the convex hull of a set P in R^d is a convex combination of d+1 points in P
Helly's theorem – Theorem about the intersections of d-dimensional convex sets
Krein–Milman theorem – On when a space equals the closed convex hull of its extreme points
List of convexity topics
Shapley–Folkman lemma – Sums of sets of vectors are nearly convex
== Notes ==
== References ==
Asimow, L.; Ellis, A. J. (1980). Convexity theory and its applications in functional analysis. London Mathematical Society Monographs. Vol. 16. London-New York: Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers]. pp. x+266. ISBN 0-12-065340-0. MR 0623459.
Bourgin, Richard D. (1983). Geometric aspects of convex sets with the Radon-Nikodým property. Lecture Notes in Mathematics. Vol. 993. Berlin: Springer-Verlag. pp. xii+474. ISBN 3-540-12296-6. MR 0704815.
Phelps, Robert R. (2001). Lectures on Choquet's theorem. Lecture Notes in Mathematics. Vol. 1757 (Second edition of 1966 ed.). Berlin: Springer-Verlag. pp. viii+124. ISBN 3-540-41834-2. MR 1835574.
"Choquet simplex", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Choquet_theory |
In functional analysis, the open mapping theorem, also known as the Banach–Schauder theorem or the Banach theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result that states that if a bounded or continuous linear operator between Banach spaces is surjective then it is an open map.
A special case is also called the bounded inverse theorem (also called the inverse mapping theorem or Banach isomorphism theorem), which states that a bijective bounded linear operator T from one Banach space to another has bounded inverse T⁻¹.
== Statement and proof ==
The proof here uses the Baire category theorem, and completeness of both E and F is essential to the theorem. The statement of the theorem is no longer true if either space is assumed to be only a normed vector space; see § Counterexample.
The proof is based on the following lemmas, which are also somewhat of independent interest. A linear map f : E → F between topological vector spaces is said to be nearly open if, for each neighborhood U of zero, the closure cl f(U) contains a neighborhood of zero. The next lemma may be thought of as a weak version of the open mapping theorem.
Proof: Shrinking U, we can assume U is an open ball centered at zero. We have f(E) = f(⋃_{n∈ℕ} nU) = ⋃_{n∈ℕ} f(nU). Thus, some cl f(nU) contains an interior point y; that is, for some radius r > 0,
B(y, r) ⊂ cl f(nU).
Then for any v in F with ‖v‖ < r, by linearity, convexity and (−1)U ⊂ U,
v = v − y + y ∈ cl f(−nU) + cl f(nU) ⊂ cl f(2nU),
which proves the lemma by dividing by 2n. □
(The same proof works if E, F are pre-Fréchet spaces.)
The completeness of the domain then makes it possible to upgrade nearly open to open.
Proof: Let y be in B(0, δ) and let c_n > 0 be some sequence. We have cl B(0, δ) ⊂ cl f(B(0, 1)). Thus, for each ε > 0 and z in F, we can find an x with ‖x‖ < δ⁻¹‖z‖ and z in B(f(x), ε). Thus, taking z = y, we find an x₁ such that
‖y − f(x₁)‖ < c₁, ‖x₁‖ < δ⁻¹‖y‖.
Applying the same argument with z = y − f(x₁), we then find an x₂ such that
‖y − f(x₁) − f(x₂)‖ < c₂, ‖x₂‖ < δ⁻¹c₁,
where we observed ‖x₂‖ < δ⁻¹‖z‖ < δ⁻¹c₁; and so on. Thus, if c := ∑ c_n < ∞, we have found a sequence x_n such that x = ∑_{n=1}^∞ x_n converges and f(x) = y. Also,
‖x‖ ≤ ∑_{n=1}^∞ ‖x_n‖ ≤ δ⁻¹‖y‖ + δ⁻¹c.
Since δ⁻¹‖y‖ < 1, by making c small enough, we can achieve ‖x‖ < 1. □
(Again the same proof is valid if E, F are pre-Fréchet spaces.)
Proof of the theorem: By Baire's category theorem, the first lemma applies. Then the conclusion of the theorem follows from the second lemma. □
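The correction scheme in the second lemma can be mimicked numerically. Below is a minimal Python sketch under stated assumptions: T is a hypothetical invertible matrix, and approx_preimage stands in for near-openness by returning, for any target z, an approximate preimage whose residual is a small fraction of ‖z‖; summing the successive corrections then converges to an exact preimage, just as in the lemma.

import numpy as np

rng = np.random.default_rng(0)
T = np.array([[2.0, 1.0], [0.0, 3.0]])  # hypothetical operator

def approx_preimage(z):
    """Return x with T x close to z: an exact solve plus a small bounded error."""
    x = np.linalg.solve(T, z)
    return x + 1e-2 * np.linalg.norm(z) * rng.uniform(-1.0, 1.0, size=2)

def solve(y, steps=30):
    """Sum the corrections x_1 + x_2 + ... as in the proof of the lemma."""
    x, z = np.zeros(2), y.copy()
    for _ in range(steps):
        x += approx_preimage(z)   # correct by an approximate preimage of the residual
        z = y - T @ x             # the residual shrinks geometrically
    return x

y = np.array([1.0, 1.0])
print(np.linalg.norm(y - T @ solve(y)))  # essentially zero: f(x) = y in the limit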
In general, a continuous bijection between topological spaces is not necessarily a homeomorphism. The open mapping theorem, when it applies, implies that bijectivity is enough:
Even though the above bounded inverse theorem is a special case of the open mapping theorem, the open mapping theorem in turn follows from it. Indeed, a surjective continuous linear operator T : E → F factors as T = T₀ ∘ p, where p : E → E/ker T is the quotient map and T₀ : E/ker T → F is continuous and bijective, and thus is a homeomorphism by the bounded inverse theorem; in particular, T₀ is an open mapping. Since a quotient map for topological groups is open, T is open as well.
Because the open mapping theorem and the bounded inverse theorem are essentially the same result, they are often simply called Banach's theorem.
=== Transpose formulation ===
Here is a formulation of the open mapping theorem in terms of the transpose of an operator.
Proof: The idea of 1. ⇒ 2. is to show that y ∉ cl T(B_X) implies ‖y‖ > δ, and that follows from the Hahn–Banach theorem. 2. ⇒ 3. is exactly the second lemma in § Statement and proof. Finally, 3. ⇒ 4. is trivial and 4. ⇒ 1. easily follows from the open mapping theorem. □
Alternatively, 1. implies that T′ is injective and has closed image, and then, by the closed range theorem, T has dense image and closed image, respectively; i.e., T is surjective. Hence, the above result is a variant of a special case of the closed range theorem.
=== Quantitative formulation ===
Terence Tao gives the following quantitative formulation of the theorem:
The proof follows a cycle of implications 1 ⇒ 4 ⇒ 3 ⇒ 2 ⇒ 1. Here 2 ⇒ 1 is the usual open mapping theorem.
1 ⇒ 4: For some r > 0, we have B(0, 2) ⊂ T(B(0, r)), where B means an open ball. Then f/‖f‖ = T(u/‖f‖) for some u/‖f‖ in B(0, r). That is, Tu = f with ‖u‖ < r‖f‖.
4 ⇒ 3: We can write f = ∑_{j=0}^∞ f_j with f_j in the dense subspace and the sum converging in norm. Then, since E is complete, u = ∑_{j=0}^∞ u_j with ‖u_j‖ ≤ C‖f_j‖ and Tu_j = f_j gives a required solution.
Finally, 3 ⇒ 2 is trivial. □
== Counterexample ==
The open mapping theorem may not hold for normed spaces that are not complete. The quickest way to see this is to note that the closed graph theorem, a consequence of the open mapping theorem, fails without completeness. But here is a more concrete counterexample. Consider the space X of sequences x : ℕ → ℝ with only finitely many non-zero terms, equipped with the supremum norm. The map T : X → X defined by
Tx = (x₁, x₂/2, x₃/3, …)
is bounded, linear and invertible, but T−1 is unbounded.
This does not contradict the bounded inverse theorem since X is not complete, and thus is not a Banach space.
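The unboundedness of T⁻¹ is easy to see numerically. Here is a minimal Python sketch representing finitely supported sequences as arrays; on the standard unit vectors e_n (sup norm 1), the inverse multiplies the n-th entry by n, so ‖T⁻¹e_n‖ = n grows without bound.

import numpy as np

def T(x):
    return x / np.arange(1, len(x) + 1)      # divide the n-th entry by n

def T_inv(y):
    return y * np.arange(1, len(y) + 1)      # multiply the n-th entry by n

for n in [1, 10, 100]:
    e = np.zeros(n)
    e[-1] = 1.0                              # the unit vector e_n, sup norm 1
    print(n, np.max(np.abs(T_inv(e))))       # ||T^{-1} e_n||_sup = n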
To see that it is not complete, consider the sequence of sequences x⁽ⁿ⁾ ∈ X given by
x⁽ⁿ⁾ = (1, 1/2, …, 1/n, 0, 0, …).
It converges as n → ∞ to the sequence x⁽∞⁾ given by
x⁽∞⁾ = (1, 1/2, …, 1/n, …),
which has all its terms non-zero, and so does not lie in X.
The completion of X is the space c₀ of all sequences that converge to zero, which is a (closed) subspace of the ℓ^p space ℓ^∞(ℕ), the space of all bounded sequences.
However, in this case, the map T is not onto, and thus not a bijection. To see this, one need simply note that the sequence
x = (1, 1/2, 1/3, …)
is an element of c₀ but is not in the range of T : c₀ → c₀. The same reasoning shows that T is also not onto on ℓ^∞; for example, x = (1, 1, 1, …) is not in the range of T.
== Consequences ==
The open mapping theorem has several important consequences:
If T : X → Y is a bijective continuous linear operator between the Banach spaces X and Y, then the inverse operator T⁻¹ : Y → X is continuous as well (this is called the bounded inverse theorem).
If T : X → Y is a linear operator between the Banach spaces X and Y, and if for every sequence (x_n) in X with x_n → 0 and Tx_n → y it follows that y = 0, then T is continuous (the closed graph theorem).
Given a bounded operator T : E → F between normed spaces, if the image of T is non-meager and if E is complete, then T is open and surjective and F is complete (to see this, use the two lemmas in the proof of the theorem).
An exact sequence of Banach spaces (or more generally Fréchet spaces) is topologically exact.
The closed range theorem, which says an operator (under some assumption) has closed image if and only if its transpose has closed image (see closed range theorem#Sketch of proof).
The open mapping theorem does not imply that a continuous surjective linear operator admits a continuous linear section. What we have is:
A surjective continuous linear operator between Banach spaces admits a continuous linear section if and only if the kernel is topologically complemented.
In particular, the above applies to an operator between Hilbert spaces or an operator with finite-dimensional kernel (by the Hahn–Banach theorem). If one drops the requirement that a section be linear, a surjective continuous linear operator between Banach spaces admits a continuous section; this is the Bartle–Graves theorem.
== Generalizations ==
Local convexity of X or Y is not essential to the proof, but completeness is: the theorem remains true in the case when X and Y are F-spaces. Furthermore, the theorem can be combined with the Baire category theorem in the following manner:
(The proof is essentially the same as in the Banach or Fréchet cases; the proof is modified slightly to avoid the use of convexity.)
Furthermore, in this latter case if N is the kernel of A, then there is a canonical factorization of A in the form X → X/N → Y, where X/N is the quotient space (also an F-space) of X by the closed subspace N and the second map is denoted α. The quotient mapping X → X/N is open, and the mapping α is an isomorphism of topological vector spaces.
An important special case of this theorem can also be stated as
On the other hand, a more general formulation, which implies the first, can be given:
Nearly/Almost open linear maps
A linear map A : X → Y between two topological vector spaces (TVSs) is called a nearly open map (or sometimes, an almost open map) if for every neighborhood U of the origin in the domain, the closure of its image cl A(U) is a neighborhood of the origin in Y. Many authors use a different definition of "nearly/almost open map" that requires that the closure of A(U) be a neighborhood of the origin in A(X) rather than in Y, but for surjective maps these definitions are equivalent.
A bijective linear map is nearly open if and only if its inverse is continuous.
Every surjective linear map from a locally convex TVS onto a barrelled TVS is nearly open. The same is true of every surjective linear map from a TVS onto a Baire TVS.
Webbed spaces are a class of topological vector spaces for which the open mapping theorem and the closed graph theorem hold.
== See also ==
Closed graph – Graph of a map closed in the product space
Closed graph theorem – Theorem relating continuity to graphs
Closed graph theorem (functional analysis) – Theorems connecting continuity to closure of graphs
Open mapping theorem (complex analysis) – Theorem on holomorphic functions
Surjection of Fréchet spaces – Characterization of surjectivity
Ursescu theorem – Generalization of closed graph, open mapping, and uniform boundedness theorem
Webbed space – Space where open mapping and closed graph theorems hold
== References ==
== Bibliography ==
Adasch, Norbert; Ernst, Bruno; Keim, Dieter (1978). Topological Vector Spaces: The Theory Without Convexity Conditions. Lecture Notes in Mathematics. Vol. 639. Berlin New York: Springer-Verlag. ISBN 978-3-540-08662-8. OCLC 297140003.
Banach, Stefan (1932). Théorie des Opérations Linéaires [Theory of Linear Operations] (PDF). Monografie Matematyczne (in French). Vol. 1. Warszawa: Subwencji Funduszu Kultury Narodowej. Zbl 0005.20901. Archived from the original (PDF) on 2014-01-11. Retrieved 2020-07-11.
Berberian, Sterling K. (1974). Lectures in Functional Analysis and Operator Theory. Graduate Texts in Mathematics. Vol. 15. New York: Springer. ISBN 978-0-387-90081-0. OCLC 878109401.
Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190.
Conway, John (1990). A course in functional analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908.
Dieudonné, Jean (1970). Treatise on Analysis, Volume II. Academic Press.
Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138.
Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098.
Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342.
Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Robertson, Alex P.; Robertson, Wendy J. (1980). Topological Vector Spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge England: Cambridge University Press. ISBN 978-0-521-29882-7. OCLC 589250.
Rudin, Walter (1973). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 25 (First ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 9780070542259.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067.
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Vogt, Dietmar (2000). "Lectures on Fréchet spaces" (PDF). Bergische Universität Wuppertal.
Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
This article incorporates material from Proof of open mapping theorem on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
== Further reading ==
"When is a complex of Banach spaces exact as condensed abelian groups?". MathOverflow. February 6, 2021. | Wikipedia/Open_mapping_theorem_(functional_analysis) |
In mathematics, a von Neumann algebra or W*-algebra is a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. It is a special type of C*-algebra.
Von Neumann algebras were originally introduced by John von Neumann, motivated by his study of single operators, group representations, ergodic theory and quantum mechanics. His double commutant theorem shows that the analytic definition is equivalent to a purely algebraic definition as an algebra of symmetries.
Two basic examples of von Neumann algebras are as follows:
The ring $L^{\infty}(\mathbb{R})$ of essentially bounded measurable functions on the real line is a commutative von Neumann algebra, whose elements act as multiplication operators by pointwise multiplication on the Hilbert space $L^{2}(\mathbb{R})$ of square-integrable functions; a discrete sketch of this example follows the second example below.
The algebra $\mathcal{B}(\mathcal{H})$ of all bounded operators on a Hilbert space $\mathcal{H}$ is a von Neumann algebra, non-commutative if the Hilbert space has dimension at least $2$.
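A minimal discrete sketch of the first example (illustrative, not from the original): on a finite measure space, "essentially bounded functions" are just vectors, and they act on the discrete $L^{2}$ by pointwise multiplication, i.e. as diagonal matrices, forming a commutative algebra.

```python
import numpy as np

# Discrete analogue of L^inf acting on L^2: multiplication operators
# are diagonal matrices, and the algebra they form is commutative.
f = np.array([1.0, -2.0, 0.5])   # a bounded "function" on 3 points
g = np.array([0.0, 3.0, 1.0])
xi = np.array([1.0, 1.0, 2.0])   # a vector in the discrete L^2

M = lambda h: np.diag(h)         # the multiplication operator M_h

print(np.allclose(M(f) @ M(g), M(g) @ M(f)))  # True: commutative
print(M(f) @ xi)                 # pointwise product f*xi: [ 1. -2.  1.]
```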
Von Neumann algebras were first studied by von Neumann (1930) in 1929; he and Francis Murray developed the basic theory, under the original name of rings of operators, in a series of papers written in the 1930s and 1940s (F.J. Murray & J. von Neumann 1936, 1937, 1943; J. von Neumann 1938, 1940, 1943, 1949), reprinted in the collected works of von Neumann (1961).
Introductory accounts of von Neumann algebras are given in the online notes of Jones (2003) and Wassermann (1991) and the books by Dixmier (1981), Schwartz (1967), Blackadar (2005) and Sakai (1971). The three volume work by Takesaki (1979) gives an encyclopedic account of the theory. The book by Connes (1994) discusses more advanced topics.
== Definitions ==
There are three common ways to define von Neumann algebras.
The first and most common way is to define them as weakly closed *-algebras of bounded operators (on a Hilbert space) containing the identity. In this definition the weak (operator) topology can be replaced by many other common topologies including the strong, ultrastrong or ultraweak operator topologies. The *-algebras of bounded operators that are closed in the norm topology are C*-algebras, so in particular any von Neumann algebra is a C*-algebra.
The second definition is that a von Neumann algebra is a subalgebra of the bounded operators closed under involution (the *-operation) and equal to its double commutant, or equivalently the commutant of some subalgebra closed under *. The von Neumann double commutant theorem (von Neumann 1930) says that the first two definitions are equivalent.
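In finite dimensions every unital *-subalgebra of matrices is automatically weakly closed, so the double commutant theorem can be checked numerically. The sketch below is illustrative (the helper `commutant` is ours, not a library function); it computes commutants by solving $XA = AX$ for all generators $A$, using the fact that the commutant of a generating set equals the commutant of the *-algebra it generates.

```python
import numpy as np

def commutant(mats, n, tol=1e-10):
    """Basis of {X in M_n(C) : XA = AX for all A in mats}."""
    rows = []
    for a in mats:
        # Column-major vec identity: vec(AX - XA) = (I (x) A - A^T (x) I) vec(X)
        rows.append(np.kron(np.eye(n), a) - np.kron(a.T, np.eye(n)))
    m = np.vstack(rows)
    _, s, vh = np.linalg.svd(m)           # null space of m = solution space
    rank = int(np.sum(s > tol))
    return [vh[i].conj().reshape((n, n), order="F") for i in range(rank, n * n)]

n = 4
# Self-adjoint generators of M_2(C) (+) C*1_2 acting block-diagonally on C^4
e01 = np.zeros((n, n)); e01[0, 1] = 1.0
gens = [e01 + e01.T, np.diag([1.0, -1.0, 0.0, 0.0]), np.diag([0.0, 0.0, 1.0, 1.0])]

c1 = commutant(gens, n)   # M'  : expect C*1_2 (+) M_2(C), dimension 5
c2 = commutant(c1, n)     # M'' : expect M_2(C) (+) C*1_2, dimension 5
print(len(c1), len(c2))   # 5 5 -- M'' has the dimension of the generated algebra
```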
The first two definitions describe a von Neumann algebra concretely as a set of operators acting on some given Hilbert space. Sakai (1971) showed that von Neumann algebras can also be defined abstractly as C*-algebras that have a predual; in other words the von Neumann algebra, considered as a Banach space, is the dual of some other Banach space called the predual. The predual of a von Neumann algebra is in fact unique up to isomorphism. Some authors use "von Neumann algebra" for the algebras together with a Hilbert space action, and "W*-algebra" for the abstract concept, so a von Neumann algebra is a W*-algebra together with a Hilbert space and a suitable faithful unital action on the Hilbert space. The concrete and abstract definitions of a von Neumann algebra are similar to the concrete and abstract definitions of a C*-algebra, which can be defined either as norm-closed *-algebras of operators on a Hilbert space, or as Banach *-algebras such that
$\|aa^{*}\| = \|a\|\,\|a^{*}\|$.
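A quick numerical sanity check (illustrative) of this identity for the operator norm on matrices, where $\|a^{*}\| = \|a\|$:

```python
import numpy as np

# Check ||a a*|| = ||a|| ||a*|| for the spectral (operator) norm.
rng = np.random.default_rng(0)
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
op = lambda m: np.linalg.norm(m, 2)   # largest singular value

print(np.isclose(op(a @ a.conj().T), op(a) * op(a.conj().T)))  # True
```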
== Terminology ==
Some of the terminology in von Neumann algebra theory can be confusing, and the terms often have different meanings outside the subject.
A factor is a von Neumann algebra with trivial center, i.e. a center consisting only of scalar operators.
A finite von Neumann algebra is one which is the direct integral of finite factors (meaning the von Neumann algebra has a faithful normal tracial state $\tau : M \to \mathbb{C}$). Similarly, properly infinite von Neumann algebras are the direct integral of properly infinite factors.
A von Neumann algebra that acts on a separable Hilbert space is called separable. Note that such algebras are rarely separable in the norm topology.
The von Neumann algebra generated by a set of bounded operators on a Hilbert space is the smallest von Neumann algebra containing all those operators.
The tensor product of two von Neumann algebras acting on two Hilbert spaces is defined to be the von Neumann algebra generated by their algebraic tensor product, considered as operators on the Hilbert space tensor product of the Hilbert spaces.
By forgetting about the topology on a von Neumann algebra, we can consider it a (unital) *-algebra, or just a ring. Von Neumann algebras are semihereditary: every finitely generated submodule of a projective module is itself projective. There have been several attempts to axiomatize the underlying rings of von Neumann algebras, including Baer *-rings and AW*-algebras. The *-algebra of affiliated operators of a finite von Neumann algebra is a von Neumann regular ring. (The von Neumann algebra itself is in general not von Neumann regular.)
== Commutative von Neumann algebras ==
The relationship between commutative von Neumann algebras and measure spaces is analogous to that between commutative C*-algebras and locally compact Hausdorff spaces. Every commutative von Neumann algebra is isomorphic to L∞(X) for some measure space (X, μ) and conversely, for every σ-finite measure space X, the *-algebra L∞(X) is a von Neumann algebra.
Due to this analogy, the theory of von Neumann algebras has been called noncommutative measure theory, while the theory of C*-algebras is sometimes called noncommutative topology (Connes 1994).
== Projections ==
Operators E in a von Neumann algebra for which E = EE = E* are called projections; they are exactly the operators which give an orthogonal projection of H onto some closed subspace. A subspace of the Hilbert space H is said to belong to the von Neumann algebra M if it is the image of some projection in M. This establishes a 1:1 correspondence between projections of M and subspaces that belong to M. Informally these are the closed subspaces that can be described using elements of M, or that M "knows" about.
It can be shown that the closure of the image of any operator in M and the kernel of any operator in M belongs to M. Also, the closure of the image under an operator of M of any subspace belonging to M also belongs to M. (These results are a consequence of the polar decomposition).
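A small illustrative check of the defining relations E = EE = E*: the orthogonal projection onto the column space of a matrix $V$ is $E = V(V^{*}V)^{-1}V^{*}$.

```python
import numpy as np

# Orthogonal projection onto the range of V satisfies E = E^2 = E*.
rng = np.random.default_rng(1)
V = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
E = V @ np.linalg.inv(V.conj().T @ V) @ V.conj().T

print(np.allclose(E, E @ E), np.allclose(E, E.conj().T))  # True True
```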
=== Comparison theory of projections ===
The basic theory of projections was worked out by Murray & von Neumann (1936). Two subspaces belonging to M are called (Murray–von Neumann) equivalent if there is a partial isometry mapping the first isomorphically onto the other that is an element of the von Neumann algebra (informally, if M "knows" that the subspaces are isomorphic). This induces a natural equivalence relation on projections by defining E to be equivalent to F if the corresponding subspaces are equivalent, or in other words if there is a partial isometry of H that maps the image of E isometrically to the image of F and is an element of the von Neumann algebra. Another way of stating this is that E is equivalent to F if E=uu* and F=u*u for some partial isometry u in M.
The equivalence relation ~ thus defined is additive in the following sense: Suppose E1 ~ F1 and E2 ~ F2. If E1 ⊥ E2 and F1 ⊥ F2, then E1 + E2 ~ F1 + F2. Additivity would not generally hold if one were to require unitary equivalence in the definition of ~, i.e. if we say E is equivalent to F if u*Eu = F for some unitary u. The Schröder–Bernstein theorems for operator algebras give a sufficient condition for Murray–von Neumann equivalence.
The subspaces belonging to M are partially ordered by inclusion, and this induces a partial order ≤ of projections. There is also a natural partial order on the set of equivalence classes of projections, induced by the partial order ≤ of projections. If M is a factor, ≤ is a total order on equivalence classes of projections, described in the section on traces below.
A projection (or subspace belonging to M) E is said to be a finite projection if there is no projection F < E (meaning F ≤ E and F ≠ E) that is equivalent to E. For example, all finite-dimensional projections (or subspaces) are finite (since isometries between Hilbert spaces leave the dimension fixed), but the identity operator on an infinite-dimensional Hilbert space is not finite in the von Neumann algebra of all bounded operators on it, since it is isometrically isomorphic to a proper subset of itself. However it is possible for infinite dimensional subspaces to be finite.
Orthogonal projections are noncommutative analogues of indicator functions in L∞(R). L∞(R) is the ||·||∞-closure of the subspace generated by the indicator functions. Similarly, a von Neumann algebra is generated by its projections; this is a consequence of the spectral theorem for self-adjoint operators.
The projections of a finite factor form a continuous geometry.
== Factors ==
A von Neumann algebra N whose center consists only of multiples of the identity operator is called a factor. As von Neumann (1949) showed, every von Neumann algebra on a separable Hilbert space is isomorphic to a direct integral of factors. This decomposition is essentially unique. Thus, the problem of classifying isomorphism classes of von Neumann algebras on separable Hilbert spaces can be reduced to that of classifying isomorphism classes of factors.
Murray & von Neumann (1936) showed that every factor has one of 3 types as described below. The type classification can be extended to von Neumann algebras that are not factors, and a von Neumann algebra is of type X if it can be decomposed as a direct integral of type X factors; for example, every commutative von Neumann algebra has type I1. Every von Neumann algebra can be written uniquely as a sum of von Neumann algebras of types I, II, and III.
There are several other ways to divide factors into classes that are sometimes used:
A factor is called discrete (or occasionally tame) if it has type I, and continuous (or occasionally wild) if it has type II or III.
A factor is called semifinite if it has type I or II, and purely infinite if it has type III.
A factor is called finite if the projection 1 is finite and properly infinite otherwise. Factors of types I and II may be either finite or properly infinite, but factors of type III are always properly infinite.
=== Type I factors ===
A factor is said to be of type I if there is a minimal projection E ≠ 0, i.e. a projection E such that there is no other projection F with 0 < F < E. Any factor of type I is isomorphic to the von Neumann algebra of all bounded operators on some Hilbert space; since there is one Hilbert space for every cardinal number, isomorphism classes of factors of type I correspond exactly to the cardinal numbers. Since many authors consider von Neumann algebras only on separable Hilbert spaces, it is customary to call the bounded operators on a Hilbert space of finite dimension n a factor of type In, and the bounded operators on a separable infinite-dimensional Hilbert space, a factor of type I∞.
=== Type II factors ===
A factor is said to be of type II if there are no minimal projections but there are non-zero finite projections. This implies that every projection E can be "halved" in the sense that there are two projections F and G that are Murray–von Neumann equivalent and satisfy E = F + G. If the identity operator in a type II factor is finite, the factor is said to be of type II1; otherwise, it is said to be of type II∞. The best understood factors of type II are the hyperfinite type II1 factor and the hyperfinite type II∞ factor, found by Murray & von Neumann (1936). These are the unique hyperfinite factors of types II1 and II∞; there are an uncountable number of other factors of these types that are the subject of intensive study. Murray & von Neumann (1937) proved the fundamental result that a factor of type II1 has a unique finite tracial state, and the set of traces of projections is [0,1].
A factor of type II∞ has a semifinite trace, unique up to rescaling, and the set of traces of projections is [0,∞]. The set of real numbers λ such that there is an automorphism rescaling the trace by a factor of λ is called the fundamental group of the type II∞ factor.
The tensor product of a factor of type II1 and an infinite type I factor has type II∞, and conversely any factor of type II∞ can be constructed like this. The fundamental group of a type II1 factor is defined to be the fundamental group of its tensor product with the infinite (separable) factor of type I. For many years it was an open problem to find a type II factor whose fundamental group was not the group of positive reals, but Connes then showed that the von Neumann group algebra of a countable discrete group with Kazhdan's property (T) (the trivial representation is isolated in the dual space), such as SL(3,Z), has a countable fundamental group. Subsequently, Sorin Popa showed that the fundamental group can be trivial for certain groups, including the semidirect product of Z2 by SL(2,Z).
An example of a type II1 factor is the von Neumann group algebra of a countable infinite discrete group such that every non-trivial conjugacy class is infinite.
McDuff (1969) found an uncountable family of such groups with non-isomorphic von Neumann group algebras, thus showing the existence of uncountably many different separable type II1 factors.
=== Type III factors ===
Lastly, type III factors are factors that do not contain any nonzero finite projections at all. In their first paper Murray & von Neumann (1936) were unable to decide whether or not they existed; the first examples were later found by von Neumann (1940). Since the identity operator is always infinite in those factors, they were sometimes called type III∞ in the past, but recently that notation has been superseded by the notation IIIλ, where λ is a real number in the interval [0,1]. More precisely, if the Connes spectrum (of its modular group) is 1 then the factor is of type III0, if the Connes spectrum is all integral powers of λ for 0 < λ < 1, then the type is IIIλ, and if the Connes spectrum is all positive reals then the type is III1. (The Connes spectrum is a closed subgroup of the positive reals, so these are the only possibilities.) The only trace on type III factors takes value ∞ on all non-zero positive elements, and any two non-zero projections are equivalent. At one time type III factors were considered to be intractable objects, but Tomita–Takesaki theory has led to a good structure theory. In particular, any type III factor can be written in a canonical way as the crossed product of a type II∞ factor and the real numbers.
== The predual ==
Any von Neumann algebra M has a predual M∗, which is the Banach space of all ultraweakly continuous linear functionals on M. As the name suggests, M is (as a Banach space) the dual of its predual. The predual is unique in the sense that any other Banach space whose dual is M is canonically isomorphic to M∗. Sakai (1971) showed that the existence of a predual characterizes von Neumann algebras among C* algebras.
The definition of the predual given above seems to depend on the choice of Hilbert space that M acts on, as this determines the ultraweak topology. However the predual can also be defined without using the Hilbert space that M acts on, by defining it to be the space generated by all positive normal linear functionals on M. (Here "normal" means that it preserves suprema when applied to increasing nets of self adjoint operators; or equivalently to increasing sequences of projections.)
The predual M∗ is a closed subspace of the dual M* (which consists of all norm-continuous linear functionals on M) but is generally smaller. The proof that M∗ is (usually) not the same as M* is nonconstructive and uses the axiom of choice in an essential way; it is very hard to exhibit explicit elements of M* that are not in M∗. For example, exotic positive linear forms on the von Neumann algebra l∞(Z) are given by free ultrafilters; they correspond to exotic *-homomorphisms into C and describe the Stone–Čech compactification of Z.
Examples:
The predual of the von Neumann algebra L∞(R) of essentially bounded functions on R is the Banach space L1(R) of integrable functions. The dual of L∞(R) is strictly larger than L1(R). For example, a functional on L∞(R) that extends the Dirac measure δ0 on the closed subspace of bounded continuous functions C0b(R) cannot be represented as a function in L1(R).
The predual of the von Neumann algebra B(H) of bounded operators on a Hilbert space H is the Banach space of all trace class operators with the trace norm ||A||= Tr(|A|). The Banach space of trace class operators is itself the dual of the C*-algebra of compact operators (which is not a von Neumann algebra).
== Weights, states, and traces ==
Weights and their special cases states and traces are discussed in detail in (Takesaki 1979).
A weight ω on a von Neumann algebra is a linear map from the set of positive elements (those of the form a*a) to [0,∞].
A positive linear functional is a weight with ω(1) finite (or rather the extension of ω to the whole algebra by linearity).
A state is a weight with ω(1) = 1.
A trace is a weight with ω(aa*) = ω(a*a) for all a.
A tracial state is a trace with ω(1) = 1.
Any factor has a trace such that the trace of a non-zero projection is non-zero and the trace of a projection is infinite if and only if the projection is infinite. Such a trace is unique up to rescaling. For factors that are separable or finite, two projections are equivalent if and only if they have the same trace. The type of a factor can be read off from the possible values of this trace over the projections of the factor, as follows:
Type In: 0, x, 2x, ..., nx for some positive x (usually normalized to be 1/n or 1); a numerical check follows this list.
Type I∞: 0, x, 2x, ..., ∞ for some positive x (usually normalized to be 1).
Type II1: [0,x] for some positive x (usually normalized to be 1).
Type II∞: [0,∞].
Type III: {0,∞}.
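As an illustrative check of the type In row (taking n = 3 as a sample value), in the factor M3(C) the trace normalized so that tr(1) = 1 takes exactly the values 0, 1/3, 2/3, 1 on projections:

```python
import numpy as np

# Normalized trace on M_3(C): projections have trace k/3, k = 0..3.
tr = lambda p: np.trace(p).real / 3
for k in range(4):
    p = np.diag([1.0] * k + [0.0] * (3 - k))   # a rank-k projection
    print(k, tr(p))   # 0 0.0 | 1 0.333... | 2 0.666... | 3 1.0
```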
If a von Neumann algebra acts on a Hilbert space containing a norm 1 vector v, then the functional a → (av,v) is a normal state. This construction can be reversed to give an action on a Hilbert space from a normal state: this is the GNS construction for normal states.
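A quick check (illustrative) that the vector functional a → (av, v) is indeed a state, i.e. positive and normalized:

```python
import numpy as np

# The vector state omega(a) = <v, a v> for a unit vector v.
rng = np.random.default_rng(3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
v /= np.linalg.norm(v)
omega = lambda a: np.vdot(v, a @ v)

a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
print(np.isclose(omega(np.eye(3)), 1.0))      # omega(1) = 1
print(omega(a.conj().T @ a).real >= -1e-12)   # omega(a*a) = ||a v||^2 >= 0
```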
== Modules over a factor ==
Given an abstract separable factor, one can ask for a classification of its modules, meaning the separable Hilbert spaces that it acts on. The answer is given as follows: every such module H can be given an M-dimension dimM(H) (not its dimension as a complex vector space) such that modules are isomorphic if and only if they have the same M-dimension. The M-dimension is additive, and a module is isomorphic to a subspace of another module if and only if it has smaller or equal M-dimension.
A module is called standard if it has a cyclic separating vector. Each factor has a standard representation, which is unique up to isomorphism. The standard representation has an antilinear involution J such that JMJ = M′. For finite factors the standard module is given by the GNS construction applied to the unique normal tracial state and the M-dimension is normalized so that the standard module has M-dimension 1, while for infinite factors the standard module is the module with M-dimension equal to ∞.
The possible M-dimensions of modules are given as follows:
Type In (n finite): The M-dimension can be any of 0/n, 1/n, 2/n, 3/n, ..., ∞. The standard module has M-dimension 1 (and complex dimension n2.)
Type I∞ The M-dimension can be any of 0, 1, 2, 3, ..., ∞. The standard representation of B(H) is H⊗H; its M-dimension is ∞.
Type II1: The M-dimension can be anything in [0, ∞]. It is normalized so that the standard module has M-dimension 1. The M-dimension is also called the coupling constant of the module H.
Type II∞: The M-dimension can be anything in [0, ∞]. There is in general no canonical way to normalize it; the factor may have outer automorphisms multiplying the M-dimension by constants. The standard representation is the one with M-dimension ∞.
Type III: The M-dimension can be 0 or ∞. Any two non-zero modules are isomorphic, and all non-zero modules are standard.
== Amenable von Neumann algebras ==
Connes (1976) and others proved that the following conditions on a von Neumann algebra M on a separable Hilbert space H are all equivalent:
M is hyperfinite or AFD or approximately finite dimensional or approximately finite: this means the algebra contains an ascending sequence of finite dimensional subalgebras with dense union. (Warning: some authors use "hyperfinite" to mean "AFD and finite".)
M is amenable: this means that the derivations of M with values in a normal dual Banach bimodule are all inner.
M has Schwartz's property P: for any bounded operator T on H the weak operator closed convex hull of the elements uTu* contains an element commuting with M.
M is semidiscrete: this means the identity map from M to M is a weak pointwise limit of completely positive maps of finite rank.
M has property E or the Hakeda–Tomiyama extension property: this means that there is a projection of norm 1 from bounded operators on H to M′.
M is injective: any completely positive linear map from any self adjoint closed subspace containing 1 of any unital C*-algebra A to M can be extended to a completely positive map from A to M.
There is no generally accepted term for the class of algebras above; Connes has suggested that amenable should be the standard term.
The amenable factors have been classified: there is a unique one of each of the types In, I∞, II1, II∞, IIIλ, for 0 < λ ≤ 1, and the ones of type III0 correspond to certain ergodic flows. (For type III0 calling this a classification is a little misleading, as it is known that there is no easy way to classify the corresponding ergodic flows.) The ones of type I and II1 were classified by Murray & von Neumann (1943), and the remaining ones were classified by Connes (1976), except for the type III1 case which was completed by Haagerup.
All amenable factors can be constructed using the group-measure space construction of Murray and von Neumann for a single ergodic transformation. In fact they are precisely the factors arising as crossed products by free ergodic actions of Z or Z/nZ on abelian von Neumann algebras L∞(X). Type I factors occur when the measure space X is atomic and the action transitive. When X is diffuse or non-atomic, it is equivalent to [0,1] as a measure space. Type II factors occur when X admits an equivalent finite (II1) or infinite (II∞) measure, invariant under an action of Z. Type III factors occur in the remaining cases where there is no invariant measure, but only an invariant measure class: these factors are called Krieger factors.
== Tensor products of von Neumann algebras ==
The Hilbert space tensor product of two Hilbert spaces is the completion of their algebraic tensor product. One can define a tensor product of von Neumann algebras (a completion of the algebraic tensor product of the algebras considered as rings), which is again a von Neumann algebra, and act on the tensor product of the corresponding Hilbert spaces. The tensor product of two finite algebras is finite, and the tensor product of an infinite algebra and a non-zero algebra is infinite. The type of the tensor product of two von Neumann algebras (I, II, or III) is the maximum of their types. The commutation theorem for tensor products states that
$(M \otimes N)^{\prime} = M^{\prime} \otimes N^{\prime},$
where M′ denotes the commutant of M.
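The easy inclusion $M^{\prime} \otimes N^{\prime} \subseteq (M \otimes N)^{\prime}$ can be seen in a toy case (illustrative): with $M = M_2(\mathbb{C}) \otimes 1$ acting on $\mathbb{C}^2 \otimes \mathbb{C}^3$, its commutant is $1 \otimes M_3(\mathbb{C})$, and elements of the two commute.

```python
import numpy as np

# a (x) 1 and 1 (x) b always commute: the easy half of the theorem.
rng = np.random.default_rng(2)
a = rng.normal(size=(2, 2))
b = rng.normal(size=(3, 3))
m = np.kron(a, np.eye(3))    # element of M_2(C) (x) 1
n_ = np.kron(np.eye(2), b)   # element of 1 (x) M_3(C) = (M_2(C) (x) 1)'

print(np.allclose(m @ n_, n_ @ m))  # True
```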
The tensor product of an infinite number of von Neumann algebras, if done naively, is usually a ridiculously large non-separable algebra. Instead von Neumann (1938) showed that one should choose a state on each of the von Neumann algebras, use this to define a state on the algebraic tensor product, which can be used to produce a Hilbert space and a (reasonably small) von Neumann algebra. Araki & Woods (1968) studied the case where all the factors are finite matrix algebras; these factors are called Araki–Woods factors or ITPFI factors (ITPFI stands for "infinite tensor product of finite type I factors"). The type of the infinite tensor product can vary dramatically as the states are changed; for example, the infinite tensor product of an infinite number of type I2 factors can have any type depending on the choice of states. In particular Powers (1967) found an uncountable family of non-isomorphic hyperfinite type IIIλ factors for 0 < λ < 1, called Powers factors, by taking an infinite tensor product of type I2 factors, each with the state given by:
$x \mapsto \operatorname{Tr}\begin{pmatrix} \frac{1}{\lambda+1} & 0 \\ 0 & \frac{\lambda}{\lambda+1} \end{pmatrix} x.$
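A minimal sketch (assuming a sample value λ = 1/2; illustrative only) of this state on a single type I2 tensor factor:

```python
import numpy as np

# Density matrix of the Powers state on one M_2(C) factor, lambda = 1/2.
lam = 0.5
rho = np.diag([1 / (lam + 1), lam / (lam + 1)])
x = np.array([[1.0, 2.0], [3.0, 4.0]])

print(np.trace(rho @ x))       # the state applied to x: 2.0
print(rho[1, 1] / rho[0, 0])   # eigenvalue ratio = lambda = 0.5
```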
All hyperfinite von Neumann algebras not of type III0 are isomorphic to Araki–Woods factors, but there are uncountably many of type III0 that are not.
== Bimodules and subfactors ==
A bimodule (or correspondence) is a Hilbert space H with module actions of two commuting von Neumann algebras. Bimodules have a much richer structure than that of modules. Any bimodule over two factors always gives a subfactor since one of the factors is always contained in the commutant of the other. There is also a subtle relative tensor product operation due to Connes on bimodules. The theory of subfactors, initiated by Vaughan Jones, reconciles these two seemingly different points of view.
Bimodules are also important for the von Neumann group algebra M of a discrete group Γ. Indeed, if V is any unitary representation of Γ, then, regarding Γ as the diagonal subgroup of Γ × Γ, the corresponding induced representation on l2 (Γ, V) is naturally a bimodule for two commuting copies of M. Important representation theoretic properties of Γ can be formulated entirely in terms of bimodules and therefore make sense for the von Neumann algebra itself. For example, Connes and Jones gave a definition of an analogue of Kazhdan's property (T) for von Neumann algebras in this way.
== Non-amenable factors ==
Von Neumann algebras of type I are always amenable, but for the other types there are an uncountable number of different non-amenable factors, which seem very hard to classify, or even distinguish from each other. Nevertheless, Voiculescu has shown that the class of non-amenable factors coming from the group-measure space construction is disjoint from the class coming from group von Neumann algebras of free groups. Later Narutaka Ozawa proved that group von Neumann algebras of hyperbolic groups yield prime type II1 factors, i.e. ones that cannot be factored as tensor products of type II1 factors, a result first proved by Liming Ge for free group factors using Voiculescu's free entropy. Popa's work on fundamental groups of non-amenable factors represents another significant advance. The theory of factors "beyond the hyperfinite" is rapidly expanding at present, with many new and surprising results; it has close links with rigidity phenomena in geometric group theory and ergodic theory.
== Examples ==
The essentially bounded functions on a σ-finite measure space form a commutative (type I1) von Neumann algebra acting on the L2 functions. For certain non-σ-finite measure spaces, usually considered pathological, L∞(X) is not a von Neumann algebra; for example, the σ-algebra of measurable sets might be the countable-cocountable algebra on an uncountable set. A fundamental approximation result in this setting is the Kaplansky density theorem.
The bounded operators on any Hilbert space form a von Neumann algebra, indeed a factor, of type I.
If we have any unitary representation of a group G on a Hilbert space H then the bounded operators commuting with G form a von Neumann algebra G′, whose projections correspond exactly to the closed subspaces of H invariant under G. Equivalent subrepresentations correspond to equivalent projections in G′. The double commutant G′′ of G is also a von Neumann algebra.
The von Neumann group algebra of a discrete group G is the algebra of all bounded operators on H = l2(G) commuting with the action of G on H through right multiplication. One can show that this is the von Neumann algebra generated by the operators corresponding to multiplication from the left with an element g ∈ G. It is a factor (of type II1) if every non-trivial conjugacy class of G is infinite (for example, a non-abelian free group), and is the hyperfinite factor of type II1 if in addition G is a union of finite subgroups (for example, the group of all permutations of the integers fixing all but a finite number of elements).
The tensor product of two von Neumann algebras, or of a countable number with states, is a von Neumann algebra as described in the section above.
The crossed product of a von Neumann algebra by a discrete (or more generally locally compact) group can be defined, and is a von Neumann algebra. Special cases are the group-measure space construction of Murray and von Neumann and Krieger factors.
The von Neumann algebras of a measurable equivalence relation and a measurable groupoid can be defined. These examples generalise von Neumann group algebras and the group-measure space construction.
== Applications ==
Von Neumann algebras have found applications in diverse areas of mathematics like knot theory, statistical mechanics, quantum field theory, local quantum physics, free probability, noncommutative geometry, representation theory, differential geometry, and dynamical systems.
For instance, C*-algebra theory provides an alternative axiomatization of probability theory. In this case the method goes by the name of the Gelfand–Naimark–Segal construction. This is analogous to the two approaches to measure and integration, where one has the choice to construct measures of sets first and define integrals later, or construct integrals first and define set measures as integrals of characteristic functions.
== See also ==
AW*-algebra – Algebraic generalization of a W*-algebra
Central carrier
Tomita–Takesaki theory – Mathematical method in functional analysis
== References ==
Araki, H.; Woods, E. J. (1968), "A classification of factors", Publ. Res. Inst. Math. Sci. Ser. A, 4 (1): 51–130, doi:10.2977/prims/1195195263, MR 0244773
Blackadar, B. (2005), Operator algebras, Springer, ISBN 3-540-28486-9, corrected manuscript (PDF), 2013
Connes, A. (1976), "Classification of Injective Factors", Annals of Mathematics, Second Series, 104 (1): 73–115, doi:10.2307/1971057, JSTOR 1971057
Connes, A. (1994), Non-commutative geometry, Academic Press, ISBN 0-12-185860-X.
Dixmier, J. (1981), Von Neumann algebras, North-Holland, ISBN 0-444-86308-7 (A translation of Dixmier, J. (1957), Les algèbres d'opérateurs dans l'espace hilbertien: algèbres de von Neumann, Gauthier-Villars, the first book about von Neumann algebras.)
Jones, V.F.R. (2003), von Neumann algebras (PDF); incomplete notes from a course.
Kostecki, R.P. (2013), W*-algebras and noncommutative integration, arXiv:1307.4818, Bibcode:2013arXiv1307.4818P.
McDuff, Dusa (1969), "Uncountably many II1 factors", Annals of Mathematics, Second Series, 90 (2): 372–377, doi:10.2307/1970730, JSTOR 1970730
Murray, F. J. (2006), "The rings of operators papers", The legacy of John von Neumann (Hempstead, NY, 1988), Proc. Sympos. Pure Math., vol. 50, Providence, RI.: Amer. Math. Soc., pp. 57–60, ISBN 0-8218-4219-6 A historical account of the discovery of von Neumann algebras.
Murray, F.J.; von Neumann, J. (1936), "On rings of operators", Annals of Mathematics, Second Series, 37 (1): 116–229, doi:10.2307/1968693, JSTOR 1968693. This paper gives their basic properties and the division into types I, II, and III, and in particular finds factors not of type I.
Murray, F.J.; von Neumann, J. (1937), "On rings of operators II", Trans. Amer. Math. Soc., 41 (2), American Mathematical Society: 208–248, doi:10.2307/1989620, JSTOR 1989620. This is a continuation of the previous paper, that studies properties of the trace of a factor.
Murray, F.J.; von Neumann, J. (1943), "On rings of operators IV", Annals of Mathematics, Second Series, 44 (4): 716–808, doi:10.2307/1969107, JSTOR 1969107. This studies when factors are isomorphic, and in particular shows that all approximately finite factors of type II1 are isomorphic.
Powers, Robert T. (1967), "Representations of Uniformly Hyperfinite Algebras and Their Associated von Neumann Rings", Annals of Mathematics, Second Series, 86 (1): 138–171, doi:10.2307/1970364, JSTOR 1970364
Sakai, S. (1971), C*-algebras and W*-algebras, Springer, ISBN 3-540-63633-1
Schwartz, Jacob (1967), W-* Algebras, Gordon & Breach Publishing, ISBN 0-677-00670-5
Shtern, A.I. (2001) [1994], "von Neumann algebra", Encyclopedia of Mathematics, EMS Press
Takesaki, M. (1979), Theory of Operator Algebras I, II, III, Springer, ISBN 3-540-42248-X
von Neumann, J. (1930), "Zur Algebra der Funktionaloperationen und Theorie der normalen Operatoren", Math. Ann., 102 (1): 370–427, Bibcode:1930MatAn.102..685E, doi:10.1007/BF01782352, S2CID 121141866. The original paper on von Neumann algebras.
von Neumann, J. (1936), "On a Certain Topology for Rings of Operators", Annals of Mathematics, Second Series, 37 (1): 111–115, doi:10.2307/1968692, JSTOR 1968692. This defines the ultrastrong topology.
von Neumann, J. (1938), "On infinite direct products", Compos. Math., 6: 1–77. This discusses infinite tensor products of Hilbert spaces and the algebras acting on them.
von Neumann, J. (1940), "On rings of operators III", Annals of Mathematics, Second Series, 41 (1): 94–161, doi:10.2307/1968823, JSTOR 1968823. This shows the existence of factors of type III.
von Neumann, J. (1943), "On Some Algebraical Properties of Operator Rings", Annals of Mathematics, Second Series, 44 (4): 709–715, doi:10.2307/1969106, JSTOR 1969106. This shows that some apparently topological properties in von Neumann algebras can be defined purely algebraically.
von Neumann, J. (1949), "On Rings of Operators. Reduction Theory", Annals of Mathematics, Second Series, 50 (2): 401–485, doi:10.2307/1969463, JSTOR 1969463. This discusses how to write a von Neumann algebra as a sum or integral of factors.
von Neumann, John (1961), Taub, A.H. (ed.), Collected Works, Volume III: Rings of Operators, NY: Pergamon Press. Reprints von Neumann's papers on von Neumann algebras.
Wassermann, A. J. (1991), Operators on Hilbert space