In numerical analysis, an incomplete Cholesky factorization of a symmetric positive definite matrix is a sparse approximation of the Cholesky factorization. An incomplete Cholesky factorization is often used as a preconditioner for algorithms like the conjugate gradient method. The Cholesky factorization of a positive definite matrix A is A = LL* where L is a lower triangular matrix.
https://en.wikipedia.org/wiki/Incomplete_Cholesky_factorization
An incomplete Cholesky factorization is given by a sparse lower triangular matrix K that is in some sense close to L. The corresponding preconditioner is KK*. One popular way to find such a matrix K is to use the algorithm for finding the exact Cholesky decomposition in which K has the same sparsity pattern as A (any entry of K is set to zero if the corresponding entry in A is also zero). This gives an incomplete Cholesky factorization which is as sparse as the matrix A.
https://en.wikipedia.org/wiki/Incomplete_Cholesky_factorization
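The zero-fill variant described above (often called IC(0)) can be sketched in a few lines. The sketch below is illustrative, not a tuned implementation: it runs the exact Cholesky recurrence on a dense representation but only fills entries where A itself is nonzero. For a tridiagonal SPD matrix no fill-in occurs, so the incomplete factor coincides with the exact Cholesky factor:

```python
import math

def icholesky0(A):
    """Incomplete Cholesky with zero fill-in: K keeps the sparsity of A's lower triangle."""
    n = len(A)
    K = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # Diagonal entry: same recurrence as exact Cholesky.
        K[k][k] = math.sqrt(A[k][k] - sum(K[k][j] ** 2 for j in range(k)))
        for i in range(k + 1, n):
            if A[i][k] != 0:  # only fill where A itself is nonzero
                K[i][k] = (A[i][k] - sum(K[i][j] * K[k][j] for j in range(k))) / K[k][k]
    return K

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
K = icholesky0(A)
# For this tridiagonal matrix the factorization is exact: K K* reproduces A.
residual = max(abs(A[i][j] - sum(K[i][m] * K[j][m] for m in range(3)))
               for i in range(3) for j in range(3))
```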
In numerical analysis, an iterative method is called locally convergent if the successive approximations produced by the method are guaranteed to converge to a solution when the initial approximation is already close enough to the solution. Iterative methods for nonlinear equations and their systems, such as Newton's method, are usually only locally convergent. An iterative method that converges for an arbitrary initial approximation is called globally convergent. Iterative methods for systems of linear equations are usually globally convergent.
https://en.wikipedia.org/wiki/Local_convergence
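As an illustration of the local convergence described above, Newton's method applied to f(x) = x² − 2 converges rapidly when started near the root √2; the function and starting point below are arbitrary illustrative choices:

```python
def newton(f, df, x0, steps=20):
    """Plain Newton iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# Start close to the root sqrt(2) ~ 1.41421356; convergence is quadratic.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
```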
In numerical analysis, catastrophic cancellation is the phenomenon that subtracting good approximations to two nearby numbers may yield a very bad approximation to the difference of the original numbers. For example, if there are two studs, one L₁ = 253.5 cm long and the other L₂ = 252.5 cm long, and they are measured with a ruler that is good only to the centimeter, then the approximations could come out to be L̃₁ = 254 cm and L̃₂ = 252 cm. These may be good approximations, in relative error, to the true lengths: the approximations are in error by less than 2% of the true lengths, |L₁ − L̃₁|/|L₁| < 2%. However, if the approximate lengths are subtracted, the difference will be L̃₁ − L̃₂ = 254 cm − 252 cm = 2 cm, even though the true difference between the lengths is L₁ − L₂ = 253.5 cm − 252.5 cm = 1 cm.
https://en.wikipedia.org/wiki/Catastrophic_cancellation
The difference of the approximations, 2 cm, is in error by 100% of the magnitude of the difference of the true values, 1 cm. Catastrophic cancellation is not affected by how large the inputs are; it applies just as much to large and small inputs. It depends only on how large the difference is, and on the error of the inputs.
https://en.wikipedia.org/wiki/Catastrophic_cancellation
Exactly the same error would arise by subtracting 52 cm from 54 cm as approximations to 52.5 cm and 53.5 cm, or by subtracting 2.00052 km from 2.00054 km as approximations to 2.000525 km and 2.000535 km. Catastrophic cancellation may happen even if the difference is computed exactly, as in the example above; it is not a property of any particular kind of arithmetic like floating-point arithmetic, but rather is inherent to subtraction when the inputs are approximations themselves. Indeed, in floating-point arithmetic, when the inputs are close enough, the floating-point difference is computed exactly, by the Sterbenz lemma; no rounding error is introduced by the floating-point subtraction operation.
https://en.wikipedia.org/wiki/Catastrophic_cancellation
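The same effect shows up in floating-point evaluation of expressions such as 1 − cos(x) for small x. The sketch below contrasts the cancellation-prone form with the algebraically equivalent, cancellation-free form 2 sin²(x/2); the choice of x is illustrative:

```python
import math

x = 1e-8
naive = 1.0 - math.cos(x)              # cos(x) rounds to exactly 1.0: total cancellation
stable = 2.0 * math.sin(x / 2.0) ** 2  # mathematically equal, but no cancellation
# The true value is approximately x**2 / 2 = 5e-17; the naive form loses it entirely.
```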
In numerical analysis, complicated three-dimensional shapes are commonly broken down into, or approximated by, a polygonal mesh of irregular tetrahedra in the process of setting up the equations for finite element analysis, especially in the numerical solution of partial differential equations. These methods have wide practical application in computational fluid dynamics, aerodynamics, electromagnetic fields, civil engineering, chemical engineering, naval architecture and engineering, and related fields.
https://en.wikipedia.org/wiki/Tetrahedral_angle
In numerical analysis, computational physics, and simulation, discretization error is the error resulting from the fact that a function of a continuous variable is represented in the computer by a finite number of evaluations, for example, on a lattice. Discretization error can usually be reduced by using a more finely spaced lattice, with an increased computational cost.
https://en.wikipedia.org/wiki/Discretization_error
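A minimal illustration of the trade-off above: approximating the derivative of sin at x = 1 by a forward difference on successively finer spacings h reduces the discretization error, at the cost of more function evaluations in a real grid-based computation. The function and spacings are illustrative choices:

```python
import math

def forward_diff(f, x, h):
    """Forward-difference approximation of f'(x); discretization error is O(h)."""
    return (f(x + h) - f(x)) / h

true_value = math.cos(1.0)  # exact derivative of sin at x = 1
errors = [abs(forward_diff(math.sin, 1.0, h) - true_value) for h in (0.1, 0.05, 0.025)]
```

Halving h roughly halves the error, consistent with the first-order discretization error of the forward difference.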
In numerical analysis, continuous wavelets are functions used by the continuous wavelet transform. These functions are defined as analytical expressions, as functions either of time or of frequency. Most of the continuous wavelets are used for both wavelet decomposition and composition transforms; that is, they are the continuous counterpart of orthogonal wavelets. The following continuous wavelets have been invented for various applications: the Poisson wavelet, the Morlet wavelet, the modified Morlet wavelet, the Mexican hat wavelet, the complex Mexican hat wavelet, the Shannon wavelet, the Meyer wavelet, the difference of Gaussians, the Hermitian wavelet, the beta wavelet, the causal wavelet, μ wavelets, the Cauchy wavelet, and the Addison wavelet.
https://en.wikipedia.org/wiki/Continuous_wavelet
In numerical analysis, different decompositions are used to implement efficient matrix algorithms. For instance, when solving a system of linear equations Ax = b, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The systems L(Ux) = b and Ux = L⁻¹b require fewer additions and multiplications to solve, compared with the original system Ax = b, though one might require significantly more digits in inexact arithmetic such as floating point. Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upper triangular matrix. The system Q(Rx) = b is solved by Rx = Qᵀb = c, and the system Rx = c is solved by back substitution. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition is numerically stable.
https://en.wikipedia.org/wiki/Matrix_decomposition
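The LU route described above can be sketched directly: factorize A with the Doolittle algorithm (no pivoting, so this is adequate only for matrices that need no row exchanges), then solve Ly = b by forward substitution and Ux = y by back substitution. The matrix and right-hand side below are illustrative:

```python
def lu_solve(A, b):
    """Solve A x = b via LU decomposition (Doolittle, no pivoting; illustrative only)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # Forward substitution: L y = b.
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # Back substitution: U x = y.
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

x = lu_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])  # exact solution: x = [0.8, 1.4]
```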
In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time interval (if applicable) are discretized, or broken into a finite number of steps, and the value of the solution at these discrete points is approximated by solving algebraic equations containing finite differences and values from nearby points. Finite difference methods convert ordinary differential equations (ODE) or partial differential equations (PDE), which may be nonlinear, into a system of linear equations that can be solved by matrix algebra techniques. Modern computers can perform these linear algebra computations efficiently which, along with their relative ease of implementation, has led to the widespread use of FDM in modern numerical analysis. Today, FDM are one of the most common approaches to the numerical solution of PDE, along with finite element methods.
https://en.wikipedia.org/wiki/Finite_Difference_Method
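As a concrete sketch of the conversion to a linear system, the boundary value problem u″ = −π² sin(πx) with u(0) = u(1) = 0 (exact solution sin(πx)) can be discretized with central differences and solved as a tridiagonal system via the Thomas algorithm. The problem, grid size, and solver are illustrative choices, not part of the excerpt:

```python
import math

n = 50                       # number of interior grid points
h = 1.0 / (n + 1)
xs = [(i + 1) * h for i in range(n)]
# Central difference: (u[i-1] - 2 u[i] + u[i+1]) / h^2 = f(x_i), zero boundary values.
f = [-math.pi ** 2 * math.sin(math.pi * x) for x in xs]
a = [1.0] * n    # sub-diagonal
b = [-2.0] * n   # diagonal
c = [1.0] * n    # super-diagonal
d = [h * h * fi for fi in f]
# Thomas algorithm: Gaussian elimination specialized to tridiagonal systems.
for i in range(1, n):
    m = a[i] / b[i - 1]
    b[i] -= m * c[i - 1]
    d[i] -= m * d[i - 1]
u = [0.0] * n
u[-1] = d[-1] / b[-1]
for i in reversed(range(n - 1)):
    u[i] = (d[i] - c[i] * u[i + 1]) / b[i]

max_error = max(abs(u[i] - math.sin(math.pi * xs[i])) for i in range(n))
```

The error shrinks like h², the order of accuracy of the central difference.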
In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. More specifically, given a function f defined on the real numbers with real values and given a point x₀ in the domain of f, the fixed-point iteration is xₙ₊₁ = f(xₙ), n = 0, 1, 2, …, which gives rise to the sequence x₀, x₁, x₂, … of iterated function applications x₀, f(x₀), f(f(x₀)), …, which is hoped to converge to a point x_fix. If f is continuous, then one can prove that the obtained x_fix is a fixed point of f, i.e., f(x_fix) = x_fix. More generally, the function f can be defined on any metric space with values in that same space.
https://en.wikipedia.org/wiki/Fixed_point_iteration
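A classic sketch of the iteration above: repeatedly applying f(x) = cos(x) from x₀ = 1 converges to the unique real fixed point of cos (about 0.739); the starting point and iteration count are illustrative:

```python
import math

x = 1.0
for _ in range(100):
    x = math.cos(x)  # x_{n+1} = f(x_n)
# At the limit, x satisfies cos(x) = x, so x is a fixed point of cos.
```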
In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. Specifically, given a function f with the same domain and codomain and a point x₀ in the domain of f, the fixed-point iteration is xₙ₊₁ = f(xₙ), n = 0, 1, 2, …, which gives rise to the sequence x₀, x₁, x₂, … of iterated function applications x₀, f(x₀), f(f(x₀)), …, which is hoped to converge to a point x. If f is continuous, then one can prove that the obtained x is a fixed point of f. Points that come back to the same value after a finite number of iterations of the function are called periodic points. A fixed point is a periodic point with period equal to one.
https://en.wikipedia.org/wiki/Unstable_fixed_point
In numerical analysis, given a square grid in one or two dimensions, the five-point stencil of a point in the grid is a stencil made up of the point itself together with its four "neighbors". It is used to write finite difference approximations to derivatives at grid points. It is an example of numerical differentiation.
https://en.wikipedia.org/wiki/Five-point_stencil
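In one dimension, the five-point stencil yields the familiar fourth-order approximation to the first derivative, f′(x) ≈ [−f(x+2h) + 8f(x+h) − 8f(x−h) + f(x−2h)] / (12h). A quick sketch, with sin as an illustrative test function:

```python
import math

def five_point_derivative(f, x, h):
    """Fourth-order accurate first derivative from the one-dimensional five-point stencil."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

approx = five_point_derivative(math.sin, 1.0, 1e-3)
error = abs(approx - math.cos(1.0))  # truncation error is O(h^4)
```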
In numerical analysis, given a square grid in two dimensions, the nine-point stencil of a point in the grid is a stencil made up of the point itself together with its eight "neighbors". It is used to write finite difference approximations to derivatives at grid points. It is an example of numerical differentiation. This stencil is often used to approximate the Laplacian of a function of two variables.
https://en.wikipedia.org/wiki/Nine-point_stencil
In numerical analysis, hill climbing is a mathematical optimization technique which belongs to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found. For example, hill climbing can be applied to the travelling salesman problem.
https://en.wikipedia.org/wiki/Random-restart_hill_climbing
It is easy to find an initial solution that visits all the cities but will likely be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much shorter route is likely to be obtained.
https://en.wikipedia.org/wiki/Random-restart_hill_climbing
Hill climbing finds optimal solutions for convex problems; for other problems it will find only local optima (solutions that cannot be improved upon by any neighboring configurations), which are not necessarily the best possible solution (the global optimum) out of all possible solutions (the search space). Examples of algorithms that solve convex problems by hill-climbing include the simplex algorithm for linear programming and binary search. To attempt to avoid getting stuck in local optima, one could use restarts (i.e. repeated local search), or more complex schemes based on iterations (like iterated local search), or on memory (like reactive search optimization and tabu search), or on memory-less stochastic modifications (like simulated annealing).
https://en.wikipedia.org/wiki/Random-restart_hill_climbing
The relative simplicity of the algorithm makes it a popular first choice amongst optimizing algorithms. It is used widely in artificial intelligence, for reaching a goal state from a starting node. Different choices for next nodes and starting nodes are used in related algorithms.
https://en.wikipedia.org/wiki/Random-restart_hill_climbing
Although more advanced algorithms such as simulated annealing or tabu search may give better results, in some situations hill climbing works just as well. Hill climbing can often produce a better result than other algorithms when the amount of time available to perform a search is limited, such as with real-time systems, so long as a small number of increments typically converges on a good solution (the optimal solution or a close approximation). At the other extreme, bubble sort can be viewed as a hill climbing algorithm (every adjacent element exchange decreases the number of disordered element pairs), yet this approach is far from efficient for even modest N, as the number of exchanges required grows quadratically. Hill climbing is an anytime algorithm: it can return a valid solution even if it's interrupted at any time before it ends.
https://en.wikipedia.org/wiki/Random-restart_hill_climbing
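A minimal sketch of hill climbing with restarts on a one-dimensional objective that has a local and a global maximum. The objective, step size, and list of starting points are arbitrary illustrative choices (in practice the restarts would be random):

```python
import math

def f(x):
    # Two bumps: global maximum near x = 2, smaller local maximum near x = -2.
    return math.exp(-(x - 2.0) ** 2) + 0.5 * math.exp(-(x + 2.0) ** 2)

def hill_climb(start, step=0.1):
    """Move to the better neighbor until no neighbor improves on the current point."""
    x = start
    while True:
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            return x  # local optimum: no neighboring configuration is better
        x = best

# Restarts from several starting points; the best local optimum found is kept.
starts = [-4.0, -1.0, 0.5, 3.0]
best_x = max((hill_climb(s) for s in starts), key=f)
```

Climbs started left of the valley stall at the local maximum near −2; the restarts that begin in the other basin reach the global maximum near 2, which the final max selects.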
In numerical analysis, interpolative decomposition (ID) factors a matrix as the product of two matrices, one of which consists of selected columns from the original matrix, while the other contains the identity matrix as a subset of its columns and has no entry greater than 2 in absolute value.
https://en.wikipedia.org/wiki/Interpolative_decomposition
In numerical analysis, inverse iteration (also known as the inverse power method) is an iterative eigenvalue algorithm. It allows one to find an approximate eigenvector when an approximation to a corresponding eigenvalue is already known. The method is conceptually similar to the power method. It appears to have originally been developed to compute resonance frequencies in the field of structural mechanics.
https://en.wikipedia.org/wiki/Inverse_iteration
The inverse power iteration algorithm starts with an approximation μ for the eigenvalue corresponding to the desired eigenvector and a vector b₀, either a randomly selected vector or an approximation to the eigenvector. The method is described by the iteration bₖ₊₁ = (A − μI)⁻¹bₖ / Cₖ, where Cₖ are some constants usually chosen as Cₖ = ‖(A − μI)⁻¹bₖ‖.
https://en.wikipedia.org/wiki/Inverse_iteration
Since eigenvectors are defined up to multiplication by a constant, the choice of Cₖ can be arbitrary in theory; practical aspects of the choice of Cₖ are discussed below. At every iteration, the vector bₖ is multiplied by the matrix (A − μI)⁻¹ and normalized. It is exactly the same formula as in the power method, except replacing the matrix A by (A − μI)⁻¹.
https://en.wikipedia.org/wiki/Inverse_iteration
The closer the approximation μ is to the eigenvalue, the faster the algorithm converges; however, an incorrect choice of μ can lead to slow convergence or to convergence to an eigenvector other than the one desired. In practice, the method is used when a good approximation for the eigenvalue is known, and hence one needs only a few (quite often just one) iterations.
https://en.wikipedia.org/wiki/Inverse_iteration
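A small sketch for a symmetric 2×2 matrix with eigenvalues 1 and 3: with the shift μ = 2.9, the iteration converges to the eigenvector of the eigenvalue closest to μ, namely 3. The matrix, shift, and iteration count are illustrative choices:

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]  # symmetric, eigenvalues 1 and 3
mu = 2.9                       # approximation to the eigenvalue 3

def solve2x2(M, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - rhs[1] * M[0][1]) / det,
            (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det]

S = [[A[0][0] - mu, A[0][1]], [A[1][0], A[1][1] - mu]]  # A - mu*I
b = [1.0, 0.0]
for _ in range(20):
    y = solve2x2(S, b)              # y = (A - mu I)^{-1} b_k, via a linear solve
    norm = math.hypot(y[0], y[1])   # C_k = ||y||
    b = [y[0] / norm, y[1] / norm]  # b_{k+1}: normalized

# The Rayleigh quotient b^T A b approximates the eigenvalue nearest mu.
rayleigh = (b[0] * (A[0][0] * b[0] + A[0][1] * b[1])
            + b[1] * (A[1][0] * b[0] + A[1][1] * b[1]))
```

Note that (A − μI)⁻¹ is never formed explicitly; each step solves a linear system instead, as practical implementations do.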
In numerical analysis, inverse quadratic interpolation is a root-finding algorithm, meaning that it is an algorithm for solving equations of the form f(x) = 0. The idea is to use quadratic interpolation to approximate the inverse of f. This algorithm is rarely used on its own, but it is important because it forms part of the popular Brent's method.
https://en.wikipedia.org/wiki/Inverse_quadratic_interpolation
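A sketch of the bare iteration, without the safeguards Brent's method adds: fit a quadratic in y through the points (f(a), a), (f(b), b), (f(c), c) using the Lagrange form and evaluate it at y = 0. The test equation f(x) = x² − 2 and the starting triple are illustrative:

```python
def iqi_step(f, a, b, c):
    """One inverse quadratic interpolation step: where the interpolated inverse hits y = 0."""
    fa, fb, fc = f(a), f(b), f(c)
    return (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))

f = lambda x: x * x - 2.0
a, b, c = 1.0, 1.5, 2.0
for _ in range(8):
    fa, fb, fc = f(a), f(b), f(c)
    if fa == fb or fb == fc or fa == fc:
        break  # interpolation undefined; a full method (e.g. Brent) falls back to bisection
    a, b, c = b, c, iqi_step(f, a, b, c)
root = c  # converges to sqrt(2)
```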
In numerical analysis, leapfrog integration is a method for numerically integrating differential equations of the form ẍ = A(x), or equivalently of the form ẋ = v, v̇ = A(x), particularly in the case of a dynamical system of classical mechanics. The method is known by different names in different disciplines. In particular, it is similar to the velocity Verlet method, which is a variant of Verlet integration.
https://en.wikipedia.org/wiki/Leapfrog_method
Leapfrog integration is equivalent to updating positions x(t) and velocities v(t) = ẋ(t) at different interleaved time points, staggered in such a way that they "leapfrog" over each other. Leapfrog integration is a second-order method, in contrast to Euler integration, which is only first-order, yet it requires the same number of function evaluations per step. Unlike Euler integration, it is stable for oscillatory motion, as long as the time-step Δt is constant and Δt ≤ 2/ω. Using Yoshida coefficients, applying the leapfrog integrator multiple times with the correct timesteps, a much higher-order integrator can be generated.
https://en.wikipedia.org/wiki/Leapfrog_method
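A sketch of the kick-drift-kick form for the harmonic oscillator ẍ = −x (so ω = 1 and any Δt ≤ 2 is stable); the step size and duration are illustrative. The integrator stays close to the exact orbit and nearly conserves the energy over many steps:

```python
import math

def accel(x):
    return -x  # harmonic oscillator: A(x) = -x, omega = 1

dt = 0.01
x, v = 1.0, 0.0                 # start at rest at x = 1; exact solution x(t) = cos(t)
for _ in range(1000):           # integrate to t = 10
    v += 0.5 * dt * accel(x)    # half kick
    x += dt * v                 # drift
    v += 0.5 * dt * accel(x)    # half kick

energy = 0.5 * (v * v + x * x)  # exact value: 0.5, nearly conserved by leapfrog
position_error = abs(x - math.cos(10.0))
```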
In numerical analysis, matrices from finite element or finite difference problems are often banded. Such matrices can be viewed as descriptions of the coupling between the problem variables; the banded property corresponds to the fact that variables are not coupled over arbitrarily large distances. Such matrices can be further divided – for instance, banded matrices exist where every element in the band is nonzero.
https://en.wikipedia.org/wiki/Bandwidth_(matrix_theory)
These often arise when discretising one-dimensional problems. Problems in higher dimensions also lead to banded matrices, in which case the band itself also tends to be sparse. For instance, a partial differential equation on a square domain (using central differences) will yield a matrix with a bandwidth equal to the square root of the matrix dimension, but inside the band only 5 diagonals are nonzero. Unfortunately, applying Gaussian elimination (or equivalently an LU decomposition) to such a matrix results in the band being filled in by many non-zero elements.
https://en.wikipedia.org/wiki/Bandwidth_(matrix_theory)
In numerical analysis, mortar methods are discretization methods for partial differential equations, which use separate finite element discretization on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. Mortar discretizations lend themselves naturally to solution by iterative domain decomposition methods such as FETI and balancing domain decomposition. In engineering practice in the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints.
https://en.wikipedia.org/wiki/Mortar_methods
In numerical analysis, multi-time-step integration, also referred to as multiple-step or asynchronous time integration, is a numerical time-integration method that uses different time-steps or time-integrators for different parts of the problem. There are different approaches to multi-time-step integration. They are based on domain decomposition and can be classified into strong (monolithic) or weak (staggered) schemes. Using different time-steps or time-integrators in the context of a weak algorithm is rather straightforward, because the numerical solvers operate independently.
https://en.wikipedia.org/wiki/Multi-time-step_integration
However, this is not the case in a strong algorithm. In the past few years a number of research articles have addressed the development of strong multi-time-step algorithms.
https://en.wikipedia.org/wiki/Multi-time-step_integration
In either case, strong or weak, the numerical accuracy and stability need to be carefully studied. Other approaches to multi-time-step integration in the context of operator splitting methods have also been developed, e.g., the multi-rate GARK method and multi-step methods for molecular dynamics simulations.
https://en.wikipedia.org/wiki/Multi-time-step_integration
In numerical analysis, multivariate interpolation is interpolation on functions of more than one variable (multivariate functions); when the variates are spatial coordinates, it is also known as spatial interpolation. The function to be interpolated is known at given points (xᵢ, yᵢ, zᵢ, …) and the interpolation problem consists of yielding values at arbitrary points (x, y, z, …). Multivariate interpolation is particularly important in geostatistics, where it is used to create a digital elevation model from a set of points on the Earth's surface (for example, spot heights in a topographic survey or depths in a hydrographic survey).
https://en.wikipedia.org/wiki/Multivariate_interpolation
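One of the simplest multivariate schemes is bilinear interpolation on a rectangle from values at its four corners; it reproduces any function that is linear in each variable exactly. The sample function below is an illustrative choice:

```python
def bilinear(x, y, x0, x1, y0, y1, f00, f10, f01, f11):
    """Bilinear interpolation on [x0, x1] x [y0, y1]; f_ab is the value at corner (x_a, y_b)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

# Corners of the unit square carrying samples of f(x, y) = 2x + 3y.
value = bilinear(0.25, 0.5, 0.0, 1.0, 0.0, 1.0,
                 f00=0.0, f10=2.0, f01=3.0, f11=5.0)  # exact: 2*0.25 + 3*0.5 = 2.0
```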
In numerical analysis, nested dissection is a divide and conquer heuristic for the solution of sparse symmetric systems of linear equations based on graph partitioning. Nested dissection was introduced by George (1973); the name was suggested by Garrett Birkhoff. Nested dissection consists of the following steps: form an undirected graph in which the vertices represent rows and columns of the system of linear equations, and an edge represents a nonzero entry in the sparse matrix representing the system; then recursively partition the graph into subgraphs using separators, small subsets of vertices the removal of which allows the graph to be partitioned into subgraphs with at most a constant fraction of the number of vertices.
https://en.wikipedia.org/wiki/Nested_dissection
Finally, perform Cholesky decomposition (a variant of Gaussian elimination for symmetric matrices), ordering the elimination of the variables by the recursive structure of the partition: each of the two subgraphs formed by removing the separator is eliminated first, and then the separator vertices are eliminated. As a consequence of this algorithm, the fill-in (the set of nonzero matrix entries created in the Cholesky decomposition that are not part of the input matrix structure) is limited to at most the square of the separator size at each level of the recursive partition. In particular, for planar graphs (frequently arising in the solution of sparse linear systems derived from two-dimensional finite element method meshes) the resulting matrix has O(n log n) nonzeros, due to the planar separator theorem guaranteeing separators of size O(√n). For arbitrary graphs there is a nested dissection that guarantees fill-in within a O(min{√d log⁴ n, m^(1/4) log^(3.5) n}) factor of optimal, where d is the maximum degree and m is the number of non-zeros.
https://en.wikipedia.org/wiki/Nested_dissection
In numerical analysis, numerical differentiation algorithms estimate the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function.
https://en.wikipedia.org/wiki/Adaptive_numerical_differentiation
In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.
https://en.wikipedia.org/wiki/Matrix_eigenvalue_problem
In numerical analysis, one or more guard digits can be used to reduce the amount of roundoff error. For example, suppose that the final result of a long, multi-step calculation can be safely rounded off to N decimal places. That is to say, the roundoff error introduced by this final roundoff makes a negligible contribution to the overall uncertainty. However, it is quite likely that it is not safe to round off the intermediate steps in the calculation to the same number of digits.
https://en.wikipedia.org/wiki/Guard_digit
Be aware that roundoff errors can accumulate. If M decimal places are used in the intermediate calculation, we say there are M−N guard digits. Guard digits are also used in floating point operations in most computer systems.
https://en.wikipedia.org/wiki/Guard_digit
Given 2¹ × 0.100₂ − 2⁰ × 0.111₂, we have to line up the binary points. This means we must add an extra digit to the first operand, a guard digit. This gives us 2¹ × 0.1000₂ − 2¹ × 0.0111₂.
https://en.wikipedia.org/wiki/Guard_digit
Performing this operation gives us 2¹ × 0.0001₂, or 2⁻² × 0.100₂. Without using a guard digit we have 2¹ × 0.100₂ − 2¹ × 0.011₂, yielding 2¹ × 0.001₂, or 2⁻¹ × 0.100₂. This gives us a relative error of 1.
https://en.wikipedia.org/wiki/Guard_digit
Therefore, we can see how important guard digits can be. An example of the error caused by floating point roundoff is illustrated in the following C code. It appears that the program should not terminate.
https://en.wikipedia.org/wiki/Guard_digit
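The original C listing is not reproduced in this excerpt, but the effect it illustrates, a loop whose exit test compares accumulated floating-point values for exact equality, can be sketched as follows; the constants are illustrative:

```python
# Adding 0.1 ten times does not give exactly 1.0 in binary floating point,
# so a loop with the test  while a != 1.0:  could run past its intended stop.
a = 0.0
for _ in range(10):
    a += 0.1
exactly_one = (a == 1.0)  # False: the accumulated sum is 0.9999999999999999
```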
Yet the output is: i=54, a=1.000000. Another example: take the two numbers 2.56 × 10⁰ and 2.34 × 10². We bring the first number to the same power of 10 as the second one: 0.0256 × 10². The addition of the two numbers is:

  0.0256 × 10²
+ 2.3400 × 10²
______________
  2.3656 × 10²

After padding the second number (i.e., 2.34 × 10²) with two 0s, the first extra digit of the sum (5) is the guard digit, and the next digit (6) is the round digit. The result after rounding is 2.37, as opposed to 2.36 without the extra digits (guard and round digits), i.e., by considering only 0.02 + 2.34 = 2.36. The error therefore is 0.01.
https://en.wikipedia.org/wiki/Guard_digit
In numerical analysis, order of accuracy quantifies the rate of convergence of a numerical approximation of a differential equation to the exact solution. Consider u, the exact solution to a differential equation in an appropriate normed space (V, ‖·‖). Consider a numerical approximation uₕ, where h is a parameter characterizing the approximation, such as the step size in a finite difference scheme or the diameter of the cells in a finite element method. The numerical solution uₕ is said to be nth-order accurate if the error E(h) := ‖u − uₕ‖ is proportional to the step-size h to the nth power: E(h) = ‖u − uₕ‖ ≤ Chⁿ, where the constant C is independent of h and usually depends on the solution u.
https://en.wikipedia.org/wiki/Order_of_accuracy
Using the big O notation, an nth-order accurate numerical method is notated as ‖u − uₕ‖ = O(hⁿ). This definition is strictly dependent on the norm used in the space; the choice of norm is fundamental to estimating the rate of convergence and, in general, all numerical errors correctly. The size of the error of a first-order accurate approximation is directly proportional to h. Partial differential equations which vary over both time and space are said to be accurate to order n in time and to order m in space.
https://en.wikipedia.org/wiki/Order_of_accuracy
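The order n can be estimated numerically from two step sizes, since E(h)/E(h/2) ≈ 2ⁿ. The sketch below does this for the central difference quotient, which is second-order accurate; the test function and step size are illustrative:

```python
import math

def central_diff(f, x, h):
    """Central difference, a second-order accurate approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def error(h):
    # E(h) with the exact derivative of sin at x = 1 as reference.
    return abs(central_diff(math.sin, 1.0, h) - math.cos(1.0))

h = 1e-2
observed_order = math.log2(error(h) / error(h / 2))  # should be close to n = 2
```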
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence. Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation. In particular, pairwise summation of a sequence of n numbers xn works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm.
https://en.wikipedia.org/wiki/Pairwise_summation
Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below). In comparison, the naive technique of accumulating the sum in sequence (adding each xi one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn). Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations. If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation. A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.
https://en.wikipedia.org/wiki/Pairwise_summation
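The recursion is only a few lines. This sketch switches to naive summation below a small block size, as practical implementations do to limit recursion overhead; the block size and test data are illustrative:

```python
def pairwise_sum(xs, block=8):
    """Recursively split and add; round-off grows like O(eps log n) instead of O(eps n)."""
    n = len(xs)
    if n <= block:
        s = 0.0
        for v in xs:  # naive base case for short sequences
            s += v
        return s
    mid = n // 2
    return pairwise_sum(xs[:mid], block) + pairwise_sum(xs[mid:], block)

total = pairwise_sum([0.1] * 1000)  # exact value: 100.0
```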
In numerical analysis, polynomial interpolation is the interpolation of a given bivariate data set by the polynomial of lowest possible degree that passes through the points of the dataset. Given a set of n + 1 data points (x₀, y₀), …, (xₙ, yₙ), with no two xⱼ the same, a polynomial function p(x) = a₀ + a₁x + ⋯ + aₙxⁿ is said to interpolate the data if p(xⱼ) = yⱼ for each j ∈ {0, 1, …, n}. There is always a unique such polynomial, commonly given by two explicit formulas, the Lagrange polynomials and Newton polynomials.
https://en.wikipedia.org/wiki/Interpolating_polynomial
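A direct sketch of the Lagrange form: p(x) = Σⱼ yⱼ ℓⱼ(x) with ℓⱼ(x) = Π_{m≠j} (x − xₘ)/(xⱼ − xₘ). Through three samples of y = x² it reproduces the parabola exactly, since the interpolant of degree ≤ 2 is unique; the sample points are illustrative:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the unique interpolating polynomial at x using the Lagrange form."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)  # Lagrange basis polynomial l_j(x)
        total += yj * basis
    return total

# Three points on y = x^2; the degree-2 interpolant is x^2 itself.
p = lagrange_interpolate([0.0, 1.0, 3.0], [0.0, 1.0, 9.0], 2.5)  # exact: 2.5**2 = 6.25
```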
In numerical analysis, predictor–corrector methods belong to a class of algorithms designed to integrate ordinary differential equations – to find an unknown function that satisfies a given differential equation. All such algorithms proceed in two steps. The initial "prediction" step starts from a function fitted to the function-values and derivative-values at a preceding set of points to extrapolate ("anticipate") this function's value at a subsequent, new point. The next "corrector" step refines the initial approximation by using the predicted value of the function and another method to interpolate that unknown function's value at the same subsequent point.
https://en.wikipedia.org/wiki/Predictor-corrector_method
In numerical analysis, stochastic tunneling (STUN) is an approach to global optimization based on Monte Carlo sampling of the objective function to be minimized, in which the function is nonlinearly transformed to allow for easier tunneling among regions containing function minima. Easier tunneling allows for faster exploration of sample space and faster convergence to a good solution.
https://en.wikipedia.org/wiki/Stochastic_tunneling
In numerical analysis, the Cash–Karp method is a method for solving ordinary differential equations (ODEs). It was proposed by Professor Jeff R. Cash from Imperial College London and Alan H. Karp from IBM Scientific Center. The method is a member of the Runge–Kutta family of ODE solvers.
https://en.wikipedia.org/wiki/Cash–Karp_method
More specifically, it uses six function evaluations to calculate fourth- and fifth-order accurate solutions. The difference between these solutions is then taken to be the error of the (fourth-order) solution. This error estimate is very convenient for adaptive stepsize integration algorithms. Other similar integration methods are Fehlberg (RKF) and Dormand–Prince (RKDP). In the method's Butcher tableau, the first row of b coefficients gives the fifth-order accurate solution, and the second row gives the fourth-order solution.
https://en.wikipedia.org/wiki/Cash–Karp_method
In numerical analysis, the Clenshaw algorithm, also called Clenshaw summation, is a recursive method to evaluate a linear combination of Chebyshev polynomials. The method was published by Charles William Clenshaw in 1955. It is a generalization of Horner's method for evaluating a linear combination of monomials. It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.
https://en.wikipedia.org/wiki/Clenshaw_algorithm
In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable.
https://en.wikipedia.org/wiki/Crank-Nicolson_method
The method was developed by John Crank and Phyllis Nicolson in the mid-20th century. For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of the time step Δt times the thermal diffusivity to the square of the space step Δx² is large (typically, larger than 1/2 per Von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution are necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations.
https://en.wikipedia.org/wiki/Crank-Nicolson_method
In numerical analysis, the Dormand–Prince (RKDP) method or DOPRI method, is an embedded method for solving ordinary differential equations (ODE). The method is a member of the Runge–Kutta family of ODE solvers. More specifically, it uses six function evaluations to calculate fourth- and fifth-order accurate solutions.
https://en.wikipedia.org/wiki/Dormand–Prince_method
The difference between these solutions is then taken to be the error of the (fourth-order) solution. This error estimate is very convenient for adaptive stepsize integration algorithms. Other similar integration methods are Fehlberg (RKF) and Cash–Karp (RKCK).
https://en.wikipedia.org/wiki/Dormand–Prince_method
The Dormand–Prince method has seven stages, but it uses only six function evaluations per step because it has the "First Same As Last" (FSAL) property: the last stage is evaluated at the same point as the first stage of the next step. Dormand and Prince chose the coefficients of their method to minimize the error of the fifth-order solution. This is the main difference with the Fehlberg method, which was constructed so that the fourth-order solution has a small error. For this reason, the Dormand–Prince method is more suitable when the higher-order solution is used to continue the integration, a practice known as local extrapolation.
https://en.wikipedia.org/wiki/Dormand–Prince_method
In numerical analysis, the FTCS (forward time-centered space) method is a finite difference method used for numerically solving the heat equation and similar parabolic partial differential equations. It is a first-order method in time, explicit in time, and is conditionally stable when applied to the heat equation. When used as a method for advection equations, or more generally hyperbolic partial differential equations, it is unstable unless artificial viscosity is included. The abbreviation FTCS was first used by Patrick Roache.
https://en.wikipedia.org/wiki/FTCS_scheme
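The conditional stability can be demonstrated with a minimal sketch of the explicit update (r = αΔt/Δx² must stay at or below 1/2 for the heat equation; names are illustrative):

```python
def ftcs_step(u, r):
    # One explicit update u_i <- u_i + r*(u_{i-1} - 2*u_i + u_{i+1})
    # with zero Dirichlet boundaries; r = alpha*dt/dx^2.
    n = len(u)
    return [u[i] + r * ((u[i - 1] if i > 0 else 0.0) - 2 * u[i]
                        + (u[i + 1] if i < n - 1 else 0.0))
            for i in range(n)]
```

Iterating from a point disturbance, r = 0.5 decays smoothly while r = 0.6 blows up, since the highest spatial mode is then amplified by a factor exceeding 1 in magnitude every step.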
In numerical analysis, the ITP method (short for Interpolate, Truncate and Project) is the first root-finding algorithm that achieves the superlinear convergence of the secant method while retaining the optimal worst-case performance of the bisection method. It is also the first method with guaranteed average performance strictly better than the bisection method under any continuous distribution. In practice it performs better than traditional interpolation-based and hybrid strategies (Brent's method, Ridders' method, the Illinois algorithm), since it not only converges superlinearly over well-behaved functions but also guarantees fast performance under ill-behaved functions where interpolations fail. The ITP method follows the same structure as standard bracketing strategies that keep track of upper and lower bounds for the location of the root, but it also keeps track of the region where worst-case performance is kept upper-bounded.
https://en.wikipedia.org/wiki/ITP_Method
As a bracketing strategy, in each iteration the ITP method queries the value of the function at one point and discards the part of the interval between two points where the function value shares the same sign. The queried point is calculated in three steps: first it interpolates, finding the regula falsi estimate; then it perturbs/truncates the estimate (similar to Regula falsi § Improvements in regula falsi); and then it projects the perturbed estimate onto an interval in the neighbourhood of the bisection midpoint. The neighbourhood around the bisection point is calculated in each iteration in order to guarantee minmax optimality (Theorem 2.1). The method depends on three hyper-parameters κ₁ ∈ (0, ∞), κ₂ ∈ [1, 1 + φ) and n₀ ∈ [0, ∞), where φ = (1 + √5)/2 is the golden ratio: the first two control the size of the truncation and the third is a slack variable that controls the size of the interval for the projection step.
https://en.wikipedia.org/wiki/ITP_Method
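The three steps can be sketched as follows. This follows the published description of the iteration, but the hyper-parameter values are illustrative defaults rather than tuned recommendations:

```python
import math

def itp(f, a, b, eps=1e-10, k1=0.1, k2=2.0, n0=1):
    # Sketch of the ITP iteration; k1, k2, n0 are illustrative defaults.
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    n_half = math.ceil(math.log2((b - a) / (2 * eps)))
    n_max = n_half + n0
    j = 0
    while b - a > 2 * eps:
        x_half = (a + b) / 2
        r = eps * 2 ** (n_max - j) - (b - a) / 2   # minmax projection radius
        # Interpolate: the regula falsi estimate.
        x_f = (fb * a - fa * b) / (fb - fa)
        # Truncate: perturb the estimate toward the midpoint.
        sigma = 1 if x_half > x_f else -1
        delta = k1 * (b - a) ** k2
        x_t = x_f + sigma * delta if delta <= abs(x_half - x_f) else x_half
        # Project onto the interval guaranteeing worst-case performance.
        x_itp = x_t if abs(x_t - x_half) <= r else x_half - sigma * r
        y = f(x_itp)
        if y * fa > 0:
            a, fa = x_itp, y
        elif y * fb > 0:
            b, fb = x_itp, y
        else:
            return x_itp
        j += 1
    return (a + b) / 2
```

The projection radius r shrinks by a factor of two per iteration, which is what bounds the total number of iterations by n_half + n₀, the bisection count plus the slack.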
In numerical analysis, the Kahan summation algorithm, also known as compensated summation, significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors), in effect extending the precision of the sum by the precision of the compensation variable. In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk). With compensated summation, using a compensation variable with sufficiently high precision, the worst-case error bound is effectively independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision of the result. The algorithm is attributed to William Kahan; Ivo Babuška seems to have come up with a similar algorithm independently (hence Kahan–Babuška summation). Similar, earlier techniques are, for example, Bresenham's line algorithm, keeping track of the accumulated error in integer operations (although first documented around the same time), and delta-sigma modulation.
https://en.wikipedia.org/wiki/Kahan_summation
In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data. Given a data set of coordinate pairs (x_j, y_j) with 0 ≤ j ≤ k, the x_j are called nodes and the y_j are called values. The Lagrange polynomial L(x) has degree ≤ k and assumes each value at the corresponding node, L(x_j) = y_j.
https://en.wikipedia.org/wiki/Lagrange_form
Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler. Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration, Shamir's secret sharing scheme in cryptography, and Reed–Solomon error correction in coding theory. For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.
https://en.wikipedia.org/wiki/Lagrange_form
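A direct evaluation of the Lagrange form L(x) = Σ_j y_j Π_{m≠j} (x − x_m)/(x_j − x_m) can be sketched as:

```python
def lagrange_eval(xs, ys, x):
    # Evaluate the interpolating polynomial in Lagrange form at x:
    # L(x) = sum_j y_j * prod_{m != j} (x - x_m) / (x_j - x_m).
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)
        total += yj * basis
    return total
```

Interpolating three points of y = x² + x + 1 and evaluating off the nodes reproduces the quadratic exactly, since the degree-2 interpolant through three points is unique.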
In numerical analysis, the Lax equivalence theorem is a fundamental theorem in the analysis of finite difference methods for the numerical solution of partial differential equations. It states that for a consistent finite difference method for a well-posed linear initial value problem, the method is convergent if and only if it is stable. The importance of the theorem is that while the convergence of the solution of the finite difference method to the solution of the partial differential equation is what is desired, it is ordinarily difficult to establish because the numerical method is defined by a recurrence relation while the differential equation involves a differentiable function. However, consistency—the requirement that the finite difference method approximates the correct partial differential equation—is straightforward to verify, and stability is typically much easier to show than convergence (and would be needed in any event to show that round-off error will not destroy the computation). Hence convergence is usually shown via the Lax equivalence theorem.
https://en.wikipedia.org/wiki/Lax–Richtmyer_theorem
Stability in this context means that a matrix norm of the matrix used in the iteration is at most unity, called (practical) Lax–Richtmyer stability. Often a von Neumann stability analysis is substituted for convenience, although von Neumann stability only implies Lax–Richtmyer stability in certain cases.
https://en.wikipedia.org/wiki/Lax–Richtmyer_theorem
This theorem is due to Peter Lax. It is sometimes called the Lax–Richtmyer theorem, after Peter Lax and Robert D. Richtmyer.
https://en.wikipedia.org/wiki/Lax–Richtmyer_theorem
In numerical analysis, the Newton–Cotes formulas, also called the Newton–Cotes quadrature rules or simply Newton–Cotes rules, are a group of formulas for numerical integration (also called quadrature) based on evaluating the integrand at equally spaced points. They are named after Isaac Newton and Roger Cotes. Newton–Cotes formulas can be useful if the value of the integrand at equally spaced points is given. If it is possible to change the points at which the integrand is evaluated, then other methods such as Gaussian quadrature and Clenshaw–Curtis quadrature are probably more suitable.
https://en.wikipedia.org/wiki/Quadrature_formula
In numerical analysis, the Peano kernel theorem is a general result on error bounds for a wide class of numerical approximations (such as numerical quadratures), defined in terms of linear functionals. It is attributed to Giuseppe Peano.
https://en.wikipedia.org/wiki/Peano_kernel_theorem
In numerical analysis, the Runge–Kutta methods (English: RUUNG-ə-KUUT-tah) are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of ordinary differential equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta.
https://en.wikipedia.org/wiki/Butcher_tableau
In numerical analysis, the Schur complement method, named after Issai Schur, is the basic and earliest version of the non-overlapping domain decomposition method, also called iterative substructuring. A finite element problem is split into non-overlapping subdomains, and the unknowns in the interiors of the subdomains are eliminated. The remaining Schur complement system on the unknowns associated with subdomain interfaces is solved by the conjugate gradient method.
https://en.wikipedia.org/wiki/Schur_complement_method
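A toy sketch on a made-up 3×3 SPD system (two interior unknowns, one interface unknown) shows the elimination and the reduced interface solve; a direct solve stands in for the conjugate gradient iteration used on realistic problems:

```python
# Hypothetical 3x3 SPD system, made up for illustration: unknowns 0 and 1
# belong to subdomain interiors, unknown 2 sits on the interface.
A_II = [[4.0, 1.0], [1.0, 3.0]]   # interior-interior block
A_IG = [1.0, 1.0]                 # interior-interface coupling (one column)
A_GG = 2.0                        # interface block (1x1 here)
b_I, b_G = [1.0, 2.0], 3.0

def solve2(M, r):
    # Direct 2x2 solve (Cramer's rule) stands in for a subdomain solver.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

# Eliminate the interior unknowns: S = A_GG - A_GI A_II^{-1} A_IG.
w = solve2(A_II, A_IG)
S = A_GG - sum(g * wi for g, wi in zip(A_IG, w))
# Reduced right-hand side, then the interface solve (CG in the real method).
g_rhs = b_G - sum(g * vi for g, vi in zip(A_IG, solve2(A_II, b_I)))
x_G = g_rhs / S
# Back-substitute for the interior unknowns.
x_I = solve2(A_II, [bi - ai * x_G for bi, ai in zip(b_I, A_IG)])
```

The point of the method is that S is never formed explicitly at scale: only its action on a vector is needed, which requires one interior solve per subdomain per CG iteration.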
In numerical analysis, the Shanks transformation is a non-linear series acceleration method to increase the rate of convergence of a sequence. This method is named after Daniel Shanks, who rediscovered this sequence transformation in 1955. It was first derived and published by R. Schmidt in 1941.
https://en.wikipedia.org/wiki/Shanks_transformation
In numerical analysis, the Weierstrass method or Durand–Kerner method, discovered by Karl Weierstrass in 1891 and rediscovered independently by Durand in 1960 and Kerner in 1966, is a root-finding algorithm for solving polynomial equations. In other words, the method can be used to solve numerically the equation f(x) = 0, where f is a given polynomial, which can be taken to be scaled so that the leading coefficient is 1.
https://en.wikipedia.org/wiki/Durand–Kerner_method
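A sketch of the simultaneous iteration for a monic polynomial, using the customary starting guesses — powers of 0.4 + 0.9i (iteration count and tolerances are illustrative):

```python
def durand_kerner(coeffs, iterations=100):
    # coeffs are monic polynomial coefficients [1, c_{n-1}, ..., c_0].
    n = len(coeffs) - 1

    def p(x):
        v = 0j
        for c in coeffs:
            v = v * x + c   # Horner evaluation
        return v

    # Customary starting guesses: powers of 0.4 + 0.9i.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        new = []
        for i, r in enumerate(roots):
            denom = 1.0 + 0j
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            # Newton-like update with the other approximants as known roots.
            new.append(r - p(r) / denom)
        roots = new
    return roots
```

Each update treats the other current approximations as if they were the true remaining roots, so all roots are refined simultaneously.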
In numerical analysis, the balancing domain decomposition method (BDD) is an iterative method to find the solution of a symmetric positive definite system of linear algebraic equations arising from the finite element method. In each iteration, it combines the solution of local problems on non-overlapping subdomains with a coarse problem created from the subdomain nullspaces. BDD requires only solution of subdomain problems rather than access to the matrices of those problems, so it is applicable to situations where only the solution operators are available, such as in oil reservoir simulation by mixed finite elements.
https://en.wikipedia.org/wiki/Balancing_domain_decomposition_method
In its original formulation, BDD performs well only for second-order problems, such as elasticity in 2D and 3D. For fourth-order problems, such as plate bending, it needs to be modified by adding to the coarse problem special basis functions that enforce continuity of the solution at subdomain corners, which, however, makes it more expensive. The BDDC method uses the same corner basis functions, but in an additive rather than multiplicative fashion.
https://en.wikipedia.org/wiki/Balancing_domain_decomposition_method
The dual counterpart to BDD is FETI, which enforces the equality of the solution between the subdomains by Lagrange multipliers. The base versions of BDD and FETI are not mathematically equivalent, though a special version of FETI designed to be robust for hard problems has the same eigenvalues and thus essentially the same performance as BDD. The operator of the system solved by BDD is the same as that obtained by eliminating the unknowns in the interiors of the subdomains, thus reducing the problem to the Schur complement on the subdomain interfaces. Since the BDD preconditioner involves the solution of Neumann problems on all subdomains, it is a member of the Neumann–Neumann class of methods, so named because they solve a Neumann problem on both sides of the interface between subdomains. In the simplest case, the coarse space of BDD consists of functions constant on each subdomain and averaged on the interfaces. More generally, on each subdomain, the coarse space only needs to contain the nullspace of the problem as a subspace.
https://en.wikipedia.org/wiki/Balancing_domain_decomposition_method
In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given f(x) = y, one is solving for x, and thus the condition number of the (local) inverse must be used. The condition number is derived from the theory of propagation of uncertainty, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem.
https://en.wikipedia.org/wiki/Ill-conditioned_matrix
The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables.
https://en.wikipedia.org/wiki/Ill-conditioned_matrix
A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. In non-mathematical terms, an ill-conditioned problem is one where, for a small change in the inputs (the independent variables) there is a large change in the answer or dependent variable. This means that the correct solution/answer to the equation becomes hard to find.
https://en.wikipedia.org/wiki/Ill-conditioned_matrix
The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property called backward stability; in general, a backward stable algorithm can be expected to accurately solve well-conditioned problems.
https://en.wikipedia.org/wiki/Ill-conditioned_matrix
Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms. As a rule of thumb, if the condition number κ(A) = 10^k, then you may lose up to k digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods. However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm to measure the inaccuracy).
https://en.wikipedia.org/wiki/Ill-conditioned_matrix
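The rule of thumb can be illustrated on a deliberately ill-conditioned 2×2 system (the matrix and right-hand sides are made up for illustration):

```python
# A deliberately ill-conditioned 2x2 system: the rows of A are nearly
# parallel, so solving A x = b is very sensitive to changes in b.
A = [[1.0, 1.0], [1.0, 1.0001]]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def norm_inf(M):
    # Maximum absolute row sum (the matrix infinity-norm).
    return max(abs(row[0]) + abs(row[1]) for row in M)

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

kappa = norm_inf(A) * norm_inf(inv2(A))   # about 4e4: expect ~4-5 lost digits
x = matvec(inv2(A), [2.0, 2.0001])        # solution [1, 1]
x_pert = matvec(inv2(A), [2.0, 2.0002])   # tiny change in b gives [0, 2]
```

A relative perturbation of b of about 5×10⁻⁵ produces an O(1) relative change in the solution, consistent with κ ≈ 4×10⁴.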
In numerical analysis, the interval finite element method (interval FEM) is a finite element method that uses interval parameters. Interval FEM can be applied in situations where it is not possible to get reliable probabilistic characteristics of the structure. This is important in concrete structures, wood structures, geomechanics, composite structures, biomechanics and in many other areas.
https://en.wikipedia.org/wiki/Interval_FEM
The goal of the interval finite element method is to find upper and lower bounds of different characteristics of the model (e.g. stress, displacements, yield surface, etc.) and use these results in the design process. This is the so-called worst-case design, which is closely related to limit state design. Worst-case design requires less information than probabilistic design; however, the results are more conservative.
https://en.wikipedia.org/wiki/Interval_FEM
In numerical analysis, the local linearization (LL) method is a general strategy for designing numerical integrators for differential equations based on a local (piecewise) linearization of the given equation on consecutive time intervals. The numerical integrators are then iteratively defined as the solution of the resulting piecewise linear equation at the end of each consecutive interval. The LL method has been developed for a variety of equations, such as ordinary, delayed, random and stochastic differential equations. The LL integrators are a key component in the implementation of inference methods for the estimation of unknown parameters and unobserved variables of differential equations given time series of (potentially noisy) observations. The LL schemes are well suited to dealing with complex models in a variety of fields, such as neuroscience, finance, forestry management, control engineering, mathematical statistics, etc.
https://en.wikipedia.org/wiki/Local_linearization_method
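For a scalar autonomous ODE y′ = f(y), one local linearization step replaces f by its tangent at the current iterate and advances with the exact solution of that linear equation; a minimal sketch (this scalar form is often called exponential Euler):

```python
import math

def ll_step(f, df, y, h):
    # Linearize y' = f(y) at y_n: y' ~ f(y_n) + f'(y_n)*(y - y_n), then take
    # the exact solution of this linear ODE at t_n + h as the next iterate.
    a = df(y)
    if abs(a) < 1e-14:
        return y + h * f(y)              # linear part vanishes: Euler step
    return y + (math.expm1(a * h) / a) * f(y)
```

On a genuinely linear problem such as y′ = −y the scheme is exact up to roundoff, since the local linearization coincides with the equation itself.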
In numerical analysis, the minimum degree algorithm is an algorithm used to permute the rows and columns of a symmetric sparse matrix before applying the Cholesky decomposition, to reduce the number of non-zeros in the Cholesky factor. This results in reduced storage requirements and means that the Cholesky factor can be applied with fewer arithmetic operations. (Sometimes it may also pertain to an incomplete Cholesky factor used as a preconditioner—for example, in the preconditioned conjugate gradient algorithm.) Minimum degree algorithms are often used in the finite element method where the reordering of nodes can be carried out depending only on the topology of the mesh, rather than on the coefficients in the partial differential equation, resulting in efficiency savings when the same mesh is used for a variety of coefficient values.
https://en.wikipedia.org/wiki/Minimum_degree_algorithm
Given a linear system Ax = b, where A is an n × n real symmetric sparse matrix, the Cholesky factor L will typically suffer "fill-in", that is, have more non-zeros than the upper triangle of A. We seek a permutation matrix P so that the matrix P^T A P, which is also symmetric, has the least possible fill-in in its Cholesky factor. We solve the reordered system (P^T A P)(P^T x) = P^T b.
https://en.wikipedia.org/wiki/Minimum_degree_algorithm
The problem of finding the best ordering is an NP-complete problem and is thus intractable, so heuristic methods are used instead.
https://en.wikipedia.org/wiki/Minimum_degree_algorithm
The minimum degree algorithm is derived from a method first proposed by Markowitz in 1959 for non-symmetric linear programming problems, which can be loosely described as follows. At each step in Gaussian elimination, row and column permutations are performed so as to minimize the number of off-diagonal non-zeros in the pivot row and column. A symmetric version of Markowitz's method was described by Tinney and Walker in 1967, and Rose later derived a graph-theoretic version of the algorithm in which the factorization is only simulated; this was named the minimum degree algorithm.
https://en.wikipedia.org/wiki/Minimum_degree_algorithm
The graph referred to is the graph with n vertices, with vertices i and j connected by an edge when a_ij ≠ 0, and the degree is the degree of the vertices. A crucial aspect of such algorithms is a tie-breaking strategy when there is a choice of renumbering resulting in the same degree. A version of the minimum degree algorithm was implemented in the MATLAB function symmmd (where MMD stands for multiple minimum degree), but has now been superseded by a symmetric approximate multiple minimum degree function symamd, which is faster. This is confirmed by theoretical analysis, which shows that for graphs with n vertices and m edges, MMD has a tight upper bound of O(n²m) on its running time, whereas for AMD a tight bound of O(nm) holds. Cummings, Fahrbach, and Fatehpuria designed an exact minimum degree algorithm with O(nm) running time, and showed that no such algorithm can exist that runs in time O(nm^(1−ε)) for any ε > 0, assuming the strong exponential time hypothesis.
https://en.wikipedia.org/wiki/Minimum_degree_algorithm
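A greedy sketch of the simulated elimination on an adjacency-set graph (tie-breaking by vertex label is one illustrative choice among many):

```python
def minimum_degree_order(adj):
    # adj maps each vertex to the set of its neighbours in the symmetric
    # sparsity graph of A.  Repeatedly eliminate a minimum-degree vertex;
    # elimination joins its neighbours into a clique (the simulated fill-in).
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # defensive copy
    order = []
    while adj:
        # Pick a vertex of minimum degree, breaking ties by label.
        v = min(adj, key=lambda u: (len(adj[u]), u))
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u in nbrs:                  # clique the former neighbourhood
            for w in nbrs:
                if u != w:
                    adj[u].add(w)
        order.append(v)
    return order
```

On a star graph, a leaf is eliminated first and the hub is deferred, which is exactly the behaviour that keeps fill-in small: eliminating the hub first would connect all leaves into a clique.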
In numerical analysis, the mixed finite element method is a type of finite element method in which extra fields to be solved are introduced during the posing of a partial differential equation problem. Somewhat related is the hybrid finite element method. The extra fields are constrained by using Lagrange multiplier fields. To distinguish them from the mixed finite element method, usual finite element methods that do not introduce such extra fields are also called irreducible or primal finite element methods.
https://en.wikipedia.org/wiki/Mixed_finite_element_method
The mixed finite element method is efficient for some problems that would be numerically ill-posed if discretized by using the irreducible finite element method; one example of such problems is the computation of the stress and strain fields in an almost incompressible elastic body. In mixed methods, the Lagrange multiplier fields live inside the elements, usually enforcing the applicable partial differential equations. This results in a saddle point system with negative pivots and eigenvalues, rendering the system matrix indefinite, which complicates its solution.
https://en.wikipedia.org/wiki/Mixed_finite_element_method
In sparse direct solvers, pivoting may be needed, and the resulting matrix ultimately has 2×2 blocks on the diagonal, rather than working towards a completely pure LL^H Cholesky decomposition for positive definite symmetric or Hermitian systems. Pivoting may result in unpredictable memory usage increases. Among iterative solvers, only GMRES-based solvers work, rather than the slightly "cheaper" MINRES-based solvers.
https://en.wikipedia.org/wiki/Mixed_finite_element_method
In hybrid methods, the Lagrange multiplier fields represent jumps of fields between elements, living on the element boundaries and weakly enforcing continuity; continuity no longer needs to be enforced through degrees of freedom shared between elements. Both mixing and hybridization can be applied simultaneously. These enforcements are "weak", i.e. they hold for the solutions, possibly only at some points or through matching moment integral conditions, rather than "strong", in which case the conditions are fulfilled directly by the type of solutions sought. Apart from the harmonics (usually semi-trivial local solutions to the homogeneous equations at zero loads), hybridization allows for static Guyan condensation of the discontinuous fields internal to the elements, reducing the number of degrees of freedom and, moreover, reducing or eliminating the negative eigenvalues and pivots resulting from application of the mixed method.
https://en.wikipedia.org/wiki/Mixed_finite_element_method
In numerical analysis, the order of convergence and the rate of convergence of a convergent sequence are quantities that represent how quickly the sequence approaches its limit. A sequence (x_n) that converges to x* is said to have order of convergence q ≥ 1 and rate of convergence μ if lim_(n→∞) |x_(n+1) − x*| / |x_n − x*|^q = μ. The rate of convergence μ is also called the asymptotic error constant.
https://en.wikipedia.org/wiki/Linear_convergence
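The definition can be checked numerically. For Newton's method applied to f(x) = x² − 2, the order is q = 2 and the asymptotic constant is μ = |f″(x*)/(2f′(x*))| = 1/(2√2) ≈ 0.3536:

```python
import math

# Newton's method for sqrt(2) converges quadratically: the ratio
# e_{n+1} / e_n^2 should approach mu = 1 / (2 * sqrt(2)).
x = 1.0
iterates = [x]
for _ in range(4):
    x = (x + 2.0 / x) / 2.0   # Newton step for f(x) = x^2 - 2
    iterates.append(x)
errors = [abs(v - math.sqrt(2)) for v in iterates]
ratio = errors[3] / errors[2] ** 2   # empirical estimate of mu
```

After only a few iterations the empirical ratio is already within one percent of the theoretical constant, while the errors themselves drop from 0.41 to roughly 10⁻¹².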
Note that this terminology is not standardized, and some authors use rate where this article uses order. In practice, the rate and order of convergence provide useful insights when using iterative methods for calculating numerical approximations.
https://en.wikipedia.org/wiki/Linear_convergence
If the order of convergence is higher, then typically fewer iterations are necessary to yield a useful approximation. Strictly speaking, however, the asymptotic behavior of a sequence does not give conclusive information about any finite part of the sequence. Similar concepts are used for discretization methods.
https://en.wikipedia.org/wiki/Linear_convergence
The solution of the discretized problem converges to the solution of the continuous problem as the grid size goes to zero, and the speed of convergence is one of the factors of the efficiency of the method. However, the terminology, in this case, is different from the terminology for iterative methods. Series acceleration is a collection of techniques for improving the rate of convergence of a series. Such acceleration is commonly accomplished with sequence transformations.
https://en.wikipedia.org/wiki/Linear_convergence
In numerical analysis, the quasi-Monte Carlo method is a method for numerical integration and solving some other problems using low-discrepancy sequences (also called quasi-random sequences or sub-random sequences). This is in contrast to the regular Monte Carlo method or Monte Carlo integration, which are based on sequences of pseudorandom numbers. Monte Carlo and quasi-Monte Carlo methods are stated in a similar way.
https://en.wikipedia.org/wiki/Quasi-Monte_Carlo_method
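A minimal sketch using the base-2 van der Corput sequence (the simplest low-discrepancy sequence) to estimate the integral of x² over [0, 1]:

```python
def van_der_corput(n, base=2):
    # Radical inverse: mirror the base-b digits of n about the radix point,
    # e.g. n = 3 = 11 in base 2 maps to 0.11 in base 2 = 0.75.
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

# Quasi-Monte Carlo estimate of the integral of x^2 over [0, 1] (exact: 1/3).
N = 1024
estimate = sum(van_der_corput(i) ** 2 for i in range(N)) / N
```

Because the points fill the interval far more evenly than pseudorandom samples, the integration error decays roughly like O(log N / N) rather than the O(1/√N) of plain Monte Carlo.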