will give a result of 16331239353195370.0. In single precision, the result will be −22877332.0.
By the same token, an attempted computation of sin(π) will not yield zero. The result will be approximately 0.1225×10^−15 in double precision, or −0.8742×10^−7 in single precision.
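This is easy to observe in Python, whose floats are IEEE 754 doubles:

```python
import math

# math.pi is the nearest representable double to π, not π itself, so
# math.sin(math.pi) returns the sine of that nearby number: tiny but nonzero.
residual = math.sin(math.pi)
print(residual)  # on the order of 1e-16
```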
While floating-point addition and multiplication are both commutative, they are not necessarily associative. That is, (a + b) + c is not necessarily equal to a + (b + c). Using 7-digit significand decimal arithmetic:
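The 7-digit decimal example is elided above, but the same failure of associativity is visible in binary double precision:

```python
# Each intermediate result is rounded, so the grouping changes the answer.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left, right)  # 0.6000000000000001 vs 0.6
```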
They are also not necessarily distributive. That is, (a + b) × c may not be the same as a × c + b × c:
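A double-precision instance of the same failure of distributivity:

```python
# (a + b) * c rounds the sum first; a*c + b*c rounds each product first.
left = (0.1 + 0.2) * 10.0   # 3.0000000000000004
right = 0.1 * 10.0 + 0.2 * 10.0  # 3.0
print(left, right)
```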
In addition to loss of significance, inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, the following phenomena may occur:
Machine precision is a quantity that characterizes the accuracy of a floating-point system, and is used in backward error analysis of floating-point algorithms. It is also known as unit roundoff or machine epsilon. Usually denoted εmach, its value depends on the particular rounding being used.
With rounding to zero,
This is important since it bounds the relative error in representing any non-zero real number x within the normalized range of a floating-point system:
Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable. The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as backward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem.
by definition, which is the sum of two slightly perturbed input data, and so is backward stable. For more realistic examples in numerical linear algebra, see Higham 2002 and other references below.
Although individual arithmetic operations of IEEE 754 are guaranteed accurate to within half a ULP, more complicated formulae can suffer from larger errors for a variety of reasons. The loss of accuracy can be substantial if a problem or its data are ill-conditioned, meaning that the correct result is hypersensitive to tiny perturbations in its data. However, even functions that are well-conditioned can suffer from large loss of accuracy if an algorithm that is numerically unstable for that data is used: apparently equivalent formulations of expressions in a programming language can differ markedly in their numerical stability. One approach to remove the risk of such loss of accuracy is the design and analysis of numerically stable algorithms, which is an aim of the branch of mathematics known as numerical analysis. Another approach that can protect against the risk of numerical instabilities is the computation of intermediate values in an algorithm at a higher precision than the final result requires, which can remove, or reduce by orders of magnitude, such risk: IEEE 754 quadruple precision and extended precision are designed for this purpose when computing at double precision.
For example, the following algorithm is a direct implementation to compute the function A(x) = (x − 1) / (exp(x − 1) − 1), which is well-conditioned at 1.0; however, it can be shown to be numerically unstable and lose up to half the significant digits carried by the arithmetic when computed near 1.0.
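The exact listing for A is elided above, but the same kind of instability appears in the closely related classic task of computing exp(y) − 1 near y = 0, where the naive formula cancels catastrophically and a dedicated library routine does not:

```python
import math

y = 1e-10
naive = math.exp(y) - 1.0   # exp(y) rounds near 1.0, then cancellation destroys digits
stable = math.expm1(y)      # library routine computed without the cancellation
# The true value is y + y**2/2 + ... ≈ 1.00000000005e-10.
print(naive, stable)
```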
If, however, intermediate computations are all performed in extended precision, then up to full precision in the final double result can be maintained. Alternatively, a numerical analysis of the algorithm reveals that if the following non-obvious change is made:
then the algorithm becomes numerically stable and can compute to full double precision.
To maintain the properties of such carefully constructed numerically stable programs, careful handling by the compiler is required. Certain "optimizations" that compilers might make can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area: C99 is an example of a language where such optimizations are carefully specified to maintain numerical precision. See the external references at the bottom of this article.
A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article, and the reader is referred to the references at the bottom of this article. Kahan suggests several rules of thumb that can substantially decrease by orders of magnitude the risk of numerical anomalies, in addition to, or in lieu of, a more careful numerical analysis. These include: as noted above, computing all expressions and intermediate results in the highest precision supported in hardware; and rounding input data and results to only the precision required and supported by the input data. Brief descriptions of several additional issues and techniques follow.
As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales, and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact. An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation. The "decimal" data type of the C# and Python programming languages, and the decimal formats of the IEEE 754-2008 standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and make the arithmetic always behave as expected when numbers are printed in decimal.
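A brief illustration of the contrast in Python, whose decimal module implements exact decimal arithmetic:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1, 0.2, or 0.3 exactly:
binary_ok = (0.1 + 0.2 == 0.3)                                    # False
# Decimal stores human-entered decimal values exactly:
decimal_ok = (Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
print(binary_ok, decimal_ok)
```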
The use of the equality test (if (x == y) ...) requires care when dealing with floating-point numbers. Even simple expressions like 0.6/0.2 - 3 == 0 will, on most computers, fail to be true. Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x - y) < epsilon) ...), where epsilon is sufficiently small and tailored to the application, such as 1.0E−13. The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon. Values derived from the primary data representation and their comparisons should be performed in a wider, extended, precision to minimize the risk of such inconsistencies due to round-off errors. It is often better to organize the code in such a way that such tests are unnecessary. For example, in computational geometry, exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive precision or exact arithmetic methods.
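In Python, for example:

```python
import math

x = 0.6 / 0.2                      # 2.9999999999999996, not 3.0
exact_test = (x - 3 == 0)          # fails on most machines
fuzzy_test = abs(x - 3) < 1e-13    # epsilon tailored to the application
library_test = math.isclose(x, 3)  # relative tolerance, default 1e-09
print(exact_test, fuzzy_test, library_test)
```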
Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples are matrix inversion, eigenvector computation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such as iterative refinement, if they are to work well.
Summation of a vector of floating-point values is a basic algorithm in scientific computing, and so an awareness of when loss of significance can occur is essential. For example, if one is adding a very large number of numbers, the individual addends are very small compared with the sum. This can lead to loss of significance. A typical addition would then be something like
The low 3 digits of the addends are effectively lost. Suppose, for example, that one needs to add many numbers, all approximately equal to 3. After 1000 of them have been added, the running sum is about 3000; the lost digits are not regained. The Kahan summation algorithm may be used to reduce the errors.
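The Kahan summation algorithm can be sketched in a few lines; this is a minimal illustration, not a tuned library routine:

```python
def kahan_sum(values):
    """Compensated summation: c carries the low-order bits lost at each step."""
    s = 0.0
    c = 0.0                 # running compensation for lost low-order digits
    for v in values:
        y = v - c           # restore the part lost in the previous addition
        t = s + y           # big + small: low-order digits of y may be lost here
        c = (t - s) - y     # algebraically zero; recovers exactly what was lost
        s = t
    return s

# Tiny addends that vanish one-by-one in a naive left-to-right sum:
data = [1.0] + [1e-16] * 10
naive = sum(data)            # stays at 1.0: each 1e-16 is rounded away
compensated = kahan_sum(data)
print(naive, compensated)
```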
Round-off error can affect the convergence and accuracy of iterative numerical procedures. As an example, Archimedes approximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons, and successively doubling the number of sides. As noted above, computations may be rearranged in a way that is mathematically equivalent but less prone to error . Two forms of the recurrence formula for the circumscribed polygon are:
Here is a computation using IEEE "double" arithmetic:
While the two forms of the recurrence formula are clearly mathematically equivalent, the first subtracts 1 from a number extremely close to 1, leading to an increasingly problematic loss of significant digits. As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision.
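The two recurrence formulas themselves are elided above; assuming the standard half-angle identity tan(θ/2) = (√(t² + 1) − 1)/t = t/(√(t² + 1) + 1) for t = tan θ, the contrast can be sketched as follows:

```python
import math

def step_unstable(t):
    # Subtracts 1 from a number extremely close to 1: catastrophic cancellation.
    return (math.sqrt(t * t + 1) - 1) / t

def step_stable(t):
    # Algebraically equivalent form with no cancellation.
    return t / (math.sqrt(t * t + 1) + 1)

t1 = t2 = 1 / math.sqrt(3)   # tan(30°): half-side of the circumscribed hexagon
sides = 6
for _ in range(20):          # double the number of sides 20 times
    t1, t2, sides = step_unstable(t1), step_stable(t2), sides * 2

pi_unstable, pi_stable = sides * t1, sides * t2
print(pi_unstable, pi_stable)
```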
The aforementioned lack of associativity of floating-point operations in general means that compilers cannot as effectively reorder arithmetic expressions as they could with integer and fixed-point arithmetic, presenting a roadblock in optimizations such as common subexpression elimination and auto-vectorization. The "fast math" option on many compilers turns on reassociation along with unsafe assumptions such as a lack of NaN and infinite numbers in IEEE 754. Some compilers also offer more granular options to only turn on reassociation. In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math.
In some compilers, turning on "fast" math may cause the program to disable subnormal floats at startup, affecting the floating-point behavior of not only the generated code, but also any program using such code as a library.
In most Fortran compilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting. This setting stops the compiler from reassociating beyond the boundaries of parentheses. Intel Fortran Compiler is a notable outlier.
A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen in Icing, a verified compiler.
The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes qualified as rational integers to distinguish them from the more general algebraic integers. In fact, integers are algebraic integers that are also rational numbers.
The word integer comes from the Latin integer meaning "whole" or "untouched", from in ("not") plus tangere ("to touch"). "Entire" derives from the same origin via the French word entier, which means both entire and integer. Historically the term was used for a number that was a multiple of 1, or to the whole part of a mixed number. Only positive integers were considered, making the term synonymous with the natural numbers. The definition of integer expanded over time to include negative numbers as their usefulness was recognized. For example, Leonhard Euler in his 1765 Elements of Algebra defined integers to include both positive and negative numbers. However, European mathematicians, for the most part, resisted the concept of negative numbers until the middle of the 19th century.
The use of the letter Z to denote the set of integers comes from the German word Zahlen and has been attributed to David Hilbert. The earliest known use of the notation in a textbook occurs in Algèbre, written by the collective Nicolas Bourbaki, dating to 1947. The notation was not adopted immediately; for example, another textbook used the letter J, and a 1960 paper used Z to denote the non-negative integers. But by 1961, Z was generally used by modern algebra texts to denote the positive and negative integers.
The whole numbers were synonymous with the integers up until the early 1950s. In the late 1950s, as part of the New Math movement, American elementary school teachers began teaching that "whole numbers" referred to the natural numbers, excluding negative numbers, while "integer" included the negative numbers. "Whole number" remains ambiguous to the present day.
The traditional arithmetic operations can then be defined on the integers in a piecewise fashion, for each of positive numbers, negative numbers, and zero. For example, negation is defined as follows:
The traditional style of definition leads to many different cases and makes it tedious to prove that integers obey the various laws of arithmetic.
In modern set-theoretic mathematics, a more abstract construction allowing one to define arithmetical operations without any case distinction is often used instead. The integers can thus be formally constructed as the equivalence classes of ordered pairs of natural numbers (a, b).
The intuition is that (a, b) stands for the result of subtracting b from a. To confirm our expectation that 1 − 2 and 4 − 5 denote the same number, we define an equivalence relation ~ on these pairs with the following rule:

(a, b) ~ (c, d)

precisely when

a + d = b + c.
Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers; by using [(a, b)] to denote the equivalence class having (a, b) as a member, one has:

[(a, b)] + [(c, d)] := [(a + c, b + d)]
[(a, b)] × [(c, d)] := [(ac + bd, ad + bc)]
The negation of an integer is obtained by reversing the order of the pair:

−[(a, b)] := [(b, a)]
Hence subtraction can be defined as the addition of the additive inverse:

[(a, b)] − [(c, d)] := [(a + d, b + c)]
The standard ordering on the integers is given by:

[(a, b)] < [(c, d)] if and only if a + d < b + c
It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes.
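As an illustration (the helper names are hypothetical, chosen for this sketch), the construction can be carried out directly in Python by normalizing each class to its canonical representative:

```python
def normalize(a, b):
    """Canonical representative of the class of (a, b): either (n, 0) or (0, n)."""
    return (a - b, 0) if a >= b else (0, b - a)

def add(p, q):
    (a, b), (c, d) = p, q
    return normalize(a + c, b + d)

def mul(p, q):
    (a, b), (c, d) = p, q
    return normalize(a * c + b * d, a * d + b * c)

def neg(p):
    a, b = p
    return normalize(b, a)

# 1 − 2 and 4 − 5 denote the same integer: both normalize to (0, 1), i.e. −1.
print(normalize(1, 2), normalize(4, 5))
```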
Every equivalence class has a unique member that is of the form (n, 0) or (0, n) (or both at once). The natural number n is identified with the class [(n, 0)], and the class [(0, n)] is denoted −n (this covers all remaining classes, and gives the class [(0, 0)] a second time since −0 = 0).
Thus, [(a, b)] is denoted by a − b if a ≥ b, and by −(b − a) otherwise.
If the natural numbers are identified with the corresponding integers, this convention creates no ambiguity.
This notation recovers the familiar representation of the integers as {..., −2, −1, 0, 1, 2, ...}.
Some examples are:
In theoretical computer science, other approaches for the construction of integers are used by automated theorem provers and term rewrite engines. Integers are represented as algebraic terms built using a few basic operations and, possibly, using natural numbers, which are assumed to be already constructed.
There exist at least ten such constructions of signed integers. These constructions differ in several ways: the number of basic operations used for the construction; the number and the types of arguments accepted by these operations; the presence or absence of natural numbers as arguments of some of these operations; and whether these operations are free constructors or not, i.e., whether the same integer can be represented using only one or many algebraic terms.
An integer is often a primitive data type in computer languages. However, integer data types can only represent a subset of all integers, since practical computers are of finite capacity. Also, in the common two's complement representation, the inherent definition of sign distinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". Fixed-length integer approximation data types are denoted int or Integer in several programming languages.
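Python's built-in integers are arbitrary-precision, so a fixed-width two's complement type has to be modeled explicitly; a minimal sketch of 32-bit wrap-around behavior (the helper name is illustrative):

```python
def to_int32(n):
    """Reduce an arbitrary integer to 32-bit two's complement."""
    n &= 0xFFFFFFFF                       # keep the low 32 bits
    return n - 2**32 if n >= 2**31 else n # reinterpret the top bit as the sign

# One past the maximum representable value wraps around to the minimum:
print(to_int32(2**31 - 1), to_int32(2**31))
```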
Variable-length representations of integers, such as bignums, can store any integer that fits in the computer's memory. Other integer data types are implemented with a fixed size, usually a number of bits which is a power of 2 or a memorable number of decimal digits.
The set of integers is countably infinite, meaning it is possible to pair each integer with a unique natural number. An example of such a pairing is 0 ↔ 0, 1 ↔ −1, 2 ↔ 1, 3 ↔ −2, 4 ↔ 2, and so on.
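One concrete pairing is the usual zigzag enumeration (the function names here are illustrative):

```python
def nat_to_int(n):
    """Zigzag: 0, 1, 2, 3, 4, ... maps to 0, -1, 1, -2, 2, ..."""
    return -((n + 1) // 2) if n % 2 else n // 2

def int_to_nat(z):
    """Inverse pairing: nonnegative z to even naturals, negative z to odd."""
    return 2 * z if z >= 0 else -2 * z - 1

print([nat_to_int(n) for n in range(7)])  # [0, -1, 1, -2, 2, -3, 3]
```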
This article incorporates material from Integer on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Square roots of negative numbers can be discussed within the framework of complex numbers. More generally, square roots can be considered in any context in which a notion of the "square" of a mathematical object is defined. These include function spaces and square matrices, among other mathematical structures.
The Rhind Mathematical Papyrus is a copy from 1650 BC of an earlier Berlin Papyrus and other texts – possibly the Kahun Papyrus – that shows how the Egyptians extracted square roots by an inverse proportion method.
In the Chinese mathematical work Writings on Reckoning, written between 202 BC and 186 BC during the early Han Dynasty, the square root is approximated by using an "excess and deficiency" method, which says to "...combine the excess and deficiency as the divisor; the deficiency numerator multiplied by the excess denominator and the excess numerator times the deficiency denominator, combine them as the dividend."
A symbol for square roots, written as an elaborate R, was invented by Regiomontanus. An R was also used for radix to indicate square roots in Gerolamo Cardano's Ars Magna.
According to historian of mathematics D.E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo, in 1546.
According to Jeffrey A. Oaks, Arabs used the letter jīm/ĝīm, the first letter of the word "جذر" ("root"), placed in its initial form over a number to indicate its square root. The letter jīm resembles the present square root shape. Its usage goes as far as the end of the twelfth century in the works of the Moroccan mathematician Ibn al-Yasamin.
The symbol "√" for the square root was first used in print in 1525, in Christoph Rudolff's Coss.
The square root of x is rational if and only if x is a rational number that can be represented as a ratio of two perfect squares. The square root function maps rational numbers into algebraic numbers (the latter being a superset of the rational numbers).
For all real numbers x, √(x²) = |x|.
For all nonnegative real numbers x and y, √(xy) = √x √y.
The square root function is continuous for all nonnegative x, and differentiable for all positive x. If f denotes the square root function f(x) = √x, its derivative is given by f′(x) = 1/(2√x).
The square root of a nonnegative number is used in the definition of the Euclidean norm, as well as in generalizations such as Hilbert spaces. It defines an important concept of standard deviation used in probability theory and statistics. It has a major use in the formula for roots of a quadratic equation; quadratic fields and rings of quadratic integers, which are based on square roots, are important in algebra and have uses in geometry. Square roots frequently appear in mathematical formulas elsewhere, as well as in many physical laws.
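For instance, the Euclidean norm of a vector is the square root of the sum of the squares of its components:

```python
import math

v = (3.0, 4.0)
norm = math.sqrt(sum(x * x for x in v))  # √(3² + 4²) = 5
# math.hypot computes the same quantity while guarding against overflow.
print(norm, math.hypot(*v))
```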
A positive number has two square roots, one positive, and one negative, which are opposite to each other. When talking of the square root of a positive integer, it is usually the positive square root that is meant.
The square roots of an integer are algebraic integers—more specifically quadratic integers.
The square roots of the perfect squares are integers. In all other cases, the square roots of positive integers are irrational numbers, and hence have non-repeating decimals in their decimal representations. Decimal approximations of the square roots of the first few natural numbers are given in the following table.
As before, the square roots of the perfect squares are integers. In all other cases, the square roots of positive integers are irrational numbers, and therefore have non-repeating digits in any standard positional notation system.
The square roots of small integers are used in both the SHA-1 and SHA-2 hash function designs to provide nothing-up-my-sleeve numbers.
One of the most intriguing results from the study of irrational numbers as continued fractions was obtained by Joseph Louis Lagrange c. 1780. Lagrange found that the representation of the square root of any non-square positive integer as a continued fraction is periodic. That is, a certain pattern of partial denominators repeats indefinitely in the continued fraction. In a sense these square roots are the very simplest irrational numbers, because they can be represented with a simple repeating pattern of integers.
The square bracket notation used above is a short form for a continued fraction. Written in the more suggestive algebraic form, the simple continued fraction for the square root of 11, [3; 3, 6, 3, 6, ...], looks like this:
where the two-digit pattern {3, 6} repeats over and over again in the partial denominators. Since 11 = 3² + 2, the above is also identical to the following generalized continued fractions:
Square roots of positive numbers are not in general rational numbers, and so cannot be written as a terminating or recurring decimal expression. Therefore in general any attempt to compute a square root expressed in decimal form can only yield an approximation, though a sequence of increasingly accurate approximations can be obtained.
Most pocket calculators have a square root key. Computer spreadsheets and other software are also frequently used to calculate square roots. Pocket calculators typically implement efficient routines, such as Newton's method, to compute the square root of a positive real number. When computing square roots with logarithm tables or slide rules, one can exploit the identities
The most common iterative method of square root calculation by hand is known as the "Babylonian method" or "Heron's method" after the first-century Greek philosopher Heron of Alexandria, who first described it.
The method uses the same iterative scheme as the Newton–Raphson method yields when applied to the function y = f(x) = x² − a, using the fact that its slope at any point is dy/dx = f′(x) = 2x, but predates it by many centuries.
The algorithm is to repeat a simple calculation that results in a number closer to the actual square root each time it is repeated with its result as the new input. The motivation is that if x is an overestimate to the square root of a nonnegative real number a, then a/x will be an underestimate, and so the average of these two numbers is a better approximation than either of them. However, the inequality of arithmetic and geometric means shows this average is always an overestimate of the square root, and so it can serve as a new overestimate with which to repeat the process, which converges as a consequence of the successive overestimates and underestimates being closer to each other after each iteration. To find x:
Start with an arbitrary positive start value x. The closer to the square root of a, the fewer the iterations that will be needed to achieve the desired precision.
Replace x by the average (x + a/x) / 2 of x and a/x.
Repeat from step 2, using this average as the new value of x.
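The steps above can be sketched directly (the tolerance parameter is an illustrative choice, not part of the classical method):

```python
import math

def babylonian_sqrt(a, rel_tol=1e-12):
    """Heron's method for a > 0: average an overestimate x with the underestimate a/x."""
    x = a if a >= 1 else 1.0            # step 1: arbitrary positive start value
    while abs(x * x - a) > rel_tol * a:
        x = (x + a / x) / 2             # step 2: replace x by the average
    return x                            # step 3 is the loop above

print(babylonian_sqrt(2.0))  # ≈ 1.4142135623730951
```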
Using the identity
The time complexity for computing a square root with n digits of precision is equivalent to that of multiplying two n-digit numbers.
Another useful method for calculating the square root is the shifting nth root algorithm, applied for n = 2.
The name of the square root function varies from programming language to programming language, with sqrt being common, used in C and derived languages like C++, JavaScript, PHP, and Python.
The square of any positive or negative number is positive, and the square of 0 is 0. Therefore, no negative number can have a real square root. However, it is possible to work with a more inclusive set of numbers, called the complex numbers, that does contain solutions to the square root of a negative number. This is done by introducing a new number, denoted by i and called the imaginary unit, which is defined such that i² = −1. Using this notation, we can think of i as the square root of −1, but we also have (−i)² = i² = −1 and so −i is also a square root of −1. By convention, the principal square root of −1 is i, or more generally, if x is any nonnegative number, then the principal square root of −x is i√x.
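Python's cmath module follows this convention for the principal square root:

```python
import cmath

# The principal square root of a negative real is i times the root of its magnitude.
root = cmath.sqrt(-4)
print(root)  # 2j
```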
The right side is indeed a square root of −x, since (i√x)² = i²(√x)² = −x.
For every non-zero complex number z there exist precisely two numbers w such that w² = z: the principal square root of z, and its negative.
The above can also be expressed in terms of trigonometric functions:
When the number is expressed using its real and imaginary parts, the following formula can be used for the principal square root:
where sgn(y) = 1 if y ≥ 0 and sgn(y) = −1 otherwise. In particular, the imaginary parts of the original number and the principal value of its square root have the same sign. The real part of the principal value of the square root is always nonnegative.
For example, the principal square roots of ±i are given by:
In the following, the complex z and w may be expressed as:
Because of the discontinuous nature of the square root function in the complex plane, the following laws are not true in general.
A similar problem appears with other complex functions with branch cuts, e.g., the complex logarithm and the relations log z + log w = log(zw) or log(z*) = (log z)*, which are not true in general.
Wrongly assuming one of these laws underlies several faulty "proofs", for instance the following one showing that −1 = 1: