4,001
An empty matrix is a matrix in which the number of rows or columns is zero. Empty matrices help in dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, which follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants.
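A minimal sketch of the example above in NumPy (only the array shapes matter; the 1.0 result for the 0-by-0 determinant is what recent NumPy versions return):

```python
import numpy as np

A = np.zeros((3, 0))     # a 3-by-0 empty matrix
B = np.zeros((0, 3))     # a 0-by-3 empty matrix

print((A @ B).shape)     # (3, 3): the 3-by-3 zero matrix
print((B @ A).shape)     # (0, 0): a 0-by-0 matrix

E = np.zeros((0, 0))
print(np.linalg.det(E))  # 1.0, matching the empty Leibniz product
```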
4,002
There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given set of strategies the players choose. Text mining and automated thesaurus compilation make use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.
4,003
Complex numbers can be represented by particular real 2-by-2 matrices via the correspondence a + bi ↔ [[a, −b], [b, a]],
4,004
under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions and Clifford algebras in general.
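A short sketch checking this correspondence numerically (the sample numbers are arbitrary):

```python
import numpy as np

def as_matrix(z: complex) -> np.ndarray:
    """Represent a + bi as the real 2-by-2 matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 1 + 2j, 3 - 1j
# Sums and products of complex numbers map to sums and products of matrices.
assert np.allclose(as_matrix(z + w), as_matrix(z) + as_matrix(w))
assert np.allclose(as_matrix(z * w), as_matrix(z) @ as_matrix(w))
```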
4,005
Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break. Computer graphics uses matrices to represent objects; to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation; and to apply image convolutions such as sharpening, blurring, edge detection, and more. Matrices over a polynomial ring are important in the study of control theory.
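As an illustration of the graphics use, a 2-by-2 rotation matrix applied to point coordinates; the angle and the points are made up for illustration:

```python
import numpy as np

theta = np.pi / 4                              # rotate by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

points = np.array([[1.0, 0.0],                 # each row is a point (x, y)
                   [0.0, 1.0]])
rotated = points @ R.T                         # apply the rotation to every point
print(rotated)
```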
4,006
Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.
4,007
The adjacency matrix of a finite graph is a basic notion of graph theory. It records which vertices of the graph are connected by an edge. Matrices containing just two different values are called logical matrices. The distance matrix contains information about distances of the edges. These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in which case the matrices tend to be sparse, that is, contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
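A minimal sketch of an adjacency matrix for a small undirected graph (the vertex count and edge list are invented for illustration):

```python
import numpy as np

n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # hypothetical undirected edges

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1                  # symmetric for an undirected graph

print(A)
# Powers of A count walks: (A @ A)[i, j] is the number of 2-step walks from i to j.
print(A @ A)
```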
4,008
The Hessian matrix of a twice-differentiable function f: Rⁿ → R consists of the second derivatives of f with respect to the several coordinate directions, that is, it is the matrix H(f) with entries H(f)i,j = ∂²f/(∂xi ∂xj).
4,009
Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: Rⁿ → Rᵐ. If f1, ..., fm denote the components of f, then the Jacobi matrix is the m-by-n matrix of first partial derivatives, with entries J(f)i,j = ∂fi/∂xj.
4,010
If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.
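A small numerical sketch of a Jacobi matrix computed by central finite differences (the map f, the evaluation point, and the step size are arbitrary choices made for illustration):

```python
import numpy as np

def f(v: np.ndarray) -> np.ndarray:
    """An example map f: R^2 -> R^2."""
    x, y = v
    return np.array([x**2 * y, 5.0 * x + np.sin(y)])

def jacobian(func, v, h=1e-6):
    """Approximate the Jacobi matrix of func at v with central differences."""
    m, n = len(func(v)), len(v)
    J = np.zeros((m, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = h
        J[:, j] = (func(v + step) - func(v - step)) / (2 * h)
    return J

print(jacobian(f, np.array([1.0, 2.0])))
# Analytically the Jacobi matrix is [[2xy, x^2], [5, cos y]], i.e. [[4, 1], [5, cos 2]] at (1, 2).
```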
4,011
Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has a decisive influence on the set of possible solutions of the equation in question.
4,012
The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.
4,013
Stochastic matrices are square matrices whose rows are probability vectors, that is, whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states. A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain, like absorbing states, that is, states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.
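A minimal sketch of a two-state Markov chain and its stationary distribution (the transition probabilities are invented for illustration):

```python
import numpy as np

# Row-stochastic matrix: entry [i, j] is P(next state = j | current state = i).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# A stationary distribution is a left eigenvector of P with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.isclose(eigvals, 1.0)][:, 0])
stationary /= stationary.sum()
print(stationary)        # approximately [0.833, 0.167]
print(stationary @ P)    # unchanged after one more step
```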
4,014
Statistics also makes use of matrices in many different forms. Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables. Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x1, y1), (x2, y2), ..., (xN, yN) by a linear function yi ≈ axi + b, i = 1, ..., N,
4,015
which can be formulated in terms of matrices, related to the singular value decomposition of matrices.
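A minimal least-squares sketch in NumPy (the data points are synthetic):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])       # noisy samples of roughly y = 2x + 1

# Formulate yi ~ a*xi + b as the matrix equation X @ [a, b] ~ y.
X = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)   # solved internally via the SVD
print(a, b)
```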
4,016
Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.
4,017
Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors. For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses.
4,018
The first model of quantum mechanics represented the theory's operators by infinite-dimensional matrices acting on quantum states. This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.
4,019
Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.
4,020
A general application of matrices in physics is the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms. They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.
4,021
Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix known as a ray transfer matrix (the technique is called ray transfer matrix analysis): the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.
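A minimal sketch composing a free-space translation matrix with a thin-lens refraction matrix; the distance, focal length, sample ray, and the [height, slope] ordering convention are all assumptions made for illustration:

```python
import numpy as np

def translation(d):
    """Propagate a ray a distance d along the optical axis."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Refraction by a thin lens of focal length f."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

ray = np.array([1.0, 0.0])       # [distance from the optical axis, slope]
# The rightmost matrix acts first: travel 100 units, then hit the lens.
system = thin_lens(50.0) @ translation(100.0)
print(system @ ray)
```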
4,022
Traditional mesh analysis and nodal analysis in electronics lead to a system of linear equations that can be described with a matrix.
4,023
The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component's input voltage v1 and input current i1 as its elements, and let B be a 2-dimensional vector with the component's output voltage v2 and output current i2 as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2 × 2 matrix containing one impedance element, one admittance element, and two dimensionless elements. Calculating a circuit now reduces to multiplying matrices.
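A minimal sketch of the relation B = H · A as a matrix product; the numerical entries of H and A are purely hypothetical and only illustrate the computation, not any real component:

```python
import numpy as np

A = np.array([5.0, 0.01])        # [input voltage v1 in volts, input current i1 in amperes]

# Hypothetical component matrix H with one impedance-like entry (ohms),
# one admittance-like entry (siemens), and two dimensionless entries.
H = np.array([[0.8,   100.0],
              [0.002, 0.9]])

B = H @ A                        # [output voltage v2, output current i2]
print(B)
```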
4,024
Matrices have a long history of application in solving linear equations, but they were known as arrays until the 1800s. The Chinese text The Nine Chapters on the Mathematical Art, written between the 10th and 2nd century BCE, is the first example of the use of array methods to solve simultaneous equations, including the concept of determinants. In 1545 the Italian mathematician Gerolamo Cardano introduced the method to Europe when he published Ars Magna. The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683. The Dutch mathematician Jan de Witt represented transformations using arrays in his 1659 book Elements of Curves. Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions and experimented with over 50 different systems of arrays. Cramer presented his rule in 1750.
4,025
The term "matrix" (Latin for "womb", also "source", "origin", "list", "register", derived from mater, mother) was coined by James Joseph Sylvester in 1850, who understood a matrix as an object giving rise to several determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains:
4,026
Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the coefficients being investigated as had previously been done. Instead, he defined operations such as addition, subtraction, multiplication, and division as transformations of those matrices and showed the associative and distributive properties held true. Cayley investigated and demonstrated the non-commutative property of matrix multiplication as well as the commutative property of matrix addition. Early matrix theory had limited the use of arrays almost exclusively to determinants and Arthur Cayley's abstract matrix operations were revolutionary. He was instrumental in proposing a matrix concept independent of equation systems. In 1858 Cayley published his A memoir on the theory of matrices in which he proposed and demonstrated the Cayley–Hamilton theorem.
4,027
The English mathematician Cuthbert Edmund Cullis was the first to use modern bracket notation for matrices, in 1913, and he simultaneously demonstrated the first significant use of the notation A = [ai,j] to represent a matrix, where ai,j refers to the element in the ith row and the jth column.
4,028
The modern study of determinants sprang from several sources. Number-theoretical problems led Gauss to relate coefficients of quadratic forms, that is, expressions such as x² + xy − 2y², and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the determinant of a matrix A = [ai,j] the following: replace the powers aj^k by the entries ajk in the polynomial a1 a2 ⋯ an ∏i<j (aj − ai), where ∏ denotes the product of the indicated terms.
4,029
Many theorems were first established for small matrices only; for example, the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions. Also at the end of the 19th century, the Gauss–Jordan elimination was established by Wilhelm Jordan. In the early 20th century, matrices attained a central role in linear algebra, partially due to their use in the classification of the hypercomplex number systems of the previous century.
4,030
The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns. Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.
4,031
The word has been used in unusual ways by at least two authors of historical importance.
4,032
Bertrand Russell and Alfred North Whitehead in their Principia Mathematica use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the "bottom" the function is identical to its extension:
4,033
For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, for example, y, by "considering" the function for all possible values of "individuals" ai substituted in place of variable x. And then the resulting collection of functions of the single variable y, that is, ∀ai: Φ(ai, y), can be reduced to a "matrix" of values by "considering" the function for all possible values of "individuals" bi substituted in place of variable y:
4,034
Alfred Tarski in his 1946 Introduction to Logic used the word "matrix" synonymously with the notion of truth table as used in mathematical logic.
4,035
A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier". It was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space.
4,036
Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.
4,037
The vector concept, as it is known today, is the result of a gradual development over a period of more than 200 years. About a dozen people contributed significantly to its development. In 1835, Giusto Bellavitis abstracted the basic idea when he established the concept of equipollence. Working in a Euclidean plane, he made equipollent any pair of parallel line segments of the same length and orientation. Essentially, he realized an equivalence relation on the pairs of points in the plane, and thus erected the first space of vectors in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector. Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. As complex numbers use an imaginary unit to complement the real line, Hamilton considered the vector v to be the imaginary part of a quaternion:
4,038
Several other mathematicians developed vector-like systems in the middle of the nineteenth century, including Augustin Cauchy, Hermann Grassmann, August Möbius, Comte de Saint-Venant, and Matthew O'Brien. Grassmann's 1840 work Theorie der Ebbe und Flut (Theory of the Ebb and Flow) was the first system of spatial analysis similar to today's system, and had ideas corresponding to the cross product, scalar product and vector differentiation. Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton. His 1867 Elementary Treatise on Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth.
4,039
Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901, Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures, which banished any mention of quaternions in the development of vector calculus.
4,040
In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction. It is formally defined as a directed line segment, or arrow, in a Euclidean space. In pure mathematics, a vector is defined more generally as any element of a vector space. In this context, vectors are abstract entities which may or may not be characterized by a magnitude and a direction. This generalized definition implies that the above-mentioned geometric entities are a special kind of vectors, as they are elements of a special kind of vector space called Euclidean space. This particular article is about vectors strictly defined as arrows in Euclidean space. When it becomes necessary to distinguish these special vectors from vectors as defined in pure mathematics, they are sometimes referred to as geometric, spatial, or Euclidean vectors.
4,041
A Euclidean vector is thus an equivalence class of directed segments with the same magnitude (for example, the length of the line segment (A, B)) and the same direction (for example, the direction from A to B). In physics, Euclidean vectors are used to represent physical quantities that have both magnitude and direction, but are not located at a specific place, in contrast to scalars, which have no direction. For example, velocity, forces and acceleration are represented by vectors.
4,042
Since the physicist's concept of force has a direction and a magnitude, it may be seen as a vector. As an example, consider a rightward force F of 15 newtons. If the positive axis is also directed rightward, then F is represented by the vector 15 N, and if positive points leftward, then the vector for F is −15 N. In either case, the magnitude of the vector is 15 N. Likewise, the vector representation of a displacement Δs of 4 meters would be 4 m or −4 m, depending on its direction, and its magnitude would be 4 m regardless.
4,043
Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has magnitude, has direction, and which adheres to the rules of vector addition. An example is velocity, the magnitude of which is speed. For instance, the velocity 5 meters per second upward could be represented by the vector (0, 5) in two dimensions with the positive y-axis as "up". Another quantity represented by a vector is force, since it has a magnitude and direction and follows the rules of vector addition. Vectors also describe many other physical quantities, such as displacement, linear acceleration, angular acceleration, linear momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field. Examples of quantities that have magnitude and direction, but fail to follow the rules of vector addition, are angular displacement and electric current. Consequently, these are not vectors.
4,044
In Cartesian coordinates, a free vector may be thought of in terms of a corresponding bound vector, in this sense, whose initial point has the coordinates of the origin O = (0, 0, 0). It is then determined by the coordinates of that bound vector's terminal point. Thus the free vector represented by (1, 0, 0) is a vector of unit length pointing along the direction of the positive x-axis.
4,045
This coordinate representation of free vectors allows their algebraic features to be expressed in a convenient numerical fashion. For example, the sum of the two vectors (1, 2, 3) and (−2, 0, 4) is the vector (1, 2, 3) + (−2, 0, 4) = (1 − 2, 2 + 0, 3 + 4) = (−1, 2, 7).
4,046
In the geometrical and physical settings, it is sometimes possible to associate, in a natural way, a length or magnitude and a direction to vectors. In addition, the notion of direction is strictly associated with the notion of an angle between two vectors. If the dot product of two vectors is defined—a scalar-valued product of two vectors—then it is also possible to define a length; the dot product gives a convenient algebraic characterization of both angle and length. In three dimensions, it is further possible to define the cross product, which supplies an algebraic characterization of the area and orientation in space of the parallelogram defined by two vectors. In any dimension, it is possible to define the exterior product, which supplies an algebraic characterization of the area and orientation in space of the n-dimensional parallelotope defined by n vectors.
4,047
In a pseudo-Euclidean space, a vector's squared length can be positive, negative, or zero. An important example is Minkowski space.
4,048
However, it is not always possible or desirable to define the length of a vector. This more general type of spatial vector is the subject of vector spaces and affine spaces. One physical example comes from thermodynamics, where many quantities of interest can be considered vectors in a space with no notion of length or angle.
4,049
In physics, as well as mathematics, a vector is often identified with a tuple of components, or list of numbers, that act as scalar coefficients for a set of basis vectors. When the basis is transformed, for example by rotation or stretching, then the components of any vector in terms of that basis also transform in an opposite sense. The vector itself has not changed, but the basis has, so the components of the vector must change to compensate. The vector is called covariant or contravariant, depending on how the transformation of the vector's components is related to the transformation of the basis. In general, contravariant vectors are "regular vectors" with units of distance (such as a displacement), or distance times some other unit (such as velocity or acceleration); covariant vectors, on the other hand, have units of one-over-distance, such as a gradient. If you change units from meters to millimeters, a scale factor of 1/1000, a displacement of 1 m becomes 1000 mm—a contravariant change in numerical value. In contrast, a gradient of 1 K/m becomes 0.001 K/mm—a covariant change in value. Tensors are another type of quantity that behave in this way; a vector is one type of tensor.
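A tiny numerical sketch of these opposite scaling behaviours under a change of units (the values mirror the meters-to-millimeters example above):

```python
# Change units from meters to millimeters: each basis vector shrinks by 1000,
# so the scale factor applied to the basis is 1/1000.
basis_scale = 1.0 / 1000.0

displacement_m = 1.0          # contravariant: components scale against the basis
displacement_mm = displacement_m / basis_scale
print(displacement_mm)        # 1000.0 (1 m = 1000 mm)

gradient_K_per_m = 1.0        # covariant: components scale with the basis
gradient_K_per_mm = gradient_K_per_m * basis_scale
print(gradient_K_per_mm)      # 0.001 (1 K/m = 0.001 K/mm)
```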
4,050
In pure mathematics, a vector is any element of a vector space over some field and is often represented as a coordinate vector. The vectors described in this article are a very special case of this general definition, because they are contravariant with respect to the ambient space. Contravariance captures the physical intuition behind the idea that a vector has "magnitude and direction".
4,051
Vectors are usually shown in graphs or other diagrams as arrows, as illustrated in the figure. Here, the point A is called the origin, tail, base, or initial point, and the point B is called the head, tip, endpoint, terminal point or final point. The length of the arrow is proportional to the vector's magnitude, while the direction in which the arrow points indicates the vector's direction.
4,052
On a two-dimensional diagram, a vector perpendicular to the plane of the diagram is sometimes desired. These vectors are commonly shown as small circles. A circle with a dot at its centre indicates a vector pointing out of the front of the diagram, toward the viewer. A circle with a cross inscribed in it indicates a vector pointing into and behind the diagram. These can be thought of as viewing the tip of an arrow head on and viewing the flights of an arrow from the back.
4,053
In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented as coordinate vectors in a Cartesian coordinate system. The endpoint of a vector can be identified with an ordered list of n real numbers. These numbers are the coordinates of the endpoint of the vector, with respect to a given Cartesian coordinate system, and are typically called the scalar components of the vector on the axes of the coordinate system.
4,054
As an example in two dimensions, the vector from the origin O = (0, 0) to the point A = (2, 3) is simply written as a = (2, 3).
4,055
In three-dimensional Euclidean space, vectors are identified with triples of scalar components: a = (a1, a2, a3).
4,056
This can be generalised to n-dimensional Euclidean space, where a vector is identified with an n-tuple a = (a1, a2, ..., an).
4,057
These numbers are often arranged into a column vector or row vector, particularly when dealing with matrices: a column vector lists the components a1, a2, a3 vertically in a single column, while a row vector lists them horizontally as (a1 a2 a3).
4,058
Another way to represent a vector in n dimensions is to introduce the standard basis vectors. For instance, in three dimensions, there are three of them: e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). These are unit vectors pointing along the x-, y-, and z-axis of a Cartesian coordinate system, respectively, and any vector a = (a1, a2, a3) can be written as the linear combination a = a1e1 + a2e2 + a3e3.
4,059
The notation ei is compatible with the index notation and the summation convention commonly used in higher level mathematics, physics, and engineering.
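A short sketch of coordinate vectors and the standard basis in NumPy (the sample components are arbitrary):

```python
import numpy as np

e1, e2, e3 = np.eye(3)          # standard basis vectors of R^3

a = np.array([2.0, -1.0, 4.0])  # scalar components of a vector a
# The vector is the linear combination of basis vectors weighted by its components.
assert np.allclose(a, 2.0 * e1 - 1.0 * e2 + 4.0 * e3)
```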
4,060
As explained above, a vector is often described by a set of vector components that add up to form the given vector. Typically, these components are the projections of the vector on a set of mutually perpendicular reference axes. The vector is said to be decomposed or resolved with respect to that set.
4,061
The decomposition or resolution of a vector into components is not unique, because it depends on the choice of the axes on which the vector is projected.
4,062
The choice of a basis does not affect the properties of a vector or its behaviour under transformations.
4,063
A vector can also be broken up with respect to "non-fixed" basis vectors that change their orientation as a function of time or space. For example, a vector in three-dimensional space can be decomposed with respect to two axes, respectively normal and tangent to a surface. Moreover, the radial and tangential components of a vector relate to the radius of rotation of an object. The former is parallel to the radius and the latter is orthogonal to it.
4,064
In these cases, each of the components may be in turn decomposed with respect to a fixed coordinate system or basis set.
4,065
The following section uses the Cartesian coordinate system with basis vectors e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1) and assumes that all vectors have the origin as a common base point; a vector a will then be written as a = a1e1 + a2e2 + a3e3.
4,066
Two vectors are parallel if they have the same direction but not necessarily the same magnitude, or antiparallel if they have opposite direction but not necessarily the same magnitude.
4,068
The sum a + b of two vectors a and b may be defined as a + b = (a1 + b1)e1 + (a2 + b2)e2 + (a3 + b3)e3.
4,069
The addition may be represented graphically by placing the tail of the arrow b at the head of the arrow a, and then drawing an arrow from the tail of a to the head of b. The new arrow drawn represents the vector a + b, as illustrated below:
4,070
This addition method is sometimes called the parallelogram rule because a and b form the sides of a parallelogram and a + b is one of the diagonals. If a and b are bound vectors that have the same base point, this point will also be the base point of a + b. One can check geometrically that a + b = b + a and (a + b) + c = a + (b + c).
4,071
The difference of a and b is a − b = (a1 − b1)e1 + (a2 − b2)e2 + (a3 − b3)e3.
4,072
Subtraction of two vectors can be geometrically illustrated as follows: to subtract b from a, place the tails of a and b at the same point, and then draw an arrow from the head of b to the head of a. This new arrow represents the vector (−b) + a, with (−b) being the opposite of b; see drawing. And (−b) + a = a − b.
4,073
A vector may also be multiplied, or re-scaled, by a real number r. In the context of conventional vector algebra, these real numbers are often called scalars to distinguish them from vectors. The operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is ra = (ra1)e1 + (ra2)e2 + (ra3)e3.
4,074
Intuitively, multiplying by a scalar r stretches a vector out by a factor of r. Geometrically, this can be visualized as placing r copies of the vector in a line where the endpoint of one vector is the initial point of the next vector.
4,075
If r is negative, then the vector changes direction: it flips around by an angle of 180°.
4,076
Scalar multiplication is distributive over vector addition in the following sense: r(a + b) = ra + rb for all vectors a and b and all scalars r. One can also show that a − b = a + (−1)b.
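A minimal sketch of these operations on coordinate vectors (the component values are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.0, 4.0])
r = 2.0

print(a + b)              # vector addition, component-wise
print(a - b)              # vector subtraction
print(r * a)              # scalar multiplication stretches a by a factor of r
assert np.allclose(r * (a + b), r * a + r * b)   # distributivity
assert np.allclose(a - b, a + (-1.0) * b)
```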
4,077
The length or magnitude or norm of the vector a is denoted by ‖a‖ or, less commonly, |a|, which is not to be confused with the absolute value.
4,078
The length of the vector a can be computed with the Euclidean norm, ‖a‖ = √(a1² + a2² + a3²),
4,079
which is a consequence of the Pythagorean theorem since the basis vectors e1, e2, e3 are orthogonal unit vectors.
4,080
This happens to be equal to the square root of the dot product, discussed below, of the vector with itself: ‖a‖ = √(a · a).
4,081
A unit vector is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector. This is known as normalizing a vector. A unit vector is often indicated with a hat as in â.
4,082
To normalize a vector a = (a1, a2, a3), scale the vector by the reciprocal of its length ‖a‖. That is: â = a/‖a‖ = (a1/‖a‖, a2/‖a‖, a3/‖a‖).
4,083
The dot product of two vectors a and b (also called the inner product or, since its result is a scalar, the scalar product) is denoted by a · b and is defined as a · b = ‖a‖ ‖b‖ cos θ, where θ is the measure of the angle between a and b. Geometrically, this means that a and b are drawn with a common start point, and then the length of a is multiplied with the length of the component of b that points in the same direction as a.
4,084
The dot product can also be defined as the sum of the products of the components of each vector, as a · b = a1b1 + a2b2 + a3b3.
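A small sketch of the dot product, norm, normalization, and angle (the component values are arbitrary):

```python
import numpy as np

a = np.array([3.0, 4.0, 0.0])
b = np.array([1.0, 0.0, 0.0])

dot = np.dot(a, b)                       # a1*b1 + a2*b2 + a3*b3
norm_a = np.linalg.norm(a)               # Euclidean length, here 5.0
a_hat = a / norm_a                       # normalized (unit) vector
theta = np.arccos(dot / (norm_a * np.linalg.norm(b)))   # angle between a and b
print(dot, norm_a, a_hat, np.degrees(theta))
```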
4,085
The cross product is only meaningful in three or seven dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as a × b = ‖a‖ ‖b‖ sin(θ) n,
4,086
where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b which completes a right-handed system. The right-handedness constraint is necessary because there exist two unit vectors that are perpendicular to both a and b, namely, n and −n.
4,087
The cross product a × b is defined so that a, b, and a × b also become a right-handed system. This is the right-hand rule.
4,088
The length of a × b can be interpreted as the area of the parallelogram having a and b as sides.
4,089
The cross product can be written in components as a × b = (a2b3 − a3b2)e1 + (a3b1 − a1b3)e2 + (a1b2 − a2b1)e3.
4,090
For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems), the cross product of two vectors is a pseudovector instead of a vector.
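A minimal cross-product sketch (the sample vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])

c = np.cross(a, b)                 # perpendicular to both a and b
print(c)                           # [0. 0. 2.]
print(np.dot(c, a), np.dot(c, b))  # both 0: c is orthogonal to a and b
print(np.linalg.norm(c))           # 2.0, the area of the parallelogram with sides a and b
```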
4,091
The scalar triple product is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar triple product is sometimes denoted by (a b c) and defined as: (a b c) = a · (b × c).
4,092
It has three primary uses. First, the absolute value of the box product is the volume of the parallelepiped which has edges that are defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are linearly dependent, which can be easily proved by considering that in order for the three vectors to not make a volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors a, b and c are right-handed.
4,093
In components (with respect to a right-handed orthonormal basis), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the determinant of the 3-by-3 matrix having the three vectors as rows.
4,094
The scalar triple product is linear in all three entries and anti-symmetric in the following sense: (a b c) = (c a b) = (b c a) = −(a c b) = −(b a c) = −(c b a).
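A small sketch of the scalar triple product as a determinant (the sample vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([1.0, 1.0, 2.0])

box = np.dot(a, np.cross(b, c))          # a . (b x c)
det = np.linalg.det(np.array([a, b, c])) # determinant with the vectors as rows
print(box, det)                          # equal; |value| is the parallelepiped volume
assert np.isclose(box, det)
```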
4,095
All examples thus far have dealt with vectors expressed in terms of the same basis, namely, the e basis {e1, e2, e3}. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. In the e basis, a vector a is expressed, by definition, as a = p e1 + q e2 + r e3.
4,096
The scalar components in the e basis are, by definition, p = a · e1, q = a · e2, r = a · e3.
4,097
In another orthonormal basis n = {n1, n2, n3} that is not necessarily aligned with e, the vector a is expressed as a = u n1 + v n2 + w n3,
4,098
and the scalar components in the n basis are, by definition, u = a · n1, v = a · n2, w = a · n3.
4,099
The values of p, q, r, and u, v, w relate to the unit vectors in such a way that the resulting vector sum is exactly the same physical vector a in both cases. It is common to encounter vectors known in terms of different bases. In such a case it is necessary to develop a method to convert between bases so the basic vector operations such as addition and subtraction can be performed. One way to express u, v, w in terms of p, q, r is to use column matrices along with a direction cosine matrix containing the information that relates the two bases. Such an expression can be formed by substitution of the above equations to form u = (p e1 + q e2 + r e3) · n1, v = (p e1 + q e2 + r e3) · n2, w = (p e1 + q e2 + r e3) · n3.
4,100
Distributing the dot-multiplication gives u = p(e1 · n1) + q(e2 · n1) + r(e3 · n1), v = p(e1 · n2) + q(e2 · n2) + r(e3 · n2), w = p(e1 · n3) + q(e2 · n3) + r(e3 · n3). Each dot product ei · nj is a direction cosine relating the two bases, so the conversion can be written as a single 3-by-3 direction cosine matrix multiplying the column of components (p, q, r) to give (u, v, w).
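A minimal sketch of converting components with a direction cosine matrix; here the second basis is taken to be a rotation of the standard basis about the z-axis by an arbitrary angle, and all numerical values are illustrative:

```python
import numpy as np

# Orthonormal basis e (the standard basis) and a rotated orthonormal basis n.
e = np.eye(3)
angle = np.pi / 6
n = np.array([[ np.cos(angle), np.sin(angle), 0.0],
              [-np.sin(angle), np.cos(angle), 0.0],
              [ 0.0,           0.0,           1.0]])   # rows are n1, n2, n3

pqr = np.array([1.0, 2.0, 3.0])          # components p, q, r of a in the e basis
a = pqr @ e                              # the vector itself (here equal to pqr)

C = n @ e.T                              # direction cosine matrix, C[i, j] = n_i . e_j
uvw = C @ pqr                            # components u, v, w of the same vector in the n basis
assert np.allclose(uvw, n @ a)           # u = a . n1, etc.
print(uvw)
```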