This article provides information on tensor mathematics, relevant to fluid dynamics and computational fluid dynamics (CFD). It describes scalars and vectors and typical algebraic vector operations. It follows with second rank tensors, their algebraic operations, symmetry, skewness and tensor invariants such as trace and determinant. It briefly discusses higher rank tensors before describing coordinate systems and change of axes. Tensor calculus is introduced, along with derivative operators such as div, grad, curl and Laplacian. The final section covers the integral theorems of Gauss and Stokes, with a physical representation of div and curl, and scalar and vector potentials.

Tensor Mathematics: Contents

1 Scalars and Vectors

A scalar is any physical property which can be represented by a single real number in some chosen unit system, e.g. pressure (kg m⁻¹ s⁻²), temperature (K) and density (kg m⁻³). Scalars are denoted by single letters in italics, e.g. p, T, ρ. The standard scalar operations must be performed using consistent units of measurement; in particular, addition, subtraction and equality are only physically meaningful for scalars of the same dimensional units.

A vector is an entity which is represented by both magnitude and direction. In its most general form an n-dimensional vector a can be denoted by n scalar components (a1, a2, ..., an) corresponding to coordinate axes x1, x2, ..., xn. For continuum mechanics, where we deal with 3-dimensional (Euclidean) space, the vector a = (a1, a2, a3) relates to a general set of axes x1, x2, x3; representing x, y, z in a rectangular Cartesian system (see Section 4.1) or r, θ, z and r, θ, ϕ in cylindrical and spherical polar coordinates. The index notation presents the same vector as ai (i = 1, 2, ..., n) in which i corresponds to each of the coordinate axes. The list of indices (i = 1, 2, ..., n) is usually omitted in mathematical text since it is implied by the form of the equation in which it is written. In this article the tensor notation will generally be used, in which a vector or tensor (see Section 2) is represented by letters in bold face, e.g. a. The benefits of this notation are that: it does not imply anything about the coordinate system; it therefore promotes the concept of a vector as an entity with direction and magnitude rather than a group of three scalars; and it is more compact.

The magnitude, or modulus of a vector a or ai is denoted by |a| and a in respective notations. Vectors of unit magnitude are referred to as unit vectors. It is assumed that the reader is familiar with the basic operations of multiplication of a vector and scalar and vector addition and subtraction, which are both commutative and associative. The next three sections describe the remaining vector and tensor operations required in continuum mechanics.

1.1 The scalar product of two vectors

The scalar product of two vectors a = (a1, a2, a3) and b = (b1, b2, b3) is defined as

a∙b =  a1b1 + a2b2 + a3b3 = aibi

The following behaviour is observed:

a∙a = a1a1 + a2a2 + a3a3 = a²
a∙b = b∙a
a∙(b + c) = a∙b + a∙c

The geometrical representation of the scalar product is a∙b = ab cos θ as depicted by the shaded area in figure 1:

Figure 1: The scalar product

The scalar product is invariant under a transformation of axes since it is defined by the magnitudes of the two vectors and the angle between them. The concept of invariance is important to continuum mechanics and can be discussed further once the ideas of change of axes have been described mathematically in Section 4.
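These properties, and the invariance of the scalar product under a rotation of axes, can be checked numerically. A minimal NumPy sketch (the vectors and the rotation angle are arbitrary choices for illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 2.0])

dot = a @ b  # a.b = a1*b1 + a2*b2 + a3*b3
assert np.isclose(dot, a[0]*b[0] + a[1]*b[1] + a[2]*b[2])
assert np.isclose(a @ b, b @ a)                  # commutativity
assert np.isclose(a @ a, np.linalg.norm(a)**2)   # a.a = a^2

# Geometrical form: a.b = |a||b| cos(theta)
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))

# Invariance: rotating both vectors by the same rotation L (here about x3)
# leaves the scalar product unchanged
th = 0.3
L = np.array([[np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
assert np.isclose((L @ a) @ (L @ b), dot)
```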

1.2 The vector product of two vectors

The vector product of a vector a = (a1,a2,a3) with b =  (b1,b2,b3) is defined as

a × b = (a2b3 − a3b2,a3b1 − a1b3,a1b2 − a2b1) = eijkajbk

where the permutation symbol

eijk =  0   when any two indices are equal
eijk = +1   when i, j, k are an even permutation of 1, 2, 3
eijk = −1   when i, j, k are an odd permutation of 1, 2, 3

where the even permutations are 123, 231 and 312 and the odd permutations are 132, 213 and 321. The following behaviour is observed:

a × a =  0
a × b =  − b × a
a × (b + c) = a × b +  a × c

The geometrical representation of the vector product can be illustrated by defining a and b to lie in the x1x2 plane of a rectangular coordinate system Ox1x2x3, a =  (a,0,0) and b = (b cosθ,b sin θ,0). The vector product is then a × b =  (0,0,absinθ ) which can be seen in figure 2 to follow the direction of the z axis. Therefore, the vector product represents a normal vector of magnitude equal to the area of a parallelogram described by vectors a and b. The direction of the normal vector follows the convention of a set of right handed axes as defined in Section 4.1.

Figure 2: The vector product
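The geometrical construction above can be reproduced numerically. A short NumPy sketch (the magnitudes a = 2, b = 3 and the angle θ = π/3 are arbitrary choices):

```python
import numpy as np

theta = np.pi / 3
a = np.array([2.0, 0.0, 0.0])                            # a along x1
b = 3.0 * np.array([np.cos(theta), np.sin(theta), 0.0])  # b in the x1-x2 plane

c = np.cross(a, b)

# c = (0, 0, a b sin(theta)): normal to the plane, with magnitude equal to
# the area of the parallelogram described by a and b
area = np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta)
assert np.allclose(c, [0.0, 0.0, area])
assert np.allclose(np.cross(b, a), -c)    # a x b = -(b x a)
assert np.allclose(np.cross(a, a), 0.0)   # a x a = 0
```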

The Kronecker delta δpq is another useful symbol to shorten equations in index notation. It is defined by

δpq = 1 if p = q
δpq = 0 if p ≠ q

It is sometimes useful to know the e − δ identity to help to manipulate vector equations:

eijkeirs = δjrδks − δjsδkr
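The permutation symbol, the Kronecker delta and the e−δ identity can all be verified by brute force. A sketch using NumPy's einsum:

```python
import numpy as np

# Permutation (Levi-Civita) symbol e_ijk as a 3x3x3 array
e = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k] = 1.0   # even permutations of 1, 2, 3
    e[k, j, i] = -1.0  # odd permutations of 1, 2, 3
d = np.eye(3)  # Kronecker delta

# e_ijk e_irs = d_jr d_ks - d_js d_kr
lhs = np.einsum('ijk,irs->jkrs', e, e)
rhs = np.einsum('jr,ks->jkrs', d, d) - np.einsum('js,kr->jkrs', d, d)
assert np.allclose(lhs, rhs)

# The vector product in index notation: (a x b)_i = e_ijk a_j b_k
a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
assert np.allclose(np.einsum('ijk,j,k->i', e, a, b), np.cross(a, b))
```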

2 Second rank tensors

A second rank tensor is defined here as a linear vector function, i.e. it is a function which associates an argument vector to another vector. A vector is itself a first rank tensor and a scalar is a tensor of rank zero. Higher rank tensors are discussed briefly later but for the most part we deal with second rank tensors, which are often simply referred to as tensors.

The tensor acts as a linear vector function as follows:

ui = Tijvj

An n-dimensional second rank tensor, T or Tij, has n² components which can be expressed in an array corresponding to axes x1, x2, ..., xn as:

T = Tij = ( T11  T12  ...  T1n )
          ( T21  T22  ...  T2n )
          ( ...  ...   ...  ... )
          ( Tn1  Tn2  ...  Tnn )

The components for which i = j are often referred to as the diagonal components, and those for i ≠ j can be referred to as the off-diagonal components. The array notation should be used sparingly since it can make the algebra unwieldy and becomes almost unmanageable for tensors of rank higher than two. For the remainder of this chapter, the 3-dimensional tensor with 9 components will be used to present tensor algebra in array notation:

T = Tij = ( T11  T12  T13 )
          ( T21  T22  T23 )
          ( T31  T32  T33 )

2.1 The single dot product

Equation (12) can be written in tensor notation as a single dot product operation pairing one geometric vector to another (expanding the vector in a column for convenience)

u = T∙v = Tijvj = ( T11v1 + T12v2 + T13v3 )
                  ( T21v1 + T22v2 + T23v3 )
                  ( T31v1 + T32v2 + T33v3 )

If we now define the tensor S to be the transpose of T, S = Tᵀ, with Sij = Tji, then:

u = v∙S = vjSji
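As a concrete check, the single dot product is an ordinary matrix-vector product in array notation. A NumPy sketch with an arbitrary tensor and vector:

```python
import numpy as np

T = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
v = np.array([1.0, 0.0, -1.0])

# u_i = T_ij v_j: the tensor acting as a linear vector function
u = T @ v

# With S equal to the transpose of T (S_ij = T_ji), the same vector
# is obtained as u = v . S
S = T.T
assert np.allclose(v @ S, u)

# The identity tensor leaves any vector unchanged: I . v = v . I = v
I = np.eye(3)
assert np.allclose(I @ v, v) and np.allclose(v @ I, v)
```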

The identity tensor I is defined by the requirement that

I∙v = v∙I = v

and therefore:

I = δij = ( 1  0  0 )
          ( 0  1  0 )
          ( 0  0  1 )

2.2 Symmetric and skew (antisymmetric) tensors

A tensor is said to be symmetric if its components are symmetric, i.e. T = Tᵀ. A skew or antisymmetric tensor has T = −Tᵀ, which implies that the diagonal components vanish, T11 = T22 = T33 = 0. Every second rank tensor can be decomposed into symmetric and skew parts by

T = ½(T + Tᵀ) + ½(T − Tᵀ) = symm T + skew T

A symmetric or skew tensor remains symmetric or skew under a transformation of axes, i.e. symmetry and skew-symmetry are intrinsic properties of a tensor, being independent of the coordinate system in which they are represented.
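The decomposition into symmetric and skew parts is easy to demonstrate. A NumPy sketch with an arbitrary tensor:

```python
import numpy as np

T = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

symm_T = 0.5 * (T + T.T)  # symmetric part
skew_T = 0.5 * (T - T.T)  # skew (antisymmetric) part

assert np.allclose(symm_T, symm_T.T)      # symmetric: equals its transpose
assert np.allclose(skew_T, -skew_T.T)     # skew: equals minus its transpose
assert np.allclose(np.diag(skew_T), 0.0)  # skew diagonal vanishes
assert np.allclose(symm_T + skew_T, T)    # the parts recover T
```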

2.3 The scalar product of two tensors

The scalar product of two tensors is denoted by T ∙∙ S, which can be evaluated as the sum of the nine products of the tensor components

T ∙∙ S = TijSij = T11S11 + T12S12 + T13S13 + T21S21 + T22S22 + T23S23 + T31S31 + T32S32 + T33S33

The ‘∙∙’ notation is used by some authors to define another scalar product, which instead sums the products TijSji:

TijSji = T11S11 + T12S21 + T13S31 + T21S12 + T22S22 + T23S32 + T31S13 + T32S23 + T33S33

Of course, there is no need to distinguish between the two definitions of scalar product if either of the tensors is symmetrical.
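The two conventions, TijSij and TijSji, and their agreement when either tensor is symmetric, can be checked with einsum. A sketch with random tensors:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))

dd_ij = np.einsum('ij,ij->', T, S)  # T_ij S_ij
dd_ji = np.einsum('ij,ji->', T, S)  # T_ij S_ji (the other convention)

# The two definitions agree whenever either tensor is symmetric
Ssym = 0.5 * (S + S.T)
assert np.isclose(np.einsum('ij,ij->', T, Ssym),
                  np.einsum('ij,ji->', T, Ssym))
```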

2.4 The tensor product of two vectors

The tensor product of two vectors, denoted by ab (sometimes denoted a ⊗ b), is defined by the requirement that (ab)∙v = a(b∙v) for all v and produces a tensor whose components are evaluated as:

ab = aibj = ( a1b1  a1b2  a1b3 )
            ( a2b1  a2b2  a2b3 )
            ( a3b1  a3b2  a3b3 )
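The defining property (ab)∙v = a(b∙v) can be checked directly. A NumPy sketch with arbitrary vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
v = np.array([-1.0, 0.0, 2.0])

ab = np.outer(a, b)  # (ab)_ij = a_i b_j

# Defining property: (ab) . v = a (b . v) for any v
assert np.allclose(ab @ v, a * (b @ v))
```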

2.5 The tensor product of two tensors

The tensor product of two tensors combines two operations T and S so that S is performed first, i.e. (T∙S)∙v = T∙(S∙v) for all v. It is denoted by (T∙S) and produces a tensor whose components are evaluated as:

Pij = TikSkj

The product is only commutative if both tensors are symmetric, since

T∙S = [Sᵀ∙Tᵀ]ᵀ
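Both the "S acts first" property and the transpose relation above can be verified numerically. A sketch with random tensors:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))
v = rng.standard_normal(3)

P = T @ S  # P_ij = T_ik S_kj

# S acts first: (T . S) . v = T . (S . v)
assert np.allclose(P @ v, T @ (S @ v))

# T . S = [S^T . T^T]^T, so the product commutes only when both
# tensors are symmetric
assert np.allclose(P, (S.T @ T.T).T)
```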

2.6 The trace of a tensor

The trace of a tensor is a scalar invariant function of the tensor, denoted by

trT  = Tijδij = Tkk = T11 + T22 + T33

2.7 The determinant of a tensor

The determinant of a tensor is also a scalar invariant function of the tensor denoted by

det T = (1/6) eijk epqr Tip Tjq Tkr
= T11(T22T33 − T23T32) − T12(T21T33 − T23T31) + T13(T21T32 − T22T31)
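The permutation-symbol formula for the determinant, and the invariance of the trace and determinant under a rotation of axes, can be checked numerically. A sketch with an arbitrary tensor and a rotation about x3:

```python
import numpy as np

# Permutation (Levi-Civita) symbol
e = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k], e[k, j, i] = 1.0, -1.0

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# det T = (1/6) e_ijk e_pqr T_ip T_jq T_kr
det_T = np.einsum('ijk,pqr,ip,jq,kr->', e, e, T, T, T) / 6.0
assert np.isclose(det_T, np.linalg.det(T))

# Scalar invariants: trace and determinant survive a rotation of axes
th = 0.4
L = np.array([[np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
Tp = L @ T @ L.T  # components of T in the rotated axes
assert np.isclose(np.trace(Tp), np.trace(T))
assert np.isclose(np.linalg.det(Tp), det_T)
```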

3 Higher rank tensors

In Section 2.4 an operation was defined for the product of two vectors which produced a second rank tensor. Tensors of higher rank than two can be formed by the product of more than two vectors, e.g. a third rank tensor abc, a fourth rank tensor abcd. If one of the tensor products is replaced by a scalar (∙) product of two vectors, the resulting tensor is two ranks less than the original. For example, (a∙b)cd is a second rank tensor since the product in brackets is a scalar quantity. Similarly if a scalar (∙∙) product of two tensors is substituted, as in ab ∙∙ cd, the resulting tensor is four ranks less than the original. The process of reducing the rank of a tensor by a scalar product is known as contraction. The dot notation indicates the level of contraction and can be extended to tensors of any rank. In continuum mechanics tensors of rank greater than two are rare. The most common tensor operations to be found in continuum mechanics other than those in Sections 1 and 2 are:

a tensor product of a vector a and second rank tensor T to produce a third rank tensor P = aT whose components are

Pijk = aiTjk

a scalar product of a vector a and third rank tensor P to produce a second rank tensor T  = a ∙P whose components are

Tjk = aiPijk

a scalar (∙∙) product of a fourth rank tensor C and a second rank tensor S to produce a second rank tensor T = C ∙∙ S whose components are

Tij = CijklSkl
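These contractions map directly onto einsum expressions. A sketch with random tensors (shapes chosen for 3-dimensional space):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal(3)
T = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3, 3, 3))
S = rng.standard_normal((3, 3))

# Tensor product of a vector and a tensor: P_ijk = a_i T_jk (rank 3)
P = np.einsum('i,jk->ijk', a, T)

# Contracting back down: (a . P)_jk = a_i P_ijk (rank 2); here
# a_i (a_i T_jk) = (a . a) T_jk
assert np.allclose(np.einsum('i,ijk->jk', a, P), (a @ a) * T)

# Double contraction of rank 4 with rank 2: T_ij = C_ijkl S_kl
T2 = np.einsum('ijkl,kl->ij', C, S)
assert T2.shape == (3, 3)
```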

4 Coordinate system and change of axes

The base of reference for the physical quantities in continuum mechanics is the coordinate system in which we are working. The components of a tensor can change if the coordinate system undergoes a transformation. We must first investigate the properties of a set of axes in order to formulate rules for coordinate transformation.

4.1 Cartesian coordinates

We will confine our coordinate description to a set of right-handed rectangular Cartesian axes as shown in figure 3. This system of axes is constructed by defining an origin O from which three lines are drawn at right angles to each other, termed the Ox1, Ox2 and Ox3 axes. This notation is preferred to the well-known Ox, Oy, Oz as it relates better to the transformation equations. A right-handed set of axes Ox1x2x3 is defined such that to an observer looking down the x3 axis, the arc from a point on the Ox1 axis to a point on the Ox2 axis is in a clockwise sense.

Figure 3: Coordinate system and directional cosines

We can define a position vector p from the origin of a set of rectangular coordinates which makes the angles α, β, γ with the x1, x2 and x3 axes respectively. The directional cosines are then defined as cos α, cos β, cos γ. The respective directional cosines can be expressed in index notation as li = pi/p or simply:

l = p/|p|

4.2 Rotation of axes

Consider two right-handed sets of axes with the same origin, labelled Ox1x2x3 and Ox′1x′2x′3, as shown in figure 4.

Figure 4: Rotation of axes

The sets of axes can be brought into coincidence by a rotation of axes. The directional cosines of Ox′1x′2x′3 relative to Ox1x2x3 can be used to express Ox′1x′2x′3 in terms of Ox1x2x3: the directional cosines of Ox′1 relative to Ox1x2x3 are expressed as L11, L12 and L13 respectively; those of Ox′2 and Ox′3 are L21, L22, L23 and L31, L32, L33. The transformation can be summarised as:

      |  x1   x2   x3
 x′1  |  L11  L12  L13
 x′2  |  L21  L22  L23
 x′3  |  L31  L32  L33

The matrix transformation can be expressed in a more compact form by defining the group of directional cosines as a tensor L = Lij. A position vector x in the Ox1x2x3 axes can then be represented in the Ox′1x′2x′3 axes as:

x′ = L∙x

Components of the transformation tensor L must satisfy certain conditions since they are defined by two right-handed sets of axes. Since the axes are mutually perpendicular:

L11L21 + L12L22 + L13L23 = 0
L21L31 + L22L32 + L23L33 = 0
L31L11 + L32L12 + L33L13 = 0

and since the sums of squares of directional cosines are unity:

L11² + L12² + L13² = 1
L21² + L22² + L23² = 1
L31² + L32² + L33² = 1

Equations (33) and (34) describe the orthonormality conditions, which can be expressed in a more compact form:

L∙Lᵀ = I

The transformation matrix must satisfy one further requirement which ensures that both the sets of axes are right-handed. It is:

det L = 1
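A rotation about the x3 axis illustrates both conditions. A NumPy sketch (the angle is an arbitrary choice):

```python
import numpy as np

# Rotation by angle th about the x3 axis: the rows are the directional
# cosines of the primed axes relative to Ox1x2x3
th = 0.7
L = np.array([[np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])

# Orthonormality: L . L^T = I
assert np.allclose(L @ L.T, np.eye(3))
# Right-handedness: det L = 1
assert np.isclose(np.linalg.det(L), 1.0)

# Transform a position vector: x' = L . x; the length is invariant
x = np.array([1.0, 2.0, 3.0])
xp = L @ x
assert np.isclose(np.linalg.norm(xp), np.linalg.norm(x))
```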

5 Tensor calculus

This chapter has so far dealt with the algebra of tensors at a point. Tensors (of any rank) in continuum mechanics vary with space and time and are therefore tensor fields. Consequently we have to deal with derivatives of tensors in both space and time. The subject of time derivatives warrants a longer discussion in the context of kinematics, but here we simply introduce the concept of a total time derivative of a tensor field denoted by D ∕Dt. If we take a position vector p (t) of a particle of material at time t the velocity V is given by:

Dp/Dt = lim(Δt→0) Δp/Δt

The time derivatives of other tensors are defined in the same way. The familiar rules of a derivative of a product hold equally for two or more tensors as for scalar quantities. However, it is important to stress that since some operations, such as vector product (×) are non-commutative, it is important to preserve the order of operations, e.g.:

D(a × b)/Dt = (Da/Dt) × b + a × (Db/Dt)

5.1 Gradient

If a scalar field F is defined and continuously differentiable then the gradient of F, ∇F is a vector

∇F = ∂iF = (∂F/∂x1, ∂F/∂x2, ∂F/∂x3)

Here we introduce the nabla vector operator ∇, represented in index notation as ∂i:

∇ ≡ ∂i ≡ ∂/∂xi ≡ (∂/∂x1, ∂/∂x2, ∂/∂x3)

The nabla operator operates on the quantity to the right of it and as before the rules of a derivative of a product still hold. Otherwise the nabla operator behaves like any other vector in an algebraic operation. When working in index notation, the use of ∂i has advantages over other notations since it represents the nabla operator as any other vector.

The derivative of F in the direction of the unit vector n is given by:

∂F/∂n = n∙∇F = |∇F| cos θ

where θ is the angle between ∇F and n. Assuming ∇F ≠ 0, ∂F/∂n is a maximum when θ = 0. Therefore the vector ∇F follows the direction in which F increases most rapidly, with magnitude |∇F|.

The gradient can operate on any rank tensor to produce a tensor one rank higher. For example, the gradient of a vector a is a tensor

∇a = ∂iaj = ( ∂a1/∂x1  ∂a2/∂x1  ∂a3/∂x1 )
            ( ∂a1/∂x2  ∂a2/∂x2  ∂a3/∂x2 )
            ( ∂a1/∂x3  ∂a2/∂x3  ∂a3/∂x3 )

By the same definition as Equation (41):

∂a/∂n = n∙∇a

The gradient of a vector represents the maximum rate of change of the individual components of the vector.
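The gradient can be approximated on a grid by central differences. A sketch using np.gradient for the hypothetical field F = x1² + x2x3 (an arbitrary choice), whose exact gradient is (2x1, x3, x2):

```python
import numpy as np

# Sample F = x1^2 + x2*x3 on a uniform grid
n = 41
xs = np.linspace(-1.0, 1.0, n)
h = xs[1] - xs[0]
X1, X2, X3 = np.meshgrid(xs, xs, xs, indexing='ij')
F = X1**2 + X2 * X3

# Central-difference approximation to (dF/dx1, dF/dx2, dF/dx3)
dF1, dF2, dF3 = np.gradient(F, h)

# Compare with the exact gradient (2*x1, x3, x2) at interior points,
# where central differences are exact for this polynomial field
inner = (slice(1, -1),) * 3
assert np.allclose(dF1[inner], 2 * X1[inner], atol=1e-6)
assert np.allclose(dF2[inner], X3[inner], atol=1e-6)
assert np.allclose(dF3[inner], X2[inner], atol=1e-6)
```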

5.2 Divergence

If a vector field a is defined and continuously differentiable then the divergence of a, ∇ ∙a is a scalar

∇∙a = ∂iai = ∂a1/∂x1 + ∂a2/∂x2 + ∂a3/∂x3

If ∇′∙ and a′ represent the divergence operator and vector a under a rotation to new axes Ox′1x′2x′3, then by Equation (32),

∇′∙a′ = (L∙∇)∙(L∙a) = ∇∙(Lᵀ∙L)∙a = ∇∙a

since L is independent of x1,x2, x3 and by the orthonormality condition (Equation (35)). The divergence of a vector field is therefore a scalar invariant.

The divergence can operate on a tensor of rank 1 or above to produce a tensor one rank lower. For example the divergence of a second rank tensor T is a vector (expanding the vector in a column for convenience)

∇∙T = ∂iTij = ( ∂T11/∂x1 + ∂T21/∂x2 + ∂T31/∂x3 )
              ( ∂T12/∂x1 + ∂T22/∂x2 + ∂T32/∂x3 )
              ( ∂T13/∂x1 + ∂T23/∂x2 + ∂T33/∂x3 )

The physical representation of divergence is discussed in Section 6 and is central to the understanding of continuum mechanics.

5.3 Curl

If a vector field a is defined and continuously differentiable then the curl of a, ∇ × a, is a vector

∇ × a = eijk∂jak = (∂a3/∂x2 − ∂a2/∂x3, ∂a1/∂x3 − ∂a3/∂x1, ∂a2/∂x1 − ∂a1/∂x2)

Curl can operate on any tensor of rank one and higher to produce a tensor of the same rank. For example the curl of a second rank tensor T is a second rank tensor

∇  × T =  eijk∂jTkl

As with divergence, the physical representation of curl is discussed in Section 6 and has important significance in continuum mechanics.
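The component formula for curl can likewise be evaluated by differencing a sampled field. A sketch for the rigid-body-like field a = (−x2, x1, 0) (an arbitrary choice), whose curl is (0, 0, 2) everywhere:

```python
import numpy as np

n = 21
xs = np.linspace(-1.0, 1.0, n)
h = xs[1] - xs[0]
X1, X2, X3 = np.meshgrid(xs, xs, xs, indexing='ij')

# a = (-x2, x1, 0): a rotation-like field with curl (0, 0, 2)
a1, a2, a3 = -X2, X1, np.zeros_like(X1)

d = lambda f, axis: np.gradient(f, h, axis=axis)
curl1 = d(a3, 1) - d(a2, 2)   # da3/dx2 - da2/dx3
curl2 = d(a1, 2) - d(a3, 0)   # da1/dx3 - da3/dx1
curl3 = d(a2, 0) - d(a1, 1)   # da2/dx1 - da1/dx2

# Differences are exact here because the components are linear
assert np.allclose(curl1, 0.0, atol=1e-10)
assert np.allclose(curl2, 0.0, atol=1e-10)
assert np.allclose(curl3, 2.0, atol=1e-10)
```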

5.4 Laplacian

The Laplacian is a scalar operator defined by ∇² ≡ ∇∙∇. It can be deduced that the Laplacian is a scalar invariant operator since it is the scalar product of two vectors, both being the nabla operator. The Laplacian of a scalar field F is the scalar:

∇²F = ∂²F = ∂²F/∂x1² + ∂²F/∂x2² + ∂²F/∂x3²

5.5 Useful tensor identities

Several identities are listed below which can be verified under the assumption that all the relevant derivatives exist and are continuous. The identities are expressed for scalar s, vectors a and b, and (second rank) tensor T:

∇∙(∇ × a) ≡ 0
∇ × (∇s) ≡ 0
∇∙(sa) ≡ s∇∙a + a∙∇s
∇ × (sa) ≡ s∇ × a + ∇s × a
∇(a∙b) ≡ a × (∇ × b) + b × (∇ × a) + (a∙∇)b + (b∙∇)a
∇∙(a × b) ≡ b∙(∇ × a) − a∙(∇ × b)
∇ × (a × b) ≡ a(∇∙b) − b(∇∙a) + (b∙∇)a − (a∙∇)b
∇ × (∇ × a) ≡ ∇(∇∙a) − ∇²a
(∇ × a) × a ≡ a∙(∇a) − ½∇(a∙a)

6 Integral theorems

In the preceding sections we dealt with the behaviour of tensors at a point and their representation as tensor fields. However, it is also necessary to consider the behaviour of tensors over finite regions of space in order to derive many of the equations of continuum mechanics. The derivations rely on some integral theorems which are presented here without derivation in their most general forms, which are independent of the choice of coordinate system. The theorems relate line C, surface S and volume V integrals, which are merely generalisations of the definite, double and triple integrals. For example, in the definite integral

∫_a^b f(x) dx

we integrate along the x-axis between a and b and the integrand f is a function defined at each point between a and b. In a line integral we integrate over a curve in space and the integrand is defined at all points along C. In the following theorems it is assumed that the curves and surfaces are piecewise smooth, i.e. they consist of a finite number of smooth curves and surfaces respectively.

6.1 Gauss’s theorem

Gauss’s theorem relates the integral over an arbitrary volume of space to the integral over the surface bounding the volume. The generalised Gauss’s theorem takes the form

∫_S n ⋆ A dS = ∫_V ∇ ⋆ A dV

where n is the unit normal vector to the surface S, and A can represent any scalar, vector or tensor field which is defined and continuously differentiable throughout V. The star notation ⋆ is introduced to represent any product, i.e. scalar a∙b, vector a × b, tensor ab. The star ‘⋆’ can therefore be replaced by either a ‘∙’, a ‘×’ or nothing and the volume integral will contain ∇∙, ∇× or ∇ respectively.
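Gauss's theorem can be checked numerically on a simple geometry. A midpoint-rule sketch over the unit cube for the hypothetical field a = (x1x2, x2x3, 0), for which ∇∙a = x2 + x3:

```python
import numpy as np

n = 100
xs = (np.arange(n) + 0.5) / n   # midpoint-rule sample points on [0, 1]
w = 1.0 / n**2                  # area weight of each surface cell

# Surface flux: only the faces x1 = 1 (where n.a = x2) and x2 = 1
# (where n.a = x3) carry flux; n.a = 0 on all other faces of the cube.
U, V = np.meshgrid(xs, xs, indexing='ij')
flux = np.sum(U) * w + np.sum(V) * w

# Volume integral of div a = x2 + x3 by the midpoint rule
X1, X2, X3 = np.meshgrid(xs, xs, xs, indexing='ij')
vol = np.sum(X2 + X3) / n**3

# Gauss's theorem: surface flux equals the volume integral (here both are 1)
assert np.isclose(flux, vol)
```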

6.2 Stokes’s theorem

Stokes’s theorem relates the integral over a closed curve (represented by ∮ C) in space to the integral over a portion of an orientable surface in space bounded by the curve. Stokes’s theorem applied to a vector a is

∫_S n∙(∇ × a) dS = ∮_C t∙a dC

where t is the unit tangent vector along the curve.

6.3 Physical representation of divergence and curl

Take a closed surface S bounding a volume V and consider the integral I

I = ∫_V ∇∙a dV = ∫_S n∙a dS

If a is directed away from the enclosed volume, I > 0 and the field a is diverging from V. If a is directed towards it, I < 0 and the field a is converging towards V. In general, n∙a may take positive or negative values around S and the sign of I will indicate whether the field is convergent or divergent. If we collapse our volume to a single point P, the sign of ∇∙a represents whether the field in the neighbourhood of P is divergent or convergent and its magnitude represents the strength of divergence or convergence.

In the context of continuum mechanics, in a motion of an incompressible medium the net flow through a volume of material must be zero, i.e. the net flux of velocity V across the surface bounding the volume must be zero. Therefore,

∫_S n∙V dS = ∫_V ∇∙V dV = 0

and by shrinking the volume to a point, it can be concluded that the condition for incompressibility is that ∇∙V = 0 at all points.

Figure 5: Physical representation of curl

In order to understand the physical significance of ∇ × a, we take an arbitrary circular disk bounded by the closed curve C and centred at a point P. The disc is oriented with its normal axis in the direction of ∇ ×  a at P as shown in figure 5. If we consider

I = ∮_C t∙a dC = ∫_S n∙(∇ × a) dS

Since n and ∇ × a are in the same direction, n∙(∇ × a) ≥ 0. This indicates that the field in the neighbourhood of P is either rotating, when t∙a > 0, or irrotational, when t∙a = 0. The condition for an irrotational field a must therefore be ∇ × a = 0. In continuum mechanics, flows are termed irrotational when ∇ × V = 0.

6.4 Scalar and vector potentials

One of the relationships in Eqn. (50) is ∇ × (∇F  ) ≡ 0. It can be shown that if a vector field a defined in a (singly connected) region is such that ∇ × a ≡  0, then a scalar potential field F exists such that a ≡ ∇F.

We define two scalar potentials, F and F′, such that a = ∇F = ∇F′ and let U = F − F′. It can be seen that ∇U = 0, which means U must be independent of position, i.e. U = constant. This shows that the scalar potential is unique apart from an additive constant.

A generalisation of the statement given at the end of Section 6.3 is that any vector field a for which ∇ ×  a ≡ 0 is said to be irrotational. For such a field the line integral between two points P and Q

∫_P^Q t∙a dC

is independent of the path of integration, and the field is said to be conservative. The scalar potential is used in many areas of continuum mechanics. It is often adopted to reduce the complexity of a problem by reducing a vector field to a scalar field although, in doing this, we are making the assumption that the field is irrotational.

Another of the relationships in Eqn. (50) is ∇  ∙(∇  × a) ≡ 0. It can be shown that if a vector field a defined in a (singly connected) region is such that ∇  ∙a ≡ 0, then a vector potential field b exists such that a ≡  ∇ × b.

We define two vector potentials, b and b′, such that a = ∇ × b = ∇ × b′ and let c = b − b′. It can be seen that ∇ × c = 0, which means a scalar field F exists such that c = ∇F. This shows that the vector potential is unique apart from the addition of the gradient of an arbitrary scalar field. A vector field a which satisfies ∇∙a ≡ 0 is said to be solenoidal.

Helmholtz’s theorem combines vector and scalar potentials in the statement that any continuously differentiable vector field a can be decomposed into the sum of an irrotational part, the gradient of a scalar potential F, and a solenoidal part, the curl of a vector potential b,

a = ∇F  + ∇  × b

The regions of a field a where ∇ ∙a ⁄= 0 are often termed sources of a, and regions where ∇  × a ⁄= 0 are called vortices of a.
