Tensors as multi-dimensional arrays

In a given basis, a pure tensor of type \({(j,k)}\) can be written using component notation in the form

\(\displaystyle v^{1}\otimes\dotsb\otimes v^{j}\otimes\varphi_{1}\otimes\dotsb\otimes\varphi_{k}\equiv T^{\mu_{1}\dots\mu_{j}}{}_{\lambda_{1}\dots\lambda_{k}}e_{\mu_{1}}\otimes\dotsb\otimes e_{\mu_{j}}\otimes\beta^{\lambda_{1}}\otimes\dotsb\otimes\beta^{\lambda_{k}}, \)

where the Einstein summation convention is used in the second expression. Note that collecting terms into the single array \({T}\) is only possible because of the defining property of the tensor product: it is linear (distributive) over addition in each factor. The tensor product symbol between basis elements is often dropped in such expressions. Note also that this means that, in terms of the tensor as a multilinear mapping, we have

\(\displaystyle T^{\mu_{1}\dots\mu_{j}}{}_{\lambda_{1}\dots\lambda_{k}}=T\left(\beta^{\mu_{1}},\dotsb,\beta^{\mu_{j}},e_{\lambda_{1}},\dotsb,e_{\lambda_{k}}\right). \)
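
As a concrete illustration of these formulas, here is a minimal numpy sketch (the particular vector and 1-form components are arbitrary choices for illustration): the components of a pure tensor are products of the components of its factors, and evaluating the tensor as a multilinear map on the dual basis recovers those components.

```python
import numpy as np

# A pure (1,1) tensor v ⊗ φ on a 3-dimensional space:
# its components are T^μ_λ = v^μ φ_λ, i.e. an outer product.
v = np.array([1.0, 2.0, 3.0])     # vector components v^μ
phi = np.array([0.5, -1.0, 2.0])  # 1-form components φ_λ
T = np.outer(v, phi)              # T^μ_λ = v^μ φ_λ

# Evaluating T as a multilinear map on a dual basis 1-form β^μ and a
# basis vector e_λ recovers the component T^μ_λ.
mu, lam = 0, 2
beta_mu = np.eye(3)[mu]  # components of β^μ in its own basis
e_lam = np.eye(3)[lam]   # components of e_λ in its own basis
assert np.isclose(beta_mu @ T @ e_lam, T[mu, lam])
```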

A general tensor is a sum of such pure tensor terms, so any tensor \({T}\) can be represented by a \({\left(j+k\right)}\)-dimensional array of scalars. For example, any tensor of order 2 can be represented by a matrix, and a type \({(1,1)}\) tensor acts as a linear mapping on vectors or 1-forms via ordinary matrix multiplication, provided all components are expressed in the same basis. Basis-independent quantities from linear algebra, such as the trace and determinant, are then well-defined on such tensors.
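
A brief numpy sketch of the last point (the components and basis-change matrix here are arbitrary random examples): a type \({(1,1)}\) tensor acts on vector components by matrix multiplication, and its trace and determinant are unchanged under a change of basis.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))  # components T^μ_ν of a (1,1) tensor
v = rng.normal(size=3)       # components v^ν of a vector
w = A @ v                    # T acting on v: w^μ = T^μ_ν v^ν

# Under a change of basis with invertible matrix M, the (1,1)
# components transform by similarity, A' = M⁻¹ A M, which leaves
# the trace and determinant invariant.
M = rng.normal(size=(3, 3))
A_prime = np.linalg.inv(M) @ A @ M
assert np.isclose(np.trace(A_prime), np.trace(A))
assert np.isclose(np.linalg.det(A_prime), np.linalg.det(A))
```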

Δ It is important to remember that a tensor \({T^{\mu\nu}}\) or \({T_{\mu\nu}}\) can be written as a matrix of scalars, but linear algebra operations are only valid for linear operators \({T^{\mu}{}_{\nu}}\). A similar source of potential confusion is that the (anti-)symmetry of \({T^{\mu\nu}}\) or \({T_{\mu\nu}}\) is basis-independent, while that of \({T^{\mu}{}_{\nu}}\) is not; see the sketch following these notes.
Δ A potentially confusing aspect of component notation is the basis vectors \({e_{\mu}}\): despite the lower index, these are not the components of a 1-form but rather whole vectors, with \({\mu}\) a label, not an index. Similarly, the basis 1-forms \({\beta^{\lambda}}\) should not be confused with the components of a vector. In particular, index lowering and raising do not apply to basis elements, e.g. \({\beta^{\mu}\overset{\mathrm{no}}{=}g^{\mu\nu}e_{\nu}}\) makes no sense, since we cannot equate a 1-form to a sum of vectors.
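
The basis dependence noted above can be checked directly; in this minimal sketch (with a random basis change for illustration), a symmetric component matrix stays symmetric when transformed as a \({(0,2)}\) tensor but generally not when transformed as a \({(1,1)}\) tensor.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))  # generic invertible change of basis
S = rng.normal(size=(3, 3))
S = S + S.T                  # symmetric component matrix

# (0,2) components transform as Mᵀ S M, which stays symmetric...
S02 = M.T @ S @ M
assert np.allclose(S02, S02.T)

# ...but (1,1) components transform as M⁻¹ S M, which does not.
S11 = np.linalg.inv(M) @ S @ M
print(np.allclose(S11, S11.T))  # False for a generic M
```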

The Latin letters of abstract index notation (e.g. \({T^{ab}{}_{cd}}\)) can thus be viewed as placeholders for what would be indices in a particular basis, while the Greek letters of component notation represent an actual array of scalars that depend on a specific basis. The reason for the different notations is to clearly distinguish tensor identities, true in any basis, from equations true only in a specific basis.

Δ It is common in general relativity and other subjects to abuse both abstract and index notation to represent objects that are non-tensorial. We will see this in the chapter on Riemannian manifolds.
Δ Note that if abstract index notation is not being used, Latin and Greek indices are often used to make other distinctions, a common one being between indices ranging over the three space coordinates and indices ranging over the four spacetime coordinates.
Δ Note that “rank” and “dimension” are overloaded terms across these constructs: “rank” is sometimes used to refer to the order of the tensor, which is also the dimensionality of the corresponding multi-dimensional array, while the dimension of a tensor is that of the underlying vector space, and so is the length of a side of the corresponding array (also sometimes called the dimension of the array). However, the rank of an order-2 tensor, in the sense of the minimum number of pure tensor terms in any sum representing it, coincides with the rank of the corresponding matrix.
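
In numpy terms (a small sketch to fix the vocabulary): the order of a tensor is the number of axes of its component array, the dimension is the length of each side, and for order 2 the tensor rank is the matrix rank.

```python
import numpy as np

T = np.zeros((4, 4, 4))  # an order-3 tensor on a 4-dimensional space
print(T.ndim)            # 3: the order ("rank" in one usage)
print(T.shape[0])        # 4: the dimension of the underlying space

# For order 2, tensor rank equals matrix rank:
# a pure tensor v ⊗ φ has rank 1.
v, phi = np.array([1.0, 2.0]), np.array([3.0, 4.0])
print(np.linalg.matrix_rank(np.outer(v, phi)))  # 1
```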

As a component matrix, the metric tensor satisfies \({g^{\mu}{}_{\lambda}g^{\lambda}{}_{\nu}=g^{\mu\lambda}g_{\lambda\nu}=g^{\mu}{}_{\nu}=\delta^{\mu}{}_{\nu}=I}\), hence the dual metric tensor \({g^{ab}}\) is also called the inverse metric tensor. Recalling the transformation of the top exterior product of basis vectors of an \({n}\)-dimensional vector space, we can derive an expression for the unit \({n}\)-vector in an arbitrary basis \({e_{\mu}=M^{\nu}{}_{\mu}\hat{e}_{\nu}}\) of the same orientation by using the component array of a metric of signature \({(r,s)}\) in that basis:

\begin{aligned}g_{\mu\nu} & =g\left(e_{\mu},e_{\nu}\right)\\
& =g\left(M^{\lambda}{}_{\mu}\hat{e}_{\lambda},M^{\sigma}{}_{\nu}\hat{e}_{\sigma}\right)\\
& =\sum_{\lambda}M^{\lambda}{}_{\mu}M^{\lambda}{}_{\nu}g\left(\hat{e}_{\lambda},\hat{e}_{\lambda}\right)\\
& =\left(M^{T}\tilde{M}\right)_{\mu\nu}\\
\Rightarrow\det\left(g\right) & =\det\left(M^{T}\tilde{M}\right)\\
& =\pm\left(\det\left(M\right)\right)^{2}\\
\Rightarrow e_{1}\wedge\dotsb\wedge e_{n} & =\sqrt{\left|\det\left(g\right)\right|}\:\hat{e}_{1}\wedge\dotsb\wedge\hat{e}_{n}\\
\Rightarrow\hat{\beta}^{1}\wedge\dotsb\wedge\hat{\beta}^{n} & =\sqrt{\left|\det\left(g\right)\right|}\:\beta^{1}\wedge\dotsb\wedge\beta^{n}
\end{aligned}

Here \({\tilde{M}}\) is \({M}\) with the entries negated in every row \({\lambda}\) for which \({g\left(\hat{e}_{\lambda},\hat{e}_{\lambda}\right)=-1}\); its determinant thus differs from that of \({M}\) by a sign when \({s}\) is odd.
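
This result is easy to verify numerically; the following sketch uses an arbitrary random basis and a metric of signature \({(3,1)}\), so that \({s}\) is odd and \({\det(g)}\) is negative.

```python
import numpy as np

rng = np.random.default_rng(2)
eta = np.diag([1.0, 1.0, 1.0, -1.0])  # orthonormal metric, signature (3,1)
M = rng.normal(size=(4, 4))           # arbitrary basis e_μ = M^ν_μ ê_ν
g = M.T @ eta @ M                     # g_μν = (Mᵀ M̃)_μν with M̃ = η M

# det(g) = (-1)^s det(M)², negative here since s = 1 is odd...
assert np.isclose(np.linalg.det(g), -np.linalg.det(M) ** 2)

# ...so the volume factor √|det(g)| equals |det(M)|.
assert np.isclose(np.sqrt(abs(np.linalg.det(g))),
                  abs(np.linalg.det(M)))
```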

Δ It is important to remember that the element \({g^{\mu\nu}}\) is the entry in row \({\mu}\) and column \({\nu}\) of the inverse of the component matrix \({g_{\mu\nu}}\); in particular, \({g^{\mu\nu}g_{\mu\nu}=r+s\neq1}\) (see the sketch following these notes).
Δ It is important to remember that the number \({\det\left(g\right)}\) is the determinant of the matrix with element \({g_{\mu\nu}}\) in row \({\mu}\) and column \({\nu}\), and that it depends on both the basis and the inner product.
Δ The symbol \({g}\) is frequently used to denote \({\mathrm{det}(g)}\), and sometimes \({\sqrt{\left|\mathrm{det}(g)\right|}}\), in addition to denoting the metric tensor itself.
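
A final sketch (again with an arbitrary random basis and signature \({(3,1)}\)) checks the first note above: the component matrix \({g^{\mu\nu}}\) is the matrix inverse of \({g_{\mu\nu}}\), and the full contraction gives the dimension \({r+s}\), not 1.

```python
import numpy as np

rng = np.random.default_rng(3)
eta = np.diag([1.0, 1.0, 1.0, -1.0])
M = rng.normal(size=(4, 4))
g = M.T @ eta @ M         # g_μν in an arbitrary basis
g_inv = np.linalg.inv(g)  # g^μν: the inverse component matrix

# g^μλ g_λν = δ^μ_ν as matrices...
assert np.allclose(g_inv @ g, np.eye(4))
# ...while the full contraction g^μν g_μν = r + s = 4, not 1.
assert np.isclose(np.sum(g_inv * g), 4.0)
```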
