Combinatorial notations

An alternating quantity can be represented in several different ways. For example, the exterior product of multiple vectors is defined to change sign under the exchange of any two of the vectors. This can be written

\(\displaystyle v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k}=\textrm{sign}(\pi)\, v_{\pi(1)}\wedge v_{\pi(2)}\wedge\dotsb\wedge v_{\pi(k)}, \)

where \({\pi}\) is any permutation of the \({k}\) indices, and sign\({(\pi)}\) is the sign of the permutation. Another way of writing it is

\(\displaystyle v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k}=\frac{1}{k!}\underset{i_{1},i_{2},\dotsc,i_{k}}{\sum}\varepsilon_{i_{1}i_{2}\dots i_{k}}v_{i_{1}}\wedge v_{i_{2}}\wedge\dotsb\wedge v_{i_{k}}, \)

where each index ranges from \({1}\) to \({k}\) and \({\varepsilon}\) is the permutation symbol (AKA completely anti-symmetric symbol, Levi-Civita symbol, alternating symbol, \({\varepsilon}\)-symbol), defined to be \({+1}\) for even permutations of the indices, \({-1}\) for odd permutations, and \({0}\) otherwise (i.e. if any index is repeated). To allow the summation sign to be dropped using the Einstein summation convention, the permutation symbol with upper indices is defined identically.

Δ Some texts, especially in the context of special relativity, define \({\varepsilon^{0\cdots n}=1}\) as we do, but define \({\varepsilon_{0\cdots n}=-1}\) (by lowering the indices with \({\eta_{\mu\nu}}\), as we will shortly cover).
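As a concrete illustration (a minimal sketch in Python, not part of the original text; the helper name levi_civita is ours), the permutation symbol can be computed directly from the parity of an index sequence:

```python
from itertools import permutations

def levi_civita(*indices):
    """Return +1/-1 for even/odd permutations of (1, ..., k), and 0 otherwise."""
    k = len(indices)
    if sorted(indices) != list(range(1, k + 1)):
        return 0  # repeated index, or values outside 1..k
    # The parity of the permutation is the parity of its inversion count.
    inversions = sum(1 for a in range(k) for b in range(a + 1, k)
                     if indices[a] > indices[b])
    return -1 if inversions % 2 else 1

# The six nonzero components in three dimensions:
print({p: levi_civita(*p) for p in permutations((1, 2, 3))})
```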

The generalized Kronecker delta

\(\displaystyle \delta_{\mu_{1}\cdots\mu_{k}}^{\nu_{1}\cdots\nu_{k}}\equiv\sum_{\pi}\textrm{sign}\left(\pi\right)\delta_{\mu_{1}}^{\nu_{\pi\left(1\right)}}\cdots\delta_{\mu_{k}}^{\nu_{\pi\left(k\right)}} \)

gives the sign of the permutation taking the upper indices to the lower indices, and vanishes if the lower indices are not a permutation of the upper ones or if any index is repeated. We can then relate this to the permutation symbol:

\begin{aligned}\delta_{\mu_{1}\cdots\mu_{k}}^{\nu_{1}\cdots\nu_{k}} & =\frac{1}{\left(n-k\right)!}\varepsilon^{\nu_{1}\cdots\nu_{k}\lambda_{k+1}\dots\lambda_{n}}\varepsilon_{\mu_{1}\cdots\mu_{k}\lambda_{k+1}\dots\lambda_{n}}\\
\Rightarrow\varepsilon^{\lambda_{1}\cdots\lambda_{n}}\varepsilon_{\lambda_{1}\dots\lambda_{n}} & =n!
\end{aligned}
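The second line above can be checked numerically; the following sketch (illustrative only, with helper names of our choosing) builds the generalized Kronecker delta as a signed sum over permutations and verifies the contraction \({\varepsilon^{\lambda_{1}\cdots\lambda_{n}}\varepsilon_{\lambda_{1}\dots\lambda_{n}}=n!}\) for \({n=3}\):

```python
from itertools import permutations, product
from math import factorial

def sign(perm):
    """Sign of a permutation given as a tuple of the values 0, ..., k-1."""
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def gen_delta(upper, lower):
    """Generalized Kronecker delta: a signed sum over permutations p of the
    upper indices, with each term the product of [lower[i] == upper[p[i]]]."""
    k = len(upper)
    return sum(sign(p) * all(lower[i] == upper[p[i]] for i in range(k))
               for p in permutations(range(k)))

def eps(indices):
    """Permutation symbol: the sign of `indices` as a permutation of (1, ..., n)."""
    n = len(indices)
    return gen_delta(tuple(range(1, n + 1)), indices)

n = 3
contraction = sum(eps(idx) ** 2 for idx in product(range(1, n + 1), repeat=n))
print(contraction, factorial(n))  # both are 6
```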

Δ It is important to remember that \({\varepsilon}\) gives the sign of the permutation of sequential integers, and is only defined for \({n}\) indices which take values \({1\cdots n}\), while \({\delta}\) gives the sign of the permutation of any number of indices. For example, if we write matrices with all upper indices, for two-dimensional symmetric and anti-symmetric matrices \({S^{ij}}\) and \({A^{ij}}\) we have \({S^{ij}\varepsilon_{ij}=0}\) and \({A^{ij}\varepsilon_{ij}=2A^{12}}\), while for \({n}\)-dimensional matrices we have \({S^{ij}\delta_{ij}^{kl}=0}\) and \({A^{ij}\delta_{ij}^{kl}=2A^{kl}}\).
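A small numerical check of the contractions in this note (an illustrative sketch assuming numpy, not part of the original text):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
S, A = (M + M.T) / 2, (M - M.T) / 2   # symmetric and anti-symmetric parts

# delta^{kl}_{ij} = d^k_i d^l_j - d^l_i d^k_j, built from identity matrices
d = np.eye(n)
delta = np.einsum('ki,lj->klij', d, d) - np.einsum('li,kj->klij', d, d)

print(np.allclose(np.einsum('ij,klij->kl', S, delta), 0))      # True
print(np.allclose(np.einsum('ij,klij->kl', A, delta), 2 * A))  # True
```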

For objects with many indices, multi-index notation is sometimes used, in which a multi-index \({I}\) can be defined as \({I\equiv i_{1},i_{2},\dotsc,i_{k}}\), but can also represent a sum or product over those indices. For example, the previous expression can be written

\begin{aligned}v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k} & =\frac{1}{k!}\underset{I}{\sum}\varepsilon^{I}v_{i_{1}}\wedge v_{i_{2}}\wedge\dotsb\wedge v_{i_{k}}\\ & =\frac{1}{k!}\varepsilon^{I}v_{I}.\end{aligned}

Δ Note that multi-index notation is potentially ambiguous and much must be inferred from context, since the number of indices \({k}\) is not explicitly noted, and the sequence of indices may be applied to either one object or any sum or product.

Another example of an alternating quantity is the determinant of an \({n\times n}\) matrix \({M^{i}{}_{j}}\), which can be written

\begin{aligned}\textrm{det}(M) & =\underset{\pi}{\sum}\textrm{sign}\left(\pi\right)M^{1}{}_{\pi(1)}M^{2}{}_{\pi(2)}\dotsm M^{n}{}_{\pi(n)}\\
& =\underset{i_{1},i_{2},\dotsc,i_{n}}{\sum}\varepsilon^{i_{1}i_{2}\dots i_{n}}M^{1}{}_{i_{1}}M^{2}{}_{i_{2}}\dotsm M^{n}{}_{i_{n}},\end{aligned}

where the first sum is over all permutations \({\pi}\) of the \({n}\) second indices of the matrix \({M^{i}{}_{j}}\). Using the previous relation for the exterior product in terms of the permutation symbol, we can see that the transformation of the top exterior product of basis vectors under a change of basis \({e_{\mu}^{\prime}=M^{\nu}{}_{\mu}e_{\nu}}\) is

\(e_{1}^{\prime}\wedge e_{2}^{\prime}\wedge\dotsb\wedge e_{n}^{\prime}=\textrm{det}(M)\: e_{1}\wedge e_{2}\wedge\dotsb\wedge e_{n}, \)

which reminds us of the Jacobian determinant from integral calculus, and as we will see makes the exterior product a natural way to express the volume element.
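As a sanity check (an illustrative sketch, not part of the original text), the permutation-sum formula for the determinant can be compared against a library routine:

```python
import numpy as np
from itertools import permutations

def det_by_permutations(M):
    """det(M) = sum over permutations pi of sign(pi) M[0, pi(0)] ... M[n-1, pi(n-1)]."""
    n = M.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                         if perm[a] > perm[b])
        parity = -1 if inversions % 2 else 1
        total += parity * np.prod([M[i, perm[i]] for i in range(n)])
    return total

M = np.random.default_rng(1).standard_normal((4, 4))
print(np.isclose(det_by_permutations(M), np.linalg.det(M)))  # True
```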

The above relationship between the exterior product and the determinant means that under a positive definite inner product, for \({k}\) arbitrary vectors \({v_{\mu}=M^{\nu}{}_{\mu}\hat{e}_{\nu}}\) the quantity \({P\equiv v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k}}\) satisfies \({\left\Vert P\right\Vert =\sqrt{\left\langle P,P\right\rangle }=\left|\mathrm{det}(M)\right|}\), which equals the volume of the parallelepiped defined by the vectors for orthonormal \({\hat{e}_{\nu}}\). There are other ways in which \({P}\) behaves like a parallelepiped, and it is often useful to picture it as such.
Δ Since the specific vectors in \({P=v_{1}\wedge v_{2}\wedge\dotsb\wedge v_{k}}\) can have many values without changing \({P}\) itself (e.g. \({v\wedge w=(v+w)\wedge w}\)), a more accurate visualization might be the oriented subspace associated with the parallelepiped along with a basis-independent specification of volume. In particular, the change of basis formula above means that given any pseudo inner product, \({P}\) can always be expressed as the exterior product of \({k}\) orthogonal vectors.
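As a concrete check of the norm relation (an illustrative sketch, assuming an orthonormal basis, the induced inner product on \({\Lambda^{k}V}\), and \({k=n}\) so that \({\textrm{det}(M)}\) is defined), \({\left\langle P,P\right\rangle}\) is the Gram determinant \({\textrm{det}\left\langle v_{a},v_{b}\right\rangle}\):

```python
import numpy as np

n = 3
M = np.random.default_rng(2).standard_normal((n, n))  # columns are the vectors v_mu
G = M.T @ M                                           # Gram matrix <v_a, v_b>
print(np.isclose(np.sqrt(np.linalg.det(G)), abs(np.linalg.det(M))))  # True
```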

If \({V}\) is \({n}\)-dimensional and has a basis \({e_{\mu}}\), a general element \({A}\) of \({\Lambda^{k}V}\) can be written in terms of a basis for \({\Lambda^{k}V}\) as

\(\displaystyle A=\underset{\mu_{1} < \dotsb < \mu_{k}}{\sum}A^{\mu_{1}\dots\mu_{k}}e_{\mu_{1}}\wedge\dotsb\wedge e_{\mu_{k}}. \)

Here the sum is only over ordered sequences of indices, since after anti-symmetric elements are identified, only these exterior products are linearly independent. Each index can take on any value between \({1}\) and \({n}\). We can also write

\(\displaystyle A=\frac{1}{k!}\underset{\mu_{1},\dotsc,\mu_{k}}{\sum}A^{\mu_{1}\dots\mu_{k}}e_{\mu_{1}}\wedge\dotsb\wedge e_{\mu_{k}}, \)

where the coefficient is now defined for all combinations of indices, and its value changes sign under any exchange of indices (and thus vanishes if any two indices have the same value). The factorial ensures that the values for ordered sequences of indices match the above expression.
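For example (a worked case added here for illustration), for \({n=k=2}\) the terms with repeated indices vanish and \({A^{21}=-A^{12}}\), so the two expressions agree:

\(\displaystyle \frac{1}{2!}\underset{\mu_{1},\mu_{2}}{\sum}A^{\mu_{1}\mu_{2}}e_{\mu_{1}}\wedge e_{\mu_{2}}=\frac{1}{2}\left(A^{12}e_{1}\wedge e_{2}+A^{21}e_{2}\wedge e_{1}\right)=A^{12}e_{1}\wedge e_{2}. \)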

The first expression shows that \({\Lambda^{k}V}\) is a vector space with dimension equal to the number of distinct subsets of \({k}\) indices chosen from the \({n}\) available, i.e. its dimension is equal to the binomial coefficient “\({n}\) choose \({k}\)”

\(\displaystyle \left(\begin{array}{c} n\\ k \end{array}\right)\equiv\frac{n!}{k!\left(n-k\right)!}. \)
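As a quick illustration (a sketch using Python's standard library, not part of the original text), the ordered index sequences \({\mu_{1}<\dotsb<\mu_{k}}\) are exactly the \({k}\)-element subsets of \({\{1,\dotsc,n\}}\), and counting them reproduces the binomial coefficient:

```python
from itertools import combinations
from math import comb

n, k = 5, 3
ordered_index_sequences = list(combinations(range(1, n + 1), k))
print(len(ordered_index_sequences), comb(n, k))  # both are 10
```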

A general element of \({\Lambda V}\) then has the form

\(\displaystyle \underset{0\leq k\leq n}{\bigoplus}\left[\underset{\mu_{1} < \dotsb < \mu_{k}}{\sum}A^{\mu_{1}\dots\mu_{k}}e_{\mu_{1}}\wedge\dotsb\wedge e_{\mu_{k}}\right], \)

from which we can calculate that \({\Lambda V}\) has dimension \({2^{n}}\).
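Explicitly, summing the dimensions of the \({\Lambda^{k}V}\) using the binomial theorem gives

\(\displaystyle \underset{0\leq k\leq n}{\sum}\left(\begin{array}{c} n\\ k \end{array}\right)=\left(1+1\right)^{n}=2^{n}. \)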
