Section 7.2 Dual Space, Tensor Product, and Exterior Algebra
We are all familiar with representing a hyperplane in \(\mathbb R^{n}\) in the form of
\begin{equation*}
a_{1}x_{1}+\ldots+a_{n}x_{n}=b
\text{ for some }a_{1},\ldots, a_{n}, \text{ not all zero, and some } b.
\end{equation*}
Restricting to the case \(b=0\text{,}\) such a hyperplane can be thought of as the null space of the linear function
\begin{equation*}
\alpha ({\mathbf x}) := a_{1}x_{1}+\ldots+a_{n}x_{n}
\end{equation*}
defined on the vectors \({\mathbf x}\in {\mathbb R}^{n}\text{.}\)
Definition 7.2.1. Covectors and Dual Space.
The set of all linear functions on a (finite dimensional) vector space
\(V\) is called the dual space of
\(V\text{,}\) and is denoted as
\(V^{*}\text{.}\) Elements of
\(V^{*}\) are called covectors.
A multilinear function of order
\(m\) is a function
\(\alpha:V\times\cdots\times V\mapsto \bbR\) (with
\(m\) factors of
\(V\)) such that
\(\alpha(\bx_{1},\cdots, \bx_{m})\) is linear in each
\(\bx_{i}\) when the others are held fixed. Such an
\(\alpha\) is also called a covariant tensor of order
\(m\text{.}\) The space of covariant tensors of order
\(m\) on
\(V\) is denoted as
\({\mathcal T}^{m}(V)\text{.}\)
Each non-zero element in
\(V^{*}\) determines a hyperplane (through the origin) in
\(V\text{,}\) namely, a subspace of codimension one; two such elements determine the same hyperplane if and only if they are nonzero multiples of each other.
There are at least two ways to describe a subspace of
\(V\) of codimension bigger than one. One way is as the intersection of several codimension-one hyperplanes:
\(\left\{{\mathbf x}\in V: \alpha_{i} ({\mathbf x})=0, 1\le i \le m\right\}\text{,}\) namely, as the set of solutions of a system of
\(m\) linear homogeneous equations. If
\(\dim \text{Span} \left\{ \alpha_{i}, 1\le i \le m\right\}=l\text{,}\) then this is a codimension
\(l\) subspace. This is seen by writing out each
\(\alpha_{i}\) in terms of its coefficients, then
\(\left\{{\mathbf x}\in V: \alpha_{i} ({\mathbf x})=0, 1\le i \le m\right\}\) is the solution set of a system of
\(m\) linear equations in
\(n\) variables, with a coefficient matrix whose row rank is
\(l\text{,}\) thus the solution space is
\(n-l\) dimensional.
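This rank count can be sketched numerically with NumPy; the matrix below is hypothetical, with its third row equal to the sum of the first two, so \(l=2\) and the solution space of the system in \(\bbR^{4}\) is \(4-2=2\) dimensional:

```python
import numpy as np

# Hypothetical coefficient matrix of m = 3 linear functions on R^4;
# row 3 = row 1 + row 2, so the alpha_i span only a 2-dimensional space.
M = np.array([[1.0, 0.0, 2.0, -1.0],
              [0.0, 1.0, 1.0,  3.0],
              [1.0, 1.0, 3.0,  2.0]])
n = M.shape[1]
l = np.linalg.matrix_rank(M)   # l = dim Span{alpha_i}
nullity = n - l                # dimension of the solution subspace

# A basis of the null space from the SVD: the right singular vectors
# with vanishing singular value span {x : Mx = 0}.
_, s, Vt = np.linalg.svd(M)
null_basis = Vt[l:]            # (n - l) orthonormal vectors
assert np.allclose(M @ null_basis.T, 0)
```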
Another way is to choose a basis \(\left\{ {\mathbf u}_{1}, \ldots, {\mathbf u}_{k}\right\}\) for a subspace of \(V\text{.}\) But there are other choices for a basis of this subspace. Exterior algebra provides a convenient tool to identify a subspace (with orientation). Suppose \(\left\{ {\mathbf v}_{1}, \ldots, {\mathbf v}_{k}\right\}\) is another basis of the subspace, then we can write
\begin{equation}
{\mathbf v}_{j}=\sum_{i=1}^{k}a_{ij} {\mathbf u}_{i}, 1\le j \le k\tag{7.2.1}
\end{equation}
for some coefficients
\(a_{ij} \text{.}\) These coefficients form an invertible
\(k\times k\) matrix
\(A\) with entries
\(a_{ij}\text{.}\)
Equation (7.2.1) can be written compactly in matrix form:
\begin{equation}
[{\mathbf v}_{1}\, \cdots \,{\mathbf v}_{k}]=[{\mathbf u}_{1}\, \cdots \,{\mathbf u}_{k}]A.\tag{7.2.2}
\end{equation}
It turns out that the following product of vectors, called the
exterior product, a generalization of the cross product in
\(\bbR^{3}\text{,}\) provides an efficient tool to describe oriented subspaces of
\(V\text{.}\) We first give a preliminary formal definition to illustrate its usage; a more precise definition, together with the justification for the existence/construction of this product, will be given shortly.
Definition 7.2.2. Exterior Product.
Let \(V\) be a vector space. The exterior product is a product \(\bu\wedge \bv\) between \(\bu,\bv\in V\) such that it is linear in each factor and antisymmetric in \(\bu,\bv\text{:}\)
\begin{equation*}
\bu\wedge \bv=- \bv\wedge \bu.
\end{equation*}
Furthermore, this product extends to any \(k\)-tuple of vectors \(\left\{ {\mathbf u}_{1}, \ldots, {\mathbf u}_{k}\right\}\) in \(V\text{;}\) it obeys associativity, is linear in each factor, and is antisymmetric in each adjacent pair.
As an illustration, take \(k=3\text{,}\) then
\begin{equation*}
( \bu_{1}\wedge \bu_{2})\wedge \bu_{3}= \bu_{1}\wedge (\bu_{2}\wedge \bu_{3})
=- \bu_{2}\wedge \bu_{1}\wedge \bu_{3}=-
\bu_{1}\wedge \bu_{3}\wedge \bu_{2}\text{.}
\end{equation*}
Back to
(7.2.2). The algebraic rules of exterior algebra lead to
\begin{equation}
{\mathbf v}_{1}\wedge \ldots\wedge {\mathbf v}_{k}=( \det A )\,
{\mathbf u}_{1}\wedge \ldots\wedge {\mathbf u}_{k}.\tag{7.2.3}
\end{equation}
This is particularly easy to see in the case of \(k=2\text{:}\)
\begin{equation*}
{\mathbf v}_{1}\wedge {\mathbf v}_{2}= \left( a_{11} {\mathbf u}_{1}+a_{21} {\mathbf u}_{2}\right)
\wedge \left( a_{12} {\mathbf u}_{1}+a_{22} {\mathbf u}_{2}\right)
= \left( a_{11}a_{22}-a_{12}a_{21}\right) {\mathbf u}_{1}\wedge {\mathbf u}_{2}.
\end{equation*}
Thus the exterior product \({\mathbf v}_{1}\wedge \ldots\wedge {\mathbf v}_{k}\) of a basis, up to the scaling factor \(\det A\text{,}\) is independent of the choice of basis for the subspace. In fact, the sign of \(\det A\) tells whether the two bases \(\left\{ {\mathbf u}_{1}, \ldots, {\mathbf u}_{k}\right\}\) and \(\left\{ {\mathbf v}_{1}, \ldots, {\mathbf v}_{k}\right\}\) determine the same or opposite orientations of the subspace.
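Identity (7.2.3) for \(k=2\) can also be verified numerically by representing \(\bu\wedge\bv\) as the antisymmetric matrix \(\bu\bv^{T}-\bv\bu^{T}\) (a standard coordinate model of \(\Lambda^{2}\)); the basis and change-of-basis matrix below are random, illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
U = rng.standard_normal((n, k))   # columns u_1, u_2: a basis of a plane in R^5
A = rng.standard_normal((k, k))   # change-of-basis matrix (invertible a.s.)
V = U @ A                         # v_j = sum_i a_ij u_i, as in (7.2.2)

def wedge2(x, y):
    """Represent x ∧ y as the antisymmetric matrix x y^T - y x^T."""
    return np.outer(x, y) - np.outer(y, x)

# (7.2.3) for k = 2: v_1 ∧ v_2 = det(A) u_1 ∧ u_2
lhs = wedge2(V[:, 0], V[:, 1])
rhs = np.linalg.det(A) * wedge2(U[:, 0], U[:, 1])
assert np.allclose(lhs, rhs)
```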
Proposition 7.2.3.
For any basis \(\left\{ {\mathbf u}_{1}, \ldots, {\mathbf u}_{n}\right\}\) of a vector space \(V\text{,}\) there is a unique basis \(\left\{\alpha_{1},\ldots, \alpha_{n}\right\}\) of \(V^{*}\) such that
\begin{equation}
\alpha_{i}({\mathbf u}_{j})=\delta_{ij} \text{ for } 1\le i, j \le n.\tag{7.2.4}
\end{equation}
Conversely, for any basis
\(\left\{\alpha_{1},\ldots, \alpha_{n}\right\}\) of
\(V^{*}\text{,}\) there is a unique basis
\(\left\{ {\mathbf u}_{1}, \ldots, {\mathbf u}_{n}\right\}\) of
\(V\) such that
(7.2.4) holds.
Definition 7.2.4. Dual Basis.
\(\left\{\alpha_{1},\ldots, \alpha_{n}\right\}\) above is called the dual basis of
\(\left\{ {\mathbf u}_{1}, \ldots, {\mathbf u}_{n}\right\}\text{,}\) and the latter is called the dual basis of
\(\left\{\alpha_{1},\ldots, \alpha_{n}\right\}\text{.}\)
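In coordinates the dual basis is easy to compute: if the \(\bu_{j}\) are the columns of an invertible matrix \(B\text{,}\) then the rows of \(B^{-1}\) are the coefficient vectors of the \(\alpha_{i}\text{,}\) since \(B^{-1}B=I\) is precisely condition (7.2.4). A short sketch with a hypothetical basis of \(\bbR^{3}\):

```python
import numpy as np

B = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # columns u_1, u_2, u_3: a basis of R^3
Alpha = np.linalg.inv(B)          # row i: coefficient vector of alpha_i

# alpha_i(u_j) = (row i of B^{-1}) . (column j of B) = delta_ij, i.e. (7.2.4)
assert np.allclose(Alpha @ B, np.eye(3))
```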
Definition 7.2.5. Inner Product on a Vector Space.
An inner product on a vector space
\(V\) is a symmetric positive definite covariant tensor of order
\(2\text{,}\) namely, a bilinear and symmetric function
\(g\) on
\(V\) such that
\(g({\mathbf x}, {\mathbf x})>0\) unless
\({\mathbf x}={\mathbf 0}\text{.}\) Such a
\(g\) is also called a metric.
On an inner product space one can define orthonormal bases. If two bases
\(\left\{ {\mathbf u}_{1}, \ldots, {\mathbf u}_{n}\right\}\) and
\(\left\{ {\mathbf v}_{1}, \ldots, {\mathbf v}_{n}\right\}\) are orthonormal with respect to a metric
\(g\text{,}\) and are related via
(7.2.1), then the matrix
\(A\) must be an orthogonal matrix.
Definition 7.2.6. Isometry of a Metric.
A linear map
\(T: V\mapsto V\) is called an isometry of
\(g\text{,}\) if
\(g(T {\mathbf x}, T {\mathbf x})=g({\mathbf x}, {\mathbf x}) \) holds for all
\({\mathbf x} \in V\text{.}\)
This condition is equivalent to
\(g(T {\mathbf x}, T {\mathbf y})=g({\mathbf x}, {\mathbf y}) \) for all
\({\mathbf x}, {\mathbf y} \in V\text{.}\)
On an inner product space \(V\) with \(g\) as its inner product, there is an isomorphism \(\sharp: V^{*}\mapsto V\) such that
\begin{equation*}
\alpha(\mathbf x)=g(\sharp \alpha, \mathbf x) \text{ for } \alpha \in V^{*}, \mathbf x\in V.
\end{equation*}
This induces an inner product on \(V^{*}\) via
\begin{equation*}
g(\alpha, \beta)=g(\sharp \alpha, \sharp \beta) \text{ for } \alpha, \beta \in V^{*}.
\end{equation*}
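In coordinates, if \(G\) with \(g_{ij}=g(\bu_{i},\bu_{j})\) is the (symmetric positive definite) matrix of \(g\) in a basis and \(a\) is the coefficient vector of \(\alpha\) in the dual basis, then \(\sharp \alpha = G^{-1}a\) (this is Exercise 3 below). A numerical sketch with a made-up metric on \(\bbR^{3}\):

```python
import numpy as np

# A made-up metric on R^3: G must be symmetric positive definite.
G = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 3.0]])
a = np.array([1.0, -2.0, 0.5])    # coefficients of a covector alpha

sharp_a = np.linalg.solve(G, a)   # sharp(alpha) = G^{-1} a

# Check the defining identity alpha(x) = g(sharp(alpha), x) on random vectors.
rng = np.random.default_rng(1)
for _ in range(3):
    x = rng.standard_normal(3)
    assert np.isclose(a @ x, sharp_a @ G @ x)
```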
An abstract vector space does not have a natural definition of volume even for cells of the form
\(\left\{s_{1}{\mathbf u}_{1}+ \ldots+ s_{k} {\mathbf u}_{k}: 0\le s_{i}\le 1,
1\le i \le k\right\}\text{.}\) But once an inner product
\(g\) is introduced, it is natural to define the
\(k\)-dimensional volume of such cells to be
\(1\) whenever
\(\left\{ {\mathbf u}_{1}, \ldots, {\mathbf u}_{k}\right\}\) is an orthonormal basis. This volume is invariant under all isometries of
\(g\text{.}\) If
\(\left\{ {\mathbf u}_{1}, \ldots, {\mathbf u}_{k}\right\}\) is orthonormal with respect to
\(g\) and
(7.2.1) holds, then
(7.2.3) shows that the corresponding cell generated by
\(\left\{ {\mathbf v}_{1}, \ldots, {\mathbf v}_{k}\right\}\) has volume equal to
\(|\det A|\text{;}\) in other words, the exterior vector
\({\mathbf v}_{1}\wedge \ldots\wedge {\mathbf v}_{k}\) encodes both the volume and orientation of the parallelepiped formed with these vectors as edges.
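Numerically, the \(k\)-dimensional volume of the cell spanned by \(\bv_{1},\ldots,\bv_{k}\) is given by the Gram-determinant formula \(\sqrt{\det(V^{T}V)}\text{,}\) where the \(\bv_{j}\) are the columns of \(V\text{;}\) with orthonormal \(\bu_{i}\) this agrees with \(|\det A|\text{.}\) The data below are random and illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 3
# An orthonormal basis of a 3-dimensional subspace of R^4, via QR.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
A = rng.standard_normal((k, k))   # change of basis (invertible a.s.)
V = Q @ A                         # v_j = sum_i a_ij u_i

# Gram-determinant formula for the k-volume of the cell spanned by the v_j;
# here V^T V = A^T Q^T Q A = A^T A, so the volume equals |det A|.
vol = np.sqrt(np.linalg.det(V.T @ V))
assert np.isclose(vol, abs(np.linalg.det(A)))
```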
An inner product on
\(V\) is just a special kind of bilinear function on
\(V\text{.}\)
Definition 7.2.7. Tensor Product.
The tensor product of any two linear functions \(\alpha, \beta\) on \(V\) is the bilinear function
\begin{equation*}
\alpha \otimes \beta ({\mathbf x}, {\mathbf y})= \alpha ({\mathbf x}) \beta ({\mathbf y})
\text{ for } {\mathbf x}, {\mathbf y} \in V.
\end{equation*}
If \(\alpha\in {\mathcal T}^{m}(V)\) and \(\beta \in {\mathcal T}^{l}(V)\text{,}\) then \(\alpha \otimes \beta \in {\mathcal T}^{m+l}(V)\) is defined by
\begin{align*}
\amp \alpha \otimes \beta (\bx_{1}, \ldots, \bx_{m},\bx_{m+1},\ldots, \bx_{m+l})\\
= \amp \alpha(\bx_{1}, \ldots, \bx_{m})\beta (\bx_{m+1},\ldots, \bx_{m+l})
\text{ for $\bx_{1}, \ldots, \bx_{m},\bx_{m+1},\ldots, \bx_{m+l} \in V$.}
\end{align*}
Note that
\(\alpha \otimes \beta \ne \beta \otimes \alpha \) in general.
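In coordinates, the tensor product of two covectors is represented by the outer product of their coefficient vectors, since \((\alpha\otimes\beta)(\bx,\by)=\sum_{i,j}a_{i}b_{j}x_{i}y_{j}\text{.}\) The sketch below uses made-up coefficients and also exhibits the non-commutativity:

```python
import numpy as np

a = np.array([1.0, 2.0])    # coefficients of alpha on R^2 (made up)
b = np.array([3.0, -1.0])   # coefficients of beta

def tensor(p, q):
    """Matrix of the bilinear form (p ⊗ q)(x, y) = p(x) q(y)."""
    return np.outer(p, q)   # entry (i, j) is p_i q_j

T1, T2 = tensor(a, b), tensor(b, a)
x, y = np.array([1.0, 1.0]), np.array([0.0, 2.0])
assert np.isclose(x @ T1 @ y, (a @ x) * (b @ y))   # (alpha ⊗ beta)(x, y)
assert not np.allclose(T1, T2)    # alpha ⊗ beta != beta ⊗ alpha in general
```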
For any basis
\(\left\{\alpha_{1},\ldots, \alpha_{n}\right\}\) of
\(V^{*}\text{,}\) the sum \(\sum_{i=1}^{n}\alpha_{i}\otimes \alpha_{i}\) defines an inner product on
\(V\) in which the dual basis of \(\left\{\alpha_{1},\ldots, \alpha_{n}\right\}\) is orthonormal.
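A coordinate check of this claim, with a hypothetical basis of \(\bbR^{2}\): if the rows of \(B^{-1}\) represent the \(\alpha_{i}\text{,}\) then \(\sum_{i}\alpha_{i}\otimes\alpha_{i}\) has matrix \((B^{-1})^{T}B^{-1}\text{,}\) and the dual basis (the columns of \(B\)) is orthonormal for it:

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [0.0, 1.0]])         # columns: a basis u_1, u_2 of R^2 (made up)
Alpha = np.linalg.inv(B)           # rows: the dual basis alpha_1, alpha_2

# g = sum_i alpha_i ⊗ alpha_i has matrix sum_i a_i a_i^T = Alpha^T Alpha.
G = Alpha.T @ Alpha
# g(u_i, u_j) = delta_ij: the dual basis is orthonormal in this metric.
assert np.allclose(B.T @ G @ B, np.eye(2))
```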
Definition 7.2.8. Alternating Form.
An \(m\)-linear function \(\omega \in {\mathcal T}^{m}(V)\) is called an alternating form on \(V\) if
\begin{equation*}
\omega({\mathbf x}_{\sigma(1)}, \ldots, {\mathbf x}_{\sigma(m)})
=\text{sgn}\ \sigma\, \omega({\mathbf x}_1, \ldots, {\mathbf x}_m) \text{ for all permutations } \sigma.
\end{equation*}
The subspace of alternating forms on
\(V\) is denoted as
\(\Lambda^{m}(V)\text{.}\)
Definition 7.2.9. Alt Map.
We define \(\text{Alt}: {\mathcal T}^{m}(V) \mapsto \Lambda^{m}(V)\) by
\begin{equation*}
\text{Alt}(\omega )({\mathbf x}_1, \ldots, {\mathbf x}_m)
=\frac{1}{m!}\sum_{\sigma}\text{sgn}\ \sigma\, \omega({\mathbf x}_{\sigma(1)}, \ldots, {\mathbf x}_{\sigma(m)})\text{,}
\end{equation*}
and define the wedge product, also called the exterior product, by
\begin{equation*}
\alpha \wedge \beta =\frac{(m+l)!}{m!\, l!} \text{Alt}(\alpha \otimes \beta) \text{ for }
\alpha \in {\Lambda}^{m}(V), \beta \in {\Lambda}^{l}(V).
\end{equation*}
Note that
\begin{equation*}
\text{Alt}(\omega )= \omega \text{ if $\omega \in {\Lambda}^{m}(V)$.}
\end{equation*}
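The Alt operator and the wedge product can be sketched in code by storing an order-\(m\) tensor as the \(m\)-dimensional array of its values on basis vectors; the helper names `sgn`, `alt`, and `wedge` below are our own. The final assertion reproduces the \(m=l=1\) identity \(\alpha\wedge\beta=\alpha\otimes\beta-\beta\otimes\alpha\) derived below:

```python
import itertools
import math
import numpy as np

def sgn(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def alt(T):
    """Alt of an order-m tensor stored as an m-dimensional array of values."""
    m = T.ndim
    out = np.zeros_like(T)
    for perm in itertools.permutations(range(m)):
        out = out + sgn(perm) * np.transpose(T, perm)
    return out / math.factorial(m)

def wedge(P, Q):
    """alpha ∧ beta = ((m+l)!/(m! l!)) Alt(alpha ⊗ beta)."""
    m, l = P.ndim, Q.ndim
    coeff = math.factorial(m + l) / (math.factorial(m) * math.factorial(l))
    return coeff * alt(np.multiply.outer(P, Q))

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 1.0, -1.0])
assert np.allclose(wedge(a, b), np.outer(a, b) - np.outer(b, a))
```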
The following property will be used often.
\begin{equation}
\left(\alpha \wedge \beta \right)\wedge \gamma =\alpha \wedge \left(\beta \wedge \gamma \right)
=\frac{(k+l+m)!}{k!\, l!\, m!} \text{Alt}(\alpha\otimes \beta \otimes \gamma)\tag{7.2.5}
\end{equation}
for \(\alpha \in \Lambda^{m}(V), \beta\in \Lambda^{l}(V), \gamma\in \Lambda^{k}(V)\text{.}\)
In the case of \(m=l=1\text{,}\)
\begin{equation*}
\text{Alt}(\alpha \otimes \beta)=\frac 12 \left( \alpha \otimes \beta-\beta \otimes \alpha \right),
\end{equation*}
and
\begin{equation*}
\alpha \wedge \beta=\alpha \otimes \beta-\beta \otimes \alpha.
\end{equation*}
In the case of \(k=l=m=1\) we also get
\begin{align*}
\amp \left(\alpha \wedge \beta \right)\wedge \gamma\\
= \amp \alpha\otimes \beta \otimes \gamma
+ \beta \otimes \gamma \otimes \alpha+ \gamma \otimes \alpha\otimes \beta\\
\amp
- \beta \otimes \alpha\otimes \gamma - \alpha \otimes \gamma \otimes \beta
- \gamma \otimes \beta \otimes \alpha.
\end{align*}
It also follows that \(\alpha \wedge \alpha =0\) if \(\alpha \in \Lambda^{1}(V)\text{.}\) However, if \(n\ge 4\) and \(\left\{\alpha_{1},\ldots, \alpha_{n}\right\}\) is a basis of \(V^{*}\text{,}\) then
\begin{equation*}
(\alpha_{1} \wedge \alpha_{2}+\alpha_{3} \wedge \alpha_{4})\wedge
(\alpha_{1} \wedge \alpha_{2}+\alpha_{3} \wedge \alpha_{4})
=2 \alpha_{1} \wedge \alpha_{2}\wedge \alpha_{3} \wedge \alpha_{4}\ne 0.
\end{equation*}
If
\(\left\{\alpha_{1},\ldots, \alpha_{n}\right\}\) is a basis of
\(V^{*}\text{,}\) then
\(\{\alpha_{i}\otimes \alpha_{j}: 1\le i, j\le n\}\) forms a basis of
\({\mathcal T}^{2}(V)\text{,}\) and
\(\{\alpha_{i}\wedge \alpha_{j}: 1\le i < j\le n\}\) forms a basis of
\(\Lambda^{2}(V)\text{.}\) Likewise, \(\{\alpha_{i}\otimes \alpha_{j} + \alpha_{j}\otimes \alpha_{i}: 1\le i\le j\le n\}\) forms a basis of
\({\mathcal S}^{2}(V)\text{,}\) the space of symmetric \(2\)-tensors on
\(V\text{.}\)
Alternatively, a tensor
\(g\in {\mathcal S}^{2}(V)\) is determined by its actions on
\(\{(\bu_{i}, \bu_{j}): 1\le i\le j\le n\}\text{,}\) while a tensor
\(\omega\in \Lambda^{2}(V)\) is determined by its actions on
\(\{(\bu_{i}, \bu_{j}): 1\le i < j\le n\}\text{.}\) Here
\(\{\bu_{1}, \ldots, \bu_{n}\}\) is the dual basis of
\(\left\{\alpha_{1},\ldots, \alpha_{n}\right\}\text{.}\)
For any \(\bx =\sum_{i=1}^{n}x_{i} \bu_{i}\) and \(\by =\sum_{i=1}^{n}y_{i} \bu_{i}\text{,}\) a symmetric \(2\)-tensor \(g\) satisfies
\begin{align*}
g(\bx, \by)= \amp g(\sum_{i=1}^{n}x_{i} \bu_{i}, \sum_{i=1}^{n}y_{i} \bu_{i})\\
=\amp \sum_{i=1}^{n}\sum_{j=1}^{n}x_{i} y_{j }g( \bu_{i}, \bu_{j})\\
=\amp \sum_{i=1}^{n} x_{i} y_{i }g( \bu_{i}, \bu_{i}) +
\sum_{i < j} \left(x_{i} y_{j }+ x_{j}y_{i}\right) g( \bu_{i}, \bu_{j})
\end{align*}
while an antisymmetric \(2\)-tensor \(\omega\) satisfies
\begin{equation*}
\omega(\bx, \by)= \sum_{i=1}^{n}\sum_{j=1}^{n}x_{i} y_{j }\omega( \bu_{i}, \bu_{j})
=\sum_{i < j} \left(x_{i} y_{j }- x_{j}y_{i}\right) \omega ( \bu_{i}, \bu_{j}).
\end{equation*}
For example, when \(n=2\text{,}\) \({\mathcal S}^{2}(\bbR^{2})\) is three dimensional, while \(\Lambda^{2}(\bbR^{2})\) is one dimensional, and \(g\in {\mathcal S}^{2}(\bbR^{2})\) is determined by
\begin{equation*}
g(\bx, \by)=x_{1}y_{1}g(\bu_{1}, \bu_{1})+(x_{1}y_{2}+x_{2}y_{1})g(\bu_{1}, \bu_{2})+
x_{2}y_{2}g(\bu_{2}, \bu_{2}),
\end{equation*}
while \(\omega \in { \Lambda}^{2}(\bbR^{2})\) is determined by
\begin{equation*}
\omega(\bx, \by)=(x_{1}y_{2}-x_{2}y_{1})\omega(\bu_{1}, \bu_{2}).
\end{equation*}
In computations \(\alpha_{i}\wedge \alpha_{j}\) may sometimes show up even when \(i\ge j\text{,}\) but to identify the coefficients of the resulting alternating tensor, one needs to rewrite all terms in the basis discussed above. For example, \(a\, \alpha\otimes \beta + b\, \beta\otimes \alpha \in {\mathcal T}^{2}(V)\text{,}\) and if we apply the Alt operation to it, we get a tensor in \(\Lambda^{2}(V)\text{,}\)
\begin{equation*}
\frac{a}{2} \left(\alpha\otimes \beta- \beta\otimes \alpha\right)
+\frac{b}{2} \left( \beta\otimes \alpha- \alpha\otimes \beta \right),
\end{equation*}
which can be written as \(\frac{a}{2} \alpha\wedge \beta + \frac{b}{2}\beta\wedge\alpha\text{,}\) and is identified as \(\frac{a-b}{2} \alpha\wedge \beta\) in the standard basis. The discussion here applies to higher order tensors as well.
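For 2-tensors this is easy to check numerically, since Alt is just matrix antisymmetrization \(T\mapsto (T-T^{T})/2\text{;}\) the coefficients \(a, b\) below are arbitrary:

```python
import numpy as np

a, b = 3.0, 1.0                  # arbitrary coefficients
alpha = np.array([1.0, 0.0])     # alpha and beta in the standard dual basis of R^2
beta = np.array([0.0, 1.0])

T = a * np.outer(alpha, beta) + b * np.outer(beta, alpha)   # a α⊗β + b β⊗α
AltT = (T - T.T) / 2                                        # Alt on 2-tensors
wedge_ab = np.outer(alpha, beta) - np.outer(beta, alpha)    # α ∧ β
assert np.allclose(AltT, (a - b) / 2 * wedge_ab)
```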
We can treat vectors in
\(V\) as linear functions on
\(V^{*}\text{,}\) so that the tensor product and exterior product of vectors in
\(V\) make sense. For instance, for any
\(\bu, \bv \in V\text{,}\) \(\bu\otimes \bv \in {\mathcal T}^{2}(V^{*})\) in the sense that
\(\bu\otimes \bv (\alpha, \beta)= \alpha (\bu) \beta (\bv)\) and
\(\bu\wedge\bv \in {\Lambda}^{2}(V^{*})\) in the sense that
\(\bu\wedge \bv (\alpha, \beta)= \alpha (\bu) \beta (\bv)-\alpha(\bv)\beta (\bu)\text{.}\)
For any linear transformation \(L:V\mapsto W\text{,}\) there is a naturally defined adjoint map, labeled as \(L^{*}\text{,}\) such that for any \(\alpha \in W^{*}\text{,}\)
\begin{equation}
L^{*}(\alpha)\in V^{*} \text{ is defined via } L^{*}(\alpha)({\mathbf x})=\alpha(L({\mathbf x})) \text{ for all }
{\mathbf x} \in V.\tag{7.2.6}
\end{equation}
In fact, one can define \(L^{*}\) on \({\mathcal T}^{m}(W)\) in a similar way such that for any \(\omega\in {\mathcal T}^{m}(W)\text{,}\)
\begin{equation*}
L^{*}(\omega)({\mathbf x}_{1}, \ldots, {\mathbf x}_{m})
=\omega(L({\mathbf x}_{1}), \ldots, L({\mathbf x}_{m})) \text{ for all }
{\mathbf x}_{1}, \ldots, {\mathbf x}_{m} \in V.
\end{equation*}
For a metric
\(g\) on
\(W\text{,}\) \(L^{*}(g)\) is a metric on
\(V\) provided that
\(L\) is injective. In such a case, we call
\(L^{*}(g)\) the
pull-back metric of
\(g\) by
\(L\text{.}\)
When \(\omega\) is an alternating tensor on \(W\text{,}\) \(L^{*}(\omega)\) is an alternating tensor on \(V\text{.}\) Furthermore, for two alternating tensors \(\alpha, \beta\) on \(W\text{,}\)
\begin{equation*}
L^{*}(\alpha\wedge \beta)= L^{*}(\alpha)\wedge L^{*}(\beta).
\end{equation*}
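In coordinates, if \(L\) has matrix \(A\) and \(g\) on \(W\) has matrix \(G\text{,}\) then the pullback \(L^{*}(g)\) has matrix \(A^{T}GA\text{,}\) which is symmetric positive definite exactly when \(L\) is injective. A random, illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))   # matrix of L: R^3 -> R^4, injective a.s.
G = np.eye(4)                     # the standard metric on W = R^4

# L*(g)(x, y) = g(Lx, Ly), so the matrix of the pullback is A^T G A.
Gpull = A.T @ G @ A
x, y = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(x @ Gpull @ y, (A @ x) @ G @ (A @ y))

# Since L is injective, the pullback is positive definite, hence a metric on V.
assert np.all(np.linalg.eigvalsh(Gpull) > 0)
```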
A property related to
(7.2.6) is the following. Let
\(\{\bu_{1},\cdots, \bu_{n}\}\) and
\(\{\bv_{1},\cdots, \bv_{n}\}\) be two bases of
\(V\text{,}\) and let
\(\{\alpha_{1},\cdots, \alpha_{n}\}\) and
\(\{\beta_{1}, \cdots, \beta_{n}\}\) be their respective dual bases in
\(V^{*}\text{.}\) Then for any vector
\(X\in V\) and covector
\(\omega \in V^{*}\text{,}\) if
\begin{equation*}
X=\sum_{i=1}^{n}x_{i} \bu_{i}=\sum_{i=1}^{n}y_{i}\bv_{i}, \quad
\omega=\sum_{i=1}^{n} a_{i} \alpha_{i}=\sum_{i=1}^{n} b_{i}\beta_{i}
\end{equation*}
then
\begin{equation*}
\omega (X) = \sum_{i=1}^{n} a_{i} x_{i}=\sum_{i=1}^{n} b_{i} y_{i}.
\end{equation*}
In the context of Stokes' Theorem we will treat \(\sum_{i=1}^{n} X_{i}(\bx)x_{i}'(t)\) as the pairing between a vector and a covector, and will apply the above transformation property under a change of variables.
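The basis-independence of the pairing \(\omega(X)\) can be checked numerically; with the \(\bu_{i}\) as columns of \(B\text{,}\) the dual coefficients of a covector \(\omega\) (given by a row \(w\) of standard coefficients) are \(a=B^{T}w\text{,}\) and similarly for the second basis. All data below are random and illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
B = rng.standard_normal((n, n))   # columns: basis u_i of R^n (invertible a.s.)
C = rng.standard_normal((n, n))   # columns: basis v_i

X = rng.standard_normal(n)        # a vector, in standard coordinates
w = rng.standard_normal(n)        # a covector, as standard coefficients

x_coords = np.linalg.solve(B, X)  # x_i with X = sum_i x_i u_i
y_coords = np.linalg.solve(C, X)  # y_i with X = sum_i y_i v_i
a_coords = B.T @ w                # a_i: coefficients of omega in the dual basis of {u_i}
b_coords = C.T @ w                # b_i: likewise for {v_i}

# omega(X) is the same in both coordinate systems.
assert np.isclose(a_coords @ x_coords, w @ X)
assert np.isclose(b_coords @ y_coords, w @ X)
```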
Exercises
1.
Let
\(\alpha, \beta\in V^{*}\) be such that
\(\{\bx \in V: \alpha(\bx)=0\}\) is identical to
\(\{\bx \in V: \beta(\bx)=0\}\text{.}\) Prove that there exists some constant
\(c\) such that
\(\alpha =c\beta\text{.}\)
2.
Prove that
\(T:V\mapsto V\) is an isometry with respect to the metric
\(g\) iff
\(g(T\bx, T\by)=g(\bx, \by)\) for all
\(\bx, \by \in V\text{.}\)
3.
Let
\(\{\bu_{1},\cdots, \bu_{n}\}\) be a basis of
\(V\) and
\(\{\alpha_{1},\cdots, \alpha_{n}\}\) be its dual basis. Let
\(g\) be a metric on
\(V\text{,}\) and
\(g_{ij}=g(\bu_{i},\bu_{j})\text{.}\) Then for the covector
\(\alpha=\sum_{i=1}^{n} a_{i} \alpha_{i}\in V^{*}\text{,}\) we have
\(\sharp \alpha= \sum_{i, j=1}^{n} a_{i} g^{ij}\bu_{j}\text{,}\) where
\(g^{ij}\) are the coefficients of the inverse matrix of
\([g_{ij}]\text{.}\)
4.
5.
Let \(\left\{\alpha_{1},\ldots, \alpha_{2n}\right\}\) be a basis of \(V^{*}\) and \(\omega=\alpha_{1} \wedge \alpha_{2}+\alpha_{3} \wedge \alpha_{4}+\cdots+
\alpha_{2n-1} \wedge \alpha_{2n} \in \Lambda^{2}(V)\text{.}\) Prove that
\begin{equation*}
\omega\wedge \cdots \wedge \omega = n! \ \alpha_{1} \wedge \alpha_{2}\wedge\alpha_{3} \wedge \alpha_{4}\wedge \cdots \wedge
\alpha_{2n-1} \wedge \alpha_{2n}\text{,}
\end{equation*}
where the wedge product has \(n\) factors of \(\omega\text{.}\)
6.
Let
\(\{\bu_{1},\cdots, \bu_{n}\}\) be a basis of
\(V\) and
\(\{\alpha_{1},\cdots, \alpha_{n}\}\) be its dual basis in
\(V^{*}\text{,}\) \(\{\bv_{1},\cdots, \bv_{m}\}\) a basis of
\(W\) and
\(\{\beta_{1}, \cdots, \beta_{m}\}\) be its dual basis in
\(W^{*}\text{.}\) Let
\(L:V\mapsto W\) be a linear map and the
\(m\times n\) matrix
\(A\) be the matrix representation of
\(L\) with respect to the bases
\(\{\bu_{1},\cdots, \bu_{n}\}\) and
\(\{\bv_{1},\cdots, \bv_{m}\}\text{.}\) Then
\(L^{*}\) is represented by
\(A^{\rm T}\) with respect to the bases
\(\{\beta_{1}, \cdots, \beta_{m}\}\) and
\(\{\alpha_{1},\cdots, \alpha_{n}\}\text{.}\)
7.
Let
\(\{\bu_{1},\cdots, \bu_{n}\}\) and
\(\{\bv_{1},\cdots, \bv_{n}\}\) be two bases of
\(V\text{,}\) and
\(\{\alpha_{1},\cdots, \alpha_{n}\}\) and
\(\{\beta_{1}, \cdots, \beta_{n}\}\) be their respective dual bases in
\(V^{*}\text{.}\) Suppose that
\(\bv_{i}=\sum_{k=1}^{n}a_{ik} \bu_{k}\) for some matrix
\(A=[a_{ik}]\text{.}\) Prove that
\(\beta_{i}=\sum_{k=1}^{n}b_{ik} \alpha_{k}\text{,}\) where
\([b_{ik}]=(A^{-1})^{T}\text{.}\)
8.
Prove
(7.2.5) and use it to show that for any
\(k\) covectors
\(\{\alpha_{1},\cdots, \alpha_{k}\}\) in
\(V^{*}\) and
\(k\) vectors
\(\{\bu_{1},\cdots, \bu_{k}\}\) in
\(V\text{,}\)
\begin{equation*}
\alpha_{1}\wedge \cdots\wedge \alpha_{k}(\bu_{1},\cdots, \bu_{k})
=\det \left[ \alpha_{i}(\bu_{j})\right].
\end{equation*}