
Linear Vector Spaces in Quantum Mechanics

We have observed that most operators in quantum mechanics are linear operators. This is fortunate because it allows us to represent quantum mechanical operators as matrices and wavefunctions as vectors in some linear vector space. Since computers are particularly good at performing operations common in linear algebra (multiplication of a matrix times a vector, etc.), this is quite advantageous from a practical standpoint.
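
As a minimal illustration of this point (the matrix and vector below are invented purely for the example, and NumPy is used only as a convenient linear-algebra tool), an operator stored as a matrix acts on a state stored as a complex vector through an ordinary matrix-vector product:

\begin{verbatim}
import numpy as np

# Invented 3-dimensional example: a "wavefunction" stored as a complex
# vector and an "operator" stored as a complex matrix.
psi = np.array([1.0 + 2.0j, 0.5j, -1.0])
A = np.array([[2.0,   1.0j, 0.0],
              [-1.0j, 1.0,  0.0],
              [0.0,   0.0,  3.0]])

# Acting with the operator on the state is an ordinary matrix-vector
# product, which numerical libraries perform very efficiently.
print(A @ psi)
\end{verbatim}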

In an n-dimensional space we may expand any vector $\Psi$ as a linear combination of basis vectors

\begin{displaymath}\Psi = \sum_{i=1}^{n} a_i \Psi_i
\end{displaymath} (80)

For a general vector space, the coefficients $a_i$ may be complex; thus one should not be too quick to draw parallels to the expansion of vectors in three-dimensional Euclidean space. The coefficients $a_i$ are referred to as the ``components'' of the state vector $\Psi$, and for a given basis, the components of a vector specify it completely. The components of the sum of two vectors are the sums of the components. If $\Psi_a = \sum a_i \Psi_i$ and $\Psi_b = \sum b_i \Psi_i$ then

\begin{displaymath}\Psi_a + \Psi_b = \sum_i (a_i + b_i) \Psi_i
\end{displaymath} (81)

and similarly

\begin{displaymath}\lambda \Psi_a = \sum_i (\lambda a_i) \Psi_i
\end{displaymath} (82)
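
A small numerical sketch of these two rules, with an arbitrary three-dimensional basis and made-up complex components, might look like the following:

\begin{verbatim}
import numpy as np

# Columns of "basis" are the basis vectors Psi_i; a and b hold the
# (possibly complex) components a_i and b_i.  Values are made up.
basis = np.eye(3)
a = np.array([1.0 + 1.0j, 2.0, 0.0])
b = np.array([0.0, -1.0j, 3.0])

psi_a = basis @ a        # Psi_a = sum_i a_i Psi_i   (Eq. 80)
psi_b = basis @ b

lam = 2.0 - 1.0j
# Components of a sum are sums of components (Eq. 81) ...
assert np.allclose(psi_a + psi_b, basis @ (a + b))
# ... and scaling the vector scales every component (Eq. 82).
assert np.allclose(lam * psi_a, basis @ (lam * a))
\end{verbatim}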

The scalar product of two vectors is a complex number, denoted $(\Psi_b, \Psi_a)$, which satisfies

\begin{displaymath}(\Psi_b, \Psi_a) = (\Psi_a, \Psi_b)^*
\end{displaymath} (83)

where we have used the standard linear-algebra notation. If we also require that

\begin{displaymath}(\Psi_a, \lambda \Psi_b) = \lambda (\Psi_a, \Psi_b)
\end{displaymath} (84)

then it follows that

\begin{displaymath}(\lambda \Psi_a, \Psi_b) = \lambda^* (\Psi_a, \Psi_b)
\end{displaymath} (85)

We also require that

\begin{displaymath}(\Psi_a, \Psi_b + \Psi_c) = (\Psi_a, \Psi_b) + (\Psi_a, \Psi_c)
\end{displaymath} (86)

If the scalar product vanishes (and if neither vector in the product is the null vector) then the two vectors are orthogonal.

Generally the basis is chosen to be orthonormal, such that

\begin{displaymath}(\hat{\Psi}_i, \hat{\Psi}_j) = \delta_{ij}
\end{displaymath} (87)

In this case, we can write the scalar product of two arbitrary vectors as
\begin{eqnarray*}
(\Psi_a, \Psi_b) & = & \left( \sum_i a_i \hat{\Psi}_i, \sum_j b_j \hat{\Psi}_j \right) \\
& = & \sum_i \sum_j a_i^* b_j (\hat{\Psi}_i, \hat{\Psi}_j) \\
& = & \sum_i a_i^* b_i
\end{eqnarray*} (88)

This can also be written in vector notation as

\begin{displaymath}(\Psi_a, \Psi_b) = (a_1^* a_2^* \cdots a_n^*)
\left( \begin{array}{c}
b_1 \\
b_2 \\
\vdots \\
b_n \end{array} \right)
\end{displaymath} (89)
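
Numerically, this inner product is an elementwise multiply-and-sum with the first vector conjugated; a small sketch (values invented for illustration) is:

\begin{verbatim}
import numpy as np

# Made-up complex component vectors a_i and b_i.
a = np.array([1.0 + 1.0j, 2.0, -1.0j])
b = np.array([0.5, 1.0j, 3.0])

# Eq. (88): (Psi_a, Psi_b) = sum_i a_i^* b_i.  np.vdot conjugates its
# first argument, matching this convention.
explicit = np.sum(np.conj(a) * b)
assert np.allclose(np.vdot(a, b), explicit)
# Row of conjugated components times column of components (Eq. 89).
assert np.allclose(a.conj() @ b, explicit)
# The product is not symmetric: (Psi_b, Psi_a) = (Psi_a, Psi_b)^*  (Eq. 83).
assert np.allclose(np.vdot(b, a), np.conj(np.vdot(a, b)))
\end{verbatim}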

It is useful at this point to introduce Dirac's bra-ket notation. We define a ``bra'' as

\begin{displaymath}\langle \Psi_a \vert = (a_1^* a_2^* \cdots a_n^*)
\end{displaymath} (90)

and a ``ket'' as

\begin{displaymath}\vert \Psi_a \rangle = \left( \begin{array}{c}
a_1 \\
a_2 \\
\vdots \\
a_n \end{array} \right)
\end{displaymath} (91)

A bra to the left of a ket implies a scalar product, so

\begin{displaymath}\langle \Psi_a \vert \Psi_b \rangle = (\Psi_a, \Psi_b)
\end{displaymath} (92)
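
In terms of arrays, a ket can be stored as a column and the corresponding bra as its conjugated row, so that the bra-ket product reduces to a $1 \times 1$ matrix product; a minimal sketch (again with made-up values):

\begin{verbatim}
import numpy as np

a = np.array([1.0 + 1.0j, 2.0, -1.0j])
b = np.array([0.5, 1.0j, 3.0])

ket_b = b.reshape(-1, 1)          # |b> as a column vector   (Eq. 91)
bra_a = a.conj().reshape(1, -1)   # <a| as a conjugated row  (Eq. 90)

# A bra to the left of a ket is a matrix product; the single entry of
# the resulting 1x1 array is the scalar product (Eq. 92).
assert np.allclose((bra_a @ ket_b)[0, 0], np.vdot(a, b))
\end{verbatim}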

Sometimes in superficial treatments of Dirac notation, the symbol $\langle \Psi_a \vert \Psi_b \rangle$ is defined alternatively as

 \begin{displaymath}
\langle \Psi_a \vert \Psi_b \rangle = \int \Psi_a^{*}(x) \Psi_b(x) dx
\end{displaymath} (93)

This is equivalent to the above definition if we make the connections $a_i = \Psi_a(x)$ and $b_i = \Psi_b(x)$; in effect, the basis vectors are labeled by every possible value of $x$. Since $x$ is continuous, the sum is replaced by an integral (see Szabo and Ostlund [4], exercise 1.17). Often only the subscript of the vector is used to denote a bra or ket, so we could have written the above equation as

\begin{displaymath}\langle a \vert b \rangle = \int \Psi_a^{*}(x) \Psi_b(x) dx
\end{displaymath} (94)
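
As a purely illustrative check of this continuous analogue, one can put $x$ on a grid and approximate the integral by numerical quadrature; the particle-in-a-box functions $\sqrt{2/L}\,\sin(n \pi x / L)$ below are chosen only because they are easy to write down:

\begin{verbatim}
import numpy as np

# Particle-in-a-box functions on [0, L], used only as an example.
L = 1.0
x = np.linspace(0.0, L, 20001)
psi = lambda n: np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def braket(f, g):
    # <f|g> = integral of f*(x) g(x) dx, approximated on the grid;
    # the continuous analogue of sum_i a_i^* b_i.
    return np.trapz(np.conj(f) * g, x)

print(braket(psi(1), psi(1)))   # ~ 1  (normalized)
print(braket(psi(1), psi(2)))   # ~ 0  (orthogonal)
\end{verbatim}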

Now we turn our attention to matrix representations of operators. An operator $\hat{A}$ can be characterized by its effect on the basis vectors. The action of $\hat{A}$ on a basis vector $\hat{\Psi}_j$ yields some new vector $\Psi'_j$, which can be expanded in terms of the basis vectors so long as we have a complete basis set.

 \begin{displaymath}
\hat{A} \hat{\Psi}_j = \Psi'_j = \sum_i^{n} \hat{\Psi}_i A_{ij}
\end{displaymath} (95)

If we know the effect of $\hat{A}$ on the basis vectors, then we know the effect of $\hat{A}$ on any arbitrary vector because of the linearity of $\hat{A}$.
\begin{eqnarray*}
\Psi_b = \hat{A} \Psi_a = \hat{A} \sum_j a_j \hat{\Psi}_j & = & \sum_j a_j \hat{A} \hat{\Psi}_j = \sum_j \sum_i a_j \hat{\Psi}_i A_{ij} \\
& = & \sum_i \hat{\Psi}_i \left( \sum_j A_{ij} a_j \right)
\end{eqnarray*} (96)

or

\begin{displaymath}b_i = \sum_j A_{ij} a_j
\end{displaymath} (97)

This may be written in matrix notation as

\begin{displaymath}\left( \begin{array}{c}
b_1 \\
b_2 \\
\vdots \\
b_n \end{array} \right) =
\left( \begin{array}{cccc}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn} \end{array} \right)
\left( \begin{array}{c}
a_1 \\
a_2 \\
\vdots \\
a_n \end{array} \right)
\end{displaymath} (98)
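
The following sketch (random complex data, used only for illustration) verifies that the explicit double-index sum of equation (97) and the matrix-vector product of equation (98) are the same thing:

\begin{verbatim}
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Eq. (97): b_i = sum_j A_ij a_j, written as an explicit loop ...
b_loop = np.array([sum(A[i, j] * a[j] for j in range(n))
                   for i in range(n)])

# ... is exactly the matrix-vector product of Eq. (98).
assert np.allclose(A @ a, b_loop)
\end{verbatim}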

We can obtain the coefficients Aij by taking the inner product of both sides of equation 95 with $\hat{\Psi}_i$, yielding
\begin{eqnarray*}
(\hat{\Psi}_i, \hat{A} \hat{\Psi}_j) & = & \left( \hat{\Psi}_i, \sum_k^{n} \hat{\Psi}_k A_{kj} \right) \\
& = & \sum_k^{n} A_{kj} (\hat{\Psi}_i, \hat{\Psi}_k) \\
& = & A_{ij}
\end{eqnarray*} (99)

since $(\hat{\Psi}_i, \hat{\Psi}_k) = \delta_{ik}$ due to the orthonormality of the basis. In bra-ket notation, we may write

\begin{displaymath}A_{ij} = \langle i \vert \hat{A} \vert j \rangle
\end{displaymath} (100)

where i and j denote two basis vectors. This use of bra-ket notation is consistent with its earlier use if we realize that $\hat{A} \vert j \rangle$ is just another vector $\vert j' \rangle$.
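
A quick numerical sketch of equation (100), using the standard basis vectors as the orthonormal basis and a random matrix as the ``operator'', recovers each matrix element as a bra-operator-ket sandwich:

\begin{verbatim}
import numpy as np

n = 3
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# With the standard basis vectors as an orthonormal basis, the sandwich
# <i|A|j> of Eq. (100) recovers each matrix element A_ij.
basis = np.eye(n)
A_rebuilt = np.array([[np.vdot(basis[:, i], A @ basis[:, j])
                       for j in range(n)] for i in range(n)])
assert np.allclose(A_rebuilt, A)
\end{verbatim}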

It is easy to show that for a linear operator $\hat{A}$, the inner product $(\Psi_a, \hat{A} \Psi_b)$ for two general vectors (not necessarily basis vectors) $\Psi_a$ and $\Psi_b$ is given by

\begin{displaymath}(\Psi_a, \hat{A} \Psi_b) = \sum_i \sum_j a_i^{*} A_{ij} b_j
\end{displaymath} (101)

or in matrix notation

\begin{displaymath}(\Psi_a, \hat{A} \Psi_b) = \left( a_1^{*} \; a_2^{*} \; \cdots \; a_n^{*} \right)
\left( \begin{array}{cccc}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn} \end{array} \right)
\left( \begin{array}{c}
b_1 \\
b_2 \\
\vdots \\
b_n \end{array} \right)
\end{displaymath} (102)

By analogy to equation (93), we may generally write this inner product in the form

\begin{displaymath}(\Psi_a, \hat{A} \Psi_b) = \langle a \vert \hat{A} \vert b \rangle =
\int \Psi_a^{*}(x) \hat{A} \Psi_b(x) dx
\end{displaymath} (103)
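
A short sketch (random complex data, for illustration only) confirming that the double sum of equation (101) equals the row-matrix-column product of equation (102):

\begin{verbatim}
import numpy as np

n = 3
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Eq. (101): <a|A|b> = sum_i sum_j a_i^* A_ij b_j ...
double_sum = sum(np.conj(a[i]) * A[i, j] * b[j]
                 for i in range(n) for j in range(n))

# ... which is the row-matrix-column product of Eq. (102).
assert np.allclose(a.conj() @ A @ b, double_sum)
assert np.allclose(np.vdot(a, A @ b), double_sum)
\end{verbatim}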

Previously, we noted that $(\Psi_a, \Psi_b) = (\Psi_b, \Psi_a)^{*}$, or $\langle a \vert b \rangle = \langle b \vert a \rangle^{*}$. Thus we can see also that

\begin{displaymath}(\Psi_a, \hat{A} \Psi_b) = (\hat{A} \Psi_b, \Psi_a)^{*}
\end{displaymath} (104)

We now define the adjoint of an operator $\hat{A}$, denoted by $\hat{A}^{\dagger}$, as that linear operator for which

\begin{displaymath}(\Psi_a, \hat{A} \Psi_b) = (\hat{A}^{\dagger} \Psi_a, \Psi_b)
\end{displaymath} (105)

That is, we can make an operator act backwards into ``bra'' space if we take its adjoint. With this definition, we can further see that

\begin{displaymath}(\Psi_a, \hat{A} \Psi_b) = (\hat{A} \Psi_b, \Psi_a)^{*} =
(\Psi_b, \hat{A}^{\dagger} \Psi_a)^{*} =
(\hat{A}^{\dagger} \Psi_a, \Psi_b)
\end{displaymath} (106)

or, in bra-ket notation,

\begin{displaymath}\langle a \vert \hat{A} \vert b \rangle = \langle \hat{A} b \vert a \rangle^{*} =
\langle \hat{A}^{\dagger} a \vert b \rangle
\end{displaymath} (107)

If we pick $\Psi_a = \hat{\Psi}_i$ and $\Psi_b = \hat{\Psi}_j$ (i.e., if we pick two basis vectors), then we obtain
\begin{eqnarray*}
(\hat{A} \hat{\Psi}_i, \hat{\Psi}_j) & = & (\hat{\Psi}_i, \hat{A}^{\dagger} \hat{\Psi}_j) \\
(\hat{\Psi}_j, \hat{A} \hat{\Psi}_i)^{*} & = & (\hat{\Psi}_i, \hat{A}^{\dagger} \hat{\Psi}_j) \\
A_{ji}^{*} & = & A^{\dagger}_{ij}
\end{eqnarray*} (108)

But this is precisely the relationship between the elements of a matrix and the elements of its adjoint (conjugate transpose)! Thus the adjoint of the matrix representation of $\hat{A}$ is the same as the matrix representation of $\hat{A}^{\dagger}$.
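
This can be checked numerically: form the conjugate transpose of the matrix representation and test the defining relation (105) on arbitrary vectors (random data, purely for illustration):

\begin{verbatim}
import numpy as np

n = 3
rng = np.random.default_rng(3)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The matrix of the adjoint operator is the conjugate transpose:
# (A^dagger)_ij = A_ji^*   (Eq. 108).
A_dag = A.conj().T

# Defining relation of the adjoint, Eq. (105): (a, A b) = (A^dagger a, b).
assert np.allclose(np.vdot(a, A @ b), np.vdot(A_dag @ a, b))
\end{verbatim}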

This correspondence between operators and their matrix representations goes quite far, although of course the specific matrix representation depends on the choice of basis. For instance, we know from linear algebra that if a matrix and its adjoint are the same, then the matrix is called Hermitian. The same is true of the operators; if

\begin{displaymath}\hat{A} = \hat{A}^{\dagger}
\end{displaymath} (109)

then $\hat{A}$ is a Hermitian operator, and all of the special properties of Hermitian operators apply to $\hat{A}$ or its matrix representation.
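
For example (a sketch with random data), a matrix of the form $B + B^{\dagger}$ is Hermitian by construction, and its eigenvalues come out real, one of the standard special properties of Hermitian operators:

\begin{verbatim}
import numpy as np

n = 3
rng = np.random.default_rng(4)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# B + B^dagger equals its own adjoint, so it is Hermitian (Eq. 109).
H = B + B.conj().T
assert np.allclose(H, H.conj().T)

# One familiar consequence: its eigenvalues are real.
print(np.linalg.eigvalsh(H))
\end{verbatim}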

