A linear operator $L$ acts on vectors $a, b, \ldots$ in a linear vector space $V$ to give new vectors $La, Lb, \ldots$ such that

$$L(\alpha a + \beta b) = \alpha\, La + \beta\, Lb.$$
Example 3.1:
$$Lx = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 8 & -1 \end{pmatrix} x
\qquad\text{and}\qquad
Lf(x) = \frac{d}{dx}f(x)$$

are both linear (see example sheet).
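Linearity is also easy to check numerically. The following sketch (my own, not from the example sheet) verifies the defining property for the matrix of Example 3.1 using NumPy:

```python
import numpy as np

L = np.array([[1, 2, 3],
              [4, 8, -1]])            # the matrix from Example 3.1

a = np.array([1.0, -2.0, 0.5])        # arbitrary test vectors
b = np.array([3.0, 1.0, 4.0])
alpha, beta = 2.0, -1.5               # arbitrary scalars

# linearity: L(alpha a + beta b) = alpha L a + beta L b
lhs = L @ (alpha * a + beta * b)
rhs = alpha * (L @ a) + beta * (L @ b)
print(np.allclose(lhs, rhs))  # True
```

Note that $L$ here maps a three-dimensional domain to a two-dimensional codomain, so the two sides are compared as vectors of length 2.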
If the operator L maps the vector f to the vector g, Lf = g, the vector space of f’s (the domain) can be different from the vector space of g’s (the codomain or target). L maps the domain into the codomain: even though it is defined for every element of the domain, the image of the domain (called the “range of L” or the “image of L”) is in general only a subset of the codomain, see Fig. 3.1. In many physical cases we shall nevertheless assume that the range and codomain coincide.
Example 3.2:
Let $L$ be a linear operator, and $y = Lx$. Let ${e}_{1}, {e}_{2}, \ldots$ and ${u}_{1}, {u}_{2}, \ldots$ be chosen sets of basis vectors in the domain and codomain, respectively, so that

$$x = \sum_i x_i e_i, \qquad y = \sum_j y_j u_j.$$

Then the components are related by the matrix relation

$$y_j = \sum_i L_{ji} x_i,$$

where the matrix ${L}_{ji}$ is defined by

$$L e_i = \sum_j u_j L_{ji} = \sum_j \left(L^T\right)_{ij} u_j. \qquad (3.1)$$
Notice that the matrix relating the components x and y is the transpose of the matrix that connects the bases. This difference is related to what is sometimes called the active or passive view of transformations: in the active view the components change and the basis remains the same; in the passive view the components remain the same but the basis changes. Both views represent the same transformation!
For an orthonormal basis $\{u_j\}$ in the codomain, the matrix elements can be obtained from the inner product,

$$L_{ji} = (u_j, L e_i). \qquad (3.2)$$
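To make Eq. (3.2) concrete, here is an illustrative NumPy sketch (the rotation and the basis are my own choices, not from the notes): the matrix elements $(u_j, L e_i)$ of a plane rotation, with the standard basis in the domain and a different orthonormal basis in the codomain, reproduce the expected matrix product.

```python
import numpy as np

theta = 0.3
L = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation as the operator

# an orthonormal codomain basis {u_1, u_2}; columns of U are the u_j
U = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

I = np.eye(2)                                      # columns are the e_i
# Eq. (3.2): L_ji = (u_j, L e_i); real vectors, so no conjugation needed
Lrep = np.array([[U[:, j] @ (L @ I[:, i]) for i in range(2)]
                 for j in range(2)])

print(np.allclose(Lrep, U.T @ L))  # True: (u_j, L e_i) = (U^T L)_{ji}
```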
Example 3.3:
Find a matrix representation of the differential operator $\frac{d}{dx}$ in the space of functions on the interval $(-\pi, \pi)$.
Solution:
Since domain and codomain coincide, the bases in both spaces are identical; the easiest and most natural choice is the Fourier basis $\{1, \cos nx, \sin nx\}_{n=1}^{\infty}$. With this choice, using $(\cos nx)' = -n \sin nx$ and $(\sin nx)' = n \cos nx$, we find that $d/dx$ maps the coefficient pair $(a_n, b_n)$ of $(\cos nx, \sin nx)$ to $(n b_n, -n a_n)$. We can immediately see that the matrix representation $M$ takes a block-diagonal form: a $1 \times 1$ zero block acting on the constant function, followed by the $2 \times 2$ blocks

$$\begin{pmatrix} 0 & n \\ -n & 0 \end{pmatrix}, \qquad n = 1, 2, \ldots$$
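A truncated version of this matrix is easy to build and test numerically. The following NumPy sketch (my construction, with the basis ordered $1, \cos x, \sin x, \cos 2x, \sin 2x, \ldots$, and the truncation $N$ chosen arbitrarily) applies it to the Fourier coefficients of a test function:

```python
import numpy as np

N = 4                                  # keep the basis {1, cos x, sin x, ..., cos Nx, sin Nx}
dim = 2 * N + 1
D = np.zeros((dim, dim))
for n in range(1, N + 1):
    c, s = 2 * n - 1, 2 * n            # indices of cos(nx) and sin(nx)
    D[s, c] = -n                       # (cos nx)' = -n sin nx
    D[c, s] = n                        # (sin nx)' =  n cos nx

# test function f(x) = 3 cos 2x + 5 sin 2x, so f'(x) = 10 cos 2x - 6 sin 2x
coef = np.zeros(dim)
coef[3], coef[4] = 3.0, 5.0            # coefficients of cos 2x, sin 2x
dcoef = D @ coef
print(dcoef[3], dcoef[4])  # 10.0 -6.0
```

Because differentiation never mixes different frequencies, the truncation is exact here: no information leaks out of the retained block.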
Another common example is the matrix representation of the Schrödinger equation. Suppose we are given an orthonormal basis $\{\phi_i\}_{i=1}^{\infty}$ for the Hilbert space in which the operator $\hat{H}$ acts. By decomposing an eigenstate $\psi$ of the Schrödinger equation,

$$\hat{H} \psi = E \psi,$$

in the basis $\{\phi_j(x)\}$ as $\psi = \sum_j c_j \phi_j$, we get the matrix form

$$\sum_j H_{ij} c_j = E c_i, \qquad (3.3)$$
with

$$H_{ij} = (\phi_i, \hat{H} \phi_j).$$

This is clearly a form of Eq. (3.2).
The result in Eq. (3.3) is obviously an infinite-dimensional matrix problem, and no easier to solve than the original problem. Suppose, however, that we truncate both the sum over j and the set of coefficients c to contain only N terms. The resulting N × N matrix problem can then be used to find an approximation to the lowest eigenvalues and eigenvectors. See the Mathematica notebook heisenberg.nb for an example of how to apply this to real problems.
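The notebook itself is not reproduced here, but the truncation idea can be sketched in Python instead of Mathematica. Everything below is my own illustrative choice, not taken from heisenberg.nb: an infinite square well on $(0, \pi)$ as the basis, a linear perturbation $V(x) = x$, and truncation size $N$ (with $\hbar = m = 1$).

```python
import numpy as np

# Orthonormal basis: infinite square well on (0, pi),
# phi_k(x) = sqrt(2/pi) sin(kx), with kinetic energies k^2 / 2.
N = 20                                         # truncation size
x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]
phi = np.array([np.sqrt(2 / np.pi) * np.sin(k * x) for k in range(1, N + 1)])

# H_ij = (k^2 / 2) delta_ij + (phi_i, x phi_j); integrals done numerically
H = np.diag([k**2 / 2 for k in range(1, N + 1)]).astype(float)
for i in range(N):
    for j in range(N):
        H[i, j] += np.sum(phi[i] * x * phi[j]) * dx

E = np.linalg.eigvalsh(H)                      # eigenvalues, ascending
print(E[:3])                                   # approximate lowest levels
```

Increasing N shows the lowest eigenvalues converging; as a sanity check, first-order perturbation theory gives $E_1 \approx \frac{1}{2} + \frac{\pi}{2} \approx 2.07$, and the truncated diagonalisation lands somewhat below this, as the second-order correction is negative.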
You should be familiar with the Hermitian conjugate (also called adjoint) of a matrix, the generalisation of the transpose: the Hermitian conjugate of a matrix is the complex conjugate of its transpose,

$$\left(M^{\dagger}\right)_{ij} = M_{ji}^{*}.$$
Thus

$$v = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}, \qquad
v^{\dagger} = \left(v_1^{*}, \ldots, v_n^{*}\right).$$
This allows us to write the inner product as a matrix product,

$$(w, v) = w^{\dagger} v.$$
The most useful definition of Hermitian conjugate, which will be generalised below, is through the inner product:
$$(a, Mb) = \sum_{ij} a_i^{*} M_{ij} b_j
= \sum_{ij} a_i^{*} \left(M_{ji}^{\dagger}\right)^{*} b_j
= \sum_{ij} \left(M_{ji}^{\dagger} a_i\right)^{*} b_j
= (M^{\dagger} a, b), \qquad (3.4)$$
see Fig. 3.2.
From the examples above, and the definition, we conclude that if M is an m × n matrix, {M}^{†} is an n × m one.
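The defining property (3.4) is easy to verify numerically, including for a rectangular matrix; a small NumPy sketch (my own, with random complex data):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5                           # M is m x n, so M† is n x m
M = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
a = rng.normal(size=m) + 1j * rng.normal(size=m)
b = rng.normal(size=n) + 1j * rng.normal(size=n)

Mdag = M.conj().T                     # Hermitian conjugate, shape (n, m)
lhs = np.vdot(a, M @ b)               # (a, M b); vdot conjugates its first argument
rhs = np.vdot(Mdag @ a, b)            # (M† a, b)
print(np.isclose(lhs, rhs))  # True
```

Note that $a$ lives in the codomain and $b$ in the domain, so the two inner products in (3.4) are taken in different spaces, of dimensions $m$ and $n$ respectively.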
We now use our result (3.4) above for an operator, and define the adjoint ${L}^{\dagger}$ through

$$(a, Lb) = (L^{\dagger} a, b) = (b, L^{\dagger} a)^{*},$$

where the last two terms are identical, as follows from the basic properties of the scalar product, Eq. (2.2). A linear operator L maps the domain onto the codomain; its adjoint ${L}^{\dagger}$ maps the codomain back onto the domain.
As can be gleaned from Fig. 3.3, we can also introduce bases in the domain and codomain and use the matrix representation of linear operators, Eqs. (3.1, 3.2), to find that the matrix representation of the adjoint operator satisfies the same relation as that for a finite matrix,

$$\left(L^{\dagger}\right)_{ij} = L_{ji}^{*}.$$
A final important definition is that of