Suppose I know my wave function at time t = 0 is the sum of the two lowest-energy harmonic oscillator wave functions,
ψ(x,0) = {1\over\sqrt{2}}\left[{ϕ}_{0}(x) + {ϕ}_{1}(x)\right].
| (8.25) |
The time-independent wave functions were introduced through the separation {ψ}_{n}(x,t) = {e}^{−i{E}_{n}t∕ℏ}{ϕ}_{n}(x). Combining this with the superposition principle for time-dependent wave functions, we find
ψ(x,t) = {1\over\sqrt{2}}\left[{ϕ}_{0}(x){e}^{−i{1\over 2}ωt} + {ϕ}_{1}(x){e}^{−i{3\over 2}ωt}\right].
| (8.26) |
The expectation value of \hat{H}, i.e., the expectation value of the energy, is
\left\langle\hat{H}\right\rangle = {1\over 2}({E}_{0} + {E}_{1}) = ℏω.
| (8.27) |
The interpretation of probabilities now gets more complicated. If we measure the energy, we do not expect an outcome {E}_{3}, since there is no {ϕ}_{3} component in the wave function. We do expect {E}_{0} = {1\over 2}ℏω or {E}_{1} = {3\over 2}ℏω, each with 50% probability, which leads to the right average. Indeed, simple mathematics shows that the expectation value above is exactly that: \langle E\rangle = {1\over 2}{E}_{0} + {1\over 2}{E}_{1}.
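This can be checked numerically. The following Python sketch (my own illustration, not part of the original notes, using the conventional choice ℏ = m = ω = 1, so that E₀ = 1∕2 and E₁ = 3∕2) builds the superposition on a grid, applies the Hamiltonian by central differences, and recovers ⟨Ĥ⟩ ≈ 1 = ℏω.

```python
import numpy as np

# Units hbar = m = omega = 1, so E_0 = 0.5 and E_1 = 1.5.
x = np.linspace(-10.0, 10.0, 4001)
h = x[1] - x[0]

# Normalised harmonic-oscillator eigenfunctions phi_0 and phi_1
phi0 = np.pi**-0.25 * np.exp(-x**2 / 2)
phi1 = np.sqrt(2.0) * x * phi0

# Equal-weight superposition of Eq. (8.25)
psi = (phi0 + phi1) / np.sqrt(2.0)

# H psi = -1/2 psi'' + 1/2 x^2 psi; second derivative by central differences
d2 = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / h**2
Hpsi = -0.5 * d2 + 0.5 * x**2 * psi

# <H> = integral of psi * H psi (psi is real here)
E = np.sum(psi * Hpsi) * h
print(E)  # approximately 1.0 = (E_0 + E_1)/2
```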
We can generalise this result by stating that if
ψ(x,t) = {\mathop{∑}}_{n=0}^{∞}{c}_{n}(t){ϕ}_{n}(x),
| (8.28) |
where {ϕ}_{n}(x) are the eigenfunctions of an (Hermitean) operator \hat{O},
\hat{O}{ϕ}_{n}(x) = {o}_{n}{ϕ}_{n}(x),
| (8.29) |
then
\left\langle\hat{O}\right\rangle = {\mathop{∑}}_{n=0}^{∞}|{c}_{n}(t){|}^{2}\,{o}_{n},
| (8.30) |
and the probability that the outcome of a measurement of \hat{O} at time {t}_{0} is {o}_{n} is |{c}_{n}({t}_{0}){|}^{2}. Here we use the orthogonality and completeness of the eigenfunctions of Hermitean operators.
If we measure E once and find {E}_{i} as the outcome, we know that the system is in the ith eigenstate of the Hamiltonian. That certainty means that if we measure the energy again we must find {E}_{i} again. This is called the “collapse of the wave function”: before the first measurement we could not predict the outcome of the experiment, but the first measurement prepares the wave function of the system in one particular state, and there is only one component left!
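As an illustration (a Python sketch of my own, not part of the original notes), one can mimic this for the state of Eq. (8.25): the first energy measurement draws E₀ or E₁ with the Born-rule probabilities |c_n|², the wave function then collapses onto the measured component, and a repeated measurement returns the same value with certainty.

```python
import numpy as np

rng = np.random.default_rng(42)

# State of Eq. (8.25): c_0 = c_1 = 1/sqrt(2); energies in units hbar*omega = 1
c = np.array([1.0, 1.0]) / np.sqrt(2.0)
energies = np.array([0.5, 1.5])

# First measurement: Born rule, P(E_n) = |c_n|^2
n = rng.choice(len(energies), p=np.abs(c)**2)
first = energies[n]

# Collapse: only the n-th component survives
c = np.zeros_like(c)
c[n] = 1.0

# Second measurement on the collapsed state: the same outcome, with certainty
second = energies[rng.choice(len(energies), p=np.abs(c)**2)]
print(first, second)  # always equal
```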
Now what happens if we measure two different observables? Say, at 12 o’clock we measure the position of a particle, and a little later its momentum. How do these measurements relate? Measuring \hat{x} to be {x}_{0} makes the wave function collapse to δ(x − {x}_{0}), whatever it was before. Mathematically it can be shown that
δ(x − {x}_{0}) = {1\over 2π}{\mathop{∫}\nolimits}_{−∞}^{∞}{e}^{ik(x−{x}_{0})}\,dk.
| (8.31) |
Since {e}^{ikx} is an eigenstate of the momentum operator, the coordinate eigenfunction is a superposition of all momentum eigenfunctions with equal weight. Thus the spread in possible outcomes of a measurement of p is infinite!
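This can be made concrete numerically. In the sketch below (my illustration; the cutoff K is an artificial parameter, not in the original notes), the k-integral of Eq. (8.31) is truncated at ±K, which gives sin(K(x − x₀))∕(π(x − x₀)); as K grows the peak gets higher and narrower while its total weight stays near 1, approaching the delta function.

```python
import numpy as np

def truncated_delta(u, K):
    """(1/2 pi) * integral of e^{iku} over k in [-K, K] = sin(K u)/(pi u)."""
    # np.sinc(t) = sin(pi t)/(pi t), so this equals sin(K u)/(pi u)
    return np.sinc(K * u / np.pi) * K / np.pi

u = np.linspace(-5.0, 5.0, 20001)
h = u[1] - u[0]
for K in (5.0, 50.0):
    peak = truncated_delta(0.0, K)            # height K/pi, grows with K
    area = np.sum(truncated_delta(u, K)) * h  # total weight stays near 1
    print(K, peak, area)
```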
The reason is that \hat{x} and \hat{p} are so-called incompatible operators, where
\hat{p}\hat{x} ≠ \hat{x}\hat{p}!
| (8.32) |
The way to show this is to calculate
(\hat{p}\hat{x} − \hat{x}\hat{p})f(x) ≡ [\hat{p},\hat{x}]f(x)
| (8.33) |
for arbitrary f(x). With \hat{p} = {ℏ\over i}{d\over dx}, a little algebra shows that
[\hat{p},\hat{x}]f(x) = {ℏ\over i}{d\over dx}\left(xf(x)\right) − x{ℏ\over i}{d\over dx}f(x) = {ℏ\over i}f(x).
| (8.34) |
In operatorial notation,
[\hat{p},\hat{x}] = {ℏ\over i}\,\hat{1},
| (8.35) |
where the operator \hat{1}, which multiplies by 1, i.e., maps f(x) onto itself, is usually not written.
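The commutator can also be verified numerically on a sample function. In this Python sketch (my illustration, with ℏ = 1 and an arbitrarily chosen Gaussian test function), p̂ is applied as (ℏ∕i) d∕dx via central differences, and (p̂x̂ − x̂p̂)f comes out equal to (ℏ∕i)f up to grid error.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-5.0, 5.0, 2001)
h = x[1] - x[0]

def p_op(g):
    """Momentum operator p = (hbar/i) d/dx, central differences."""
    return (hbar / 1j) * np.gradient(g, h)

f = np.exp(-x**2)                       # arbitrary smooth test function
comm_f = p_op(x * f) - x * p_op(f)      # [p, x] f

# Compare with (hbar/i) f = -i hbar f, away from the grid edges
err = np.max(np.abs(comm_f[100:-100] - (-1j) * hbar * f[100:-100]))
print(err)  # small: the commutator acts as multiplication by hbar/i
```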
The reason these are called “incompatible operators” is that an eigenfunction of one operator is in general not an eigenfunction of the other: if \hat{x}ϕ(x) = {x}_{0}ϕ(x), then
\hat{x}\hat{p}ϕ(x) = {x}_{0}\hat{p}ϕ(x) − {ℏ\over i}ϕ(x).
| (8.36) |
If ϕ(x) were also an eigenstate of \hat{p} with eigenvalue {p}_{0}, we would find the contradiction {x}_{0}{p}_{0} = {x}_{0}{p}_{0} − {ℏ\over i}.
Now what happens if we initially measure \hat{x} = {x}_{0} with finite accuracy Δx? This means that the wave function collapses to a Gaussian form of root-mean-square width Δx,
ϕ(x) ∝\mathop{ exp}\nolimits \left(−{(x − {x}_{0})}^{2}∕(4Δ{x}^{2})\right).
| (8.37) |
Completing the square in the Fourier integral, it can be shown that the same state is a superposition of momentum eigenstates,
ϕ(x) ∝{ \mathop{∫}\nolimits}_{−∞}^{∞}\mathop{ exp}\nolimits \left(−Δ{x}^{2}{p}^{2}∕{ℏ}^{2}\right){e}^{ip(x−{x}_{0})∕ℏ}\,dp,
| (8.38) |
and writing the momentum weight as \mathop{exp}\nolimits \left(−{p}^{2}∕(4Δ{p}^{2})\right) we read off that Δp = ℏ∕(2Δx). Thus we conclude that at best
ΔxΔp = ℏ∕2,
| (8.39) |
which is the celebrated Heisenberg uncertainty relation.
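The Gaussian case can be checked numerically as well. The sketch below (my illustration, with ℏ = 1 and an arbitrarily chosen Δx = 0.7) samples a wave packet normalised so that Δx is the r.m.s. position width, obtains the momentum amplitudes with a fast Fourier transform, and computes the r.m.s. widths of both probability densities; their product comes out at ℏ∕2.

```python
import numpy as np

hbar = 1.0
dx_rms = 0.7                      # chosen position uncertainty Delta x
n = 4096
x = np.linspace(-40.0, 40.0, n, endpoint=False)
h = x[1] - x[0]

# Gaussian packet with r.m.s. width Delta x: |phi|^2 ~ exp(-x^2/(2 dx^2))
phi = np.exp(-x**2 / (4.0 * dx_rms**2))
phi /= np.sqrt(np.sum(np.abs(phi)**2) * h)

# r.m.s. spread in position (the mean is zero by symmetry)
sx = np.sqrt(np.sum(x**2 * np.abs(phi)**2) * h)

# Momentum amplitudes via FFT; p = hbar * k
k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
phik = np.fft.fft(phi)
wk = np.abs(phik)**2
wk /= wk.sum()
sp = hbar * np.sqrt(np.sum(k**2 * wk))

print(sx, sp, sx * sp)   # product is hbar/2 = 0.5
```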