5.3 Two-by-Two Matrices: Index Notation and Multiplication

5.3.1 Basis Vectors and Index Notation

Vectors

The vectors
\[
e_{1} = \begin{pmatrix} 1 \\ 0 \end{pmatrix},\quad
e_{2} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\]
are called the basis vectors of $\mathbb{R}^{2}$.

Any arbitrary vector a ∈ {ℝ}^{2} is written as a linear combination
\begin{eqnarray}
a = a_{1}e_{1} + a_{2}e_{2} = \sum_{i=1}^{2} a_{i}e_{i}. & & (5.24)
\end{eqnarray}

In this representation, Einstein's summation convention is sometimes used: we write $a = \sum_{i=1}^{2} a_{i}e_{i} = a_{i}e_{i}$, omitting the sum symbol in order to simplify the notation. The sum is automatically carried out over repeated indices; here, the repeated index is $i$.
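The decomposition (5.24) can be checked numerically. A minimal sketch using NumPy; the components $a_1 = 3$, $a_2 = -2$ are arbitrary example values:

```python
import numpy as np

# Basis vectors of R^2
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Arbitrary example components a_1, a_2
a1, a2 = 3.0, -2.0

# Linear combination a = a_1 e_1 + a_2 e_2, Eq. (5.24)
a = a1 * e1 + a2 * e2
print(a)  # [ 3. -2.]
```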

Matrices

The element $A_{ij}$ of a matrix $A$ is the entry in its $i$-th row and its $j$-th column. For two-by-two matrices, this reads
\[
A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}.
\]

Note: be very careful not to mix up the row and the column index!

Matrix operating on vector

The result of a linear mapping x → y = Ax can be written in index form, too:

\begin{eqnarray}
x = \begin{pmatrix} x_{1} \\ x_{2} \end{pmatrix}
\;\rightarrow\;
Ax = y = \begin{pmatrix} y_{1} \\ y_{2} \end{pmatrix}
\quad\Leftrightarrow\quad
y_{i} = \sum_{j=1}^{2} A_{ij}x_{j}. & & (5.25)
\end{eqnarray}

This means that the first and second components, {y}_{1} and {y}_{2}, of y = Ax are given by

\begin{eqnarray}
y_{1} = \sum_{j=1}^{2} A_{1j}x_{j},\quad
y_{2} = \sum_{j=1}^{2} A_{2j}x_{j}. & & (5.26)
\end{eqnarray}

Note that the index j runs over the columns of the matrix A.
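The component form (5.26) translates directly into a sum over the column index $j$. A small sketch, assuming NumPy; the matrix $A$ and vector $x$ are arbitrary example values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # example matrix
x = np.array([5.0, 6.0])     # example vector

# y_i = sum_j A_ij x_j, Eqs. (5.25)-(5.26)
y = np.array([sum(A[i, j] * x[j] for j in range(2)) for i in range(2)])

print(y)      # [17. 39.]
print(A @ x)  # same result via NumPy's built-in matrix-vector product
```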

5.3.2 Multiplication of a Matrix with a Scalar

This is simple,

\begin{eqnarray}
\lambda \begin{pmatrix} a & b \\ c & d \end{pmatrix}
= \begin{pmatrix} \lambda a & \lambda b \\ \lambda c & \lambda d \end{pmatrix}. & & (5.27)
\end{eqnarray}

5.3.3 Matrix Multiplication: Definition

A matrix A moves a vector x into a new vector y = Ax. This new vector can again be transformed into another vector y' by acting with another matrix B on it: y' = By = BAx. The combined operation C = BA transforms the original vector x into y' in one single step. This matrix product is calculated according to

\begin{eqnarray}
B = \begin{pmatrix} a_{2} & b_{2} \\ c_{2} & d_{2} \end{pmatrix},\quad
A = \begin{pmatrix} a_{1} & b_{1} \\ c_{1} & d_{1} \end{pmatrix}
\;\rightsquigarrow\;
BA = \begin{pmatrix}
a_{2}a_{1} + b_{2}c_{1} & a_{2}b_{1} + b_{2}d_{1} \\
c_{2}a_{1} + d_{2}c_{1} & c_{2}b_{1} + d_{2}d_{1}
\end{pmatrix}. & & (5.28)
\end{eqnarray}
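The four entries of the product in Eq. (5.28) can be verified numerically; the eight matrix elements below are arbitrary example values:

```python
import numpy as np

a1, b1, c1, d1 = 1.0, 2.0, 3.0, 4.0   # entries of A (example values)
a2, b2, c2, d2 = 5.0, 6.0, 7.0, 8.0   # entries of B (example values)

A = np.array([[a1, b1], [c1, d1]])
B = np.array([[a2, b2], [c2, d2]])

# Product BA written out entry by entry, following Eq. (5.28)
BA = np.array([[a2 * a1 + b2 * c1, a2 * b1 + b2 * d1],
               [c2 * a1 + d2 * c1, c2 * b1 + d2 * d1]])

print(np.allclose(BA, B @ A))  # True
```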

In general, the matrix product does not commute, i.e.,

\begin{eqnarray}
AB \neq BA. & & (5.29)
\end{eqnarray}

This means that in contrast to real or complex numbers, the result of a multiplication of two matrices A and B depends on the order of A and B.

The commutator [A,B] of two matrices A and B is defined as [A,B] = AB − BA.

The commutator plays a central role in quantum mechanics, where classical variables like position $x$ and momentum $p$ are replaced by operators (matrices) which in general do not commute, i.e., their commutator is nonzero.

Example:

\begin{eqnarray}
\sigma_{z} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},\quad
\sigma_{x} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} & & (5.30) \\
\sigma_{z}\sigma_{x} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},\quad
\sigma_{x}\sigma_{z} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \neq \sigma_{z}\sigma_{x},\quad
[\sigma_{z},\sigma_{x}] = 2\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. & &
\end{eqnarray}
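The Pauli-matrix example (5.30) can be reproduced numerically, which makes the non-commutativity explicit:

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]])
sigma_x = np.array([[0, 1], [1, 0]])

zx = sigma_z @ sigma_x   # sigma_z sigma_x
xz = sigma_x @ sigma_z   # sigma_x sigma_z, a different matrix

# Commutator [sigma_z, sigma_x] = sigma_z sigma_x - sigma_x sigma_z
comm = zx - xz
print(comm.tolist())  # [[0, 2], [-2, 0]]
```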

5.3.4 Matrix Multiplication: Index Notation

The abstract way to write a matrix multiplication with indices:

\begin{eqnarray}
C = BA \;\rightsquigarrow\; C_{ij} = \sum_{k=1}^{2} B_{ik}A_{kj}
\quad (= B_{ik}A_{kj}\ \text{in the summation convention}). & & (5.31)
\end{eqnarray}

To get the element in the $i$-th row and $j$-th column of the product $BA$, take the scalar product of the $i$-th row vector of $B$ with the $j$-th column vector of $A$. This looks complicated, but it is not: it is just another formulation of our definition, Eq. (5.28).
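The index form (5.31) becomes an explicit loop over the summation index $k$. A sketch comparing it with NumPy's built-in product; $B$ and $A$ are arbitrary example matrices:

```python
import numpy as np

B = np.array([[1.0, 2.0], [3.0, 4.0]])   # example matrix
A = np.array([[5.0, 6.0], [7.0, 8.0]])   # example matrix

# C_ij = sum_k B_ik A_kj, Eq. (5.31):
# the i-th row of B is contracted with the j-th column of A
C = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        for k in range(2):
            C[i, j] += B[i, k] * A[k, j]

print(np.allclose(C, B @ A))  # True
```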