Comprehensive Study Notes

So far we have considered only cases where the potential energy $V(x)$ is a constant. This makes the Schrödinger equation a second-order linear homogeneous differential equation with constant coefficients, which we know how to solve. For cases in which $V$ varies with $x$, a useful approach is to try a power-series solution of the Schrödinger equation.

To illustrate the method, consider the differential equation

\(
\begin{equation}
y^{\prime \prime}(x)+c^{2} y(x)=0 \tag{4.1}
\end{equation}
\)

where $c^{2}>0$. Of course, this differential equation has constant coefficients, but we can solve it with the power-series method if we want. Let us first find the solution by using the auxiliary equation, which is $s^{2}+c^{2}=0$. We find $s= \pm i c$. Recalling the work in Section 2.2 [Eqs. (2.10) and (4.1) are the same], we get trigonometric solutions when the roots of the auxiliary equation are pure imaginary:

\(
\begin{equation}
y=A \cos c x+B \sin c x \tag{4.2}
\end{equation}
\)

where $A$ and $B$ are the constants of integration. A different form of (4.2) is

\(
\begin{equation}
y=D \sin (c x+e) \tag{4.3}
\end{equation}
\)

where $D$ and $e$ are arbitrary constants. Using the formula for the sine of the sum of two angles, we can show that (4.3) is equivalent to (4.2).
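Expanding $\sin(cx+e)=\sin cx\cos e+\cos cx\sin e$ shows that (4.3) matches (4.2) with $A=D\sin e$ and $B=D\cos e$. This can be spot-checked numerically; the following sketch uses arbitrary sample values for $D$, $e$, and $c$:

```python
import math

# Hypothetical sample values for the arbitrary constants D, e and for c.
D, e, c = 2.0, 0.7, 1.5

# By the angle-addition formula, D*sin(c*x + e) = A*cos(c*x) + B*sin(c*x)
# with A = D*sin(e) and B = D*cos(e).
A = D * math.sin(e)
B = D * math.cos(e)

for x in [-1.0, 0.0, 0.3, 2.5]:
    lhs = D * math.sin(c * x + e)                    # form (4.3)
    rhs = A * math.cos(c * x) + B * math.sin(c * x)  # form (4.2)
    assert abs(lhs - rhs) < 1e-12
```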

Now let us solve (4.1) using the power-series method. We start by assuming that the solution can be expanded in a Taylor series (see Prob. 4.1) about $x=0$; that is, we assume that

\(
\begin{equation}
y(x)=\sum_{n=0}^{\infty} a_{n} x^{n}=a_{0}+a_{1} x+a_{2} x^{2}+a_{3} x^{3}+\cdots \tag{4.4}
\end{equation}
\)

where the $a$'s are constant coefficients to be determined so as to satisfy (4.1). Differentiating (4.4), we have

\(
\begin{equation}
y^{\prime}(x)=a_{1}+2 a_{2} x+3 a_{3} x^{2}+\cdots=\sum_{n=1}^{\infty} n a_{n} x^{n-1} \tag{4.5}
\end{equation}
\)

where we assumed that term-by-term differentiation is valid for the series. (This is not always true for infinite series.) For $y^{\prime \prime}$, we have

\(
\begin{equation}
y^{\prime \prime}(x)=2 a_{2}+3(2) a_{3} x+\cdots=\sum_{n=2}^{\infty} n(n-1) a_{n} x^{n-2} \tag{4.6}
\end{equation}
\)

Substituting (4.4) and (4.6) into (4.1), we get

\(
\begin{equation}
\sum_{n=2}^{\infty} n(n-1) a_{n} x^{n-2}+\sum_{n=0}^{\infty} c^{2} a_{n} x^{n}=0 \tag{4.7}
\end{equation}
\)

We want to combine the two sums in (4.7). Provided certain conditions are met, we can add two infinite series term by term to get their sum:

\(
\begin{equation}
\sum_{j=0}^{\infty} b_{j} x^{j}+\sum_{j=0}^{\infty} c_{j} x^{j}=\sum_{j=0}^{\infty}\left(b_{j}+c_{j}\right) x^{j} \tag{4.8}
\end{equation}
\)

To apply (4.8) to the two sums in (4.7), we want the limits in each sum to be the same and the powers of $x$ to be the same. We therefore change the summation index in the first sum in (4.7), defining $k$ as $k \equiv n-2$. The limits $n=2$ to $\infty$ correspond to $k=0$ to $\infty$ and use of $n=k+2$ gives
\(
\begin{equation}
\sum_{n=2}^{\infty} n(n-1) a_{n} x^{n-2}=\sum_{k=0}^{\infty}(k+2)(k+1) a_{k+2} x^{k}=\sum_{n=0}^{\infty}(n+2)(n+1) a_{n+2} x^{n} \tag{4.9}
\end{equation}
\)
The last equality in (4.9) is valid because the summation index is a dummy variable; it makes no difference what letter we use to denote this variable. For example, the sums $\sum_{i=1}^{3} c_{i} x^{i}$ and $\sum_{m=1}^{3} c_{m} x^{m}$ are equal because only the dummy variables in the two sums differ. This equality is easy to see if we write out the sums:

\(
\sum_{i=1}^{3} c_{i} x^{i}=c_{1} x+c_{2} x^{2}+c_{3} x^{3} \quad \text { and } \quad \sum_{m=1}^{3} c_{m} x^{m}=c_{1} x+c_{2} x^{2}+c_{3} x^{3}
\)

In the last equality in (4.9), we simply changed the symbol denoting the summation index from $k$ to $n$.
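The index shift in (4.9) can be confirmed numerically for a truncated sum. In this sketch the coefficients $a_n$ are arbitrary sample values (not solutions of (4.1)); the point is only that the two forms of the sum agree term by term:

```python
# Spot-check of the index shift n -> k = n - 2 in (4.9), truncated at N terms.
# The coefficients a[n] are hypothetical sample values.
N = 10
a = [1.0 / (n + 1) for n in range(N + 1)]
x = 0.8

# Left side of (4.9): sum over n = 2, ..., N.
original = sum(n * (n - 1) * a[n] * x**(n - 2) for n in range(2, N + 1))
# Right side: same sum rewritten over k = 0, ..., N - 2 with n = k + 2.
shifted = sum((k + 2) * (k + 1) * a[k + 2] * x**k for k in range(0, N - 1))

assert abs(original - shifted) < 1e-12
```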

The integration variable in a definite integral is also a dummy variable, since the value of a definite integral is unaffected by what letter we use for this variable:

\(
\begin{equation}
\int_{a}^{b} f(x) d x=\int_{a}^{b} f(t) d t \tag{4.10}
\end{equation}
\)

Using (4.9) in (4.7), we find, after applying (4.8), that

\(
\begin{equation}
\sum_{n=0}^{\infty}\left[(n+2)(n+1) a_{n+2}+c^{2} a_{n}\right] x^{n}=0 \tag{4.11}
\end{equation}
\)

If (4.11) is to be true for all values of $x$, then the coefficient of each power of $x$ must vanish. To see this, consider the equation

\(
\begin{equation}
\sum_{j=0}^{\infty} b_{j} x^{j}=0 \tag{4.12}
\end{equation}
\)

Putting $x=0$ in (4.12) shows that $b_{0}=0$. Taking the first derivative of (4.12) with respect to $x$ and then putting $x=0$ shows that $b_{1}=0$. Taking the $n$th derivative and putting $x=0$ gives $n!\,b_{n}=0$, so $b_{n}=0$. Thus, from (4.11), we have

\(
\begin{gather}
(n+2)(n+1) a_{n+2}+c^{2} a_{n}=0 \tag{4.13}\\
a_{n+2}=-\frac{c^{2}}{(n+1)(n+2)} a_{n} \tag{4.14}
\end{gather}
\)

Equation (4.14) is a recursion relation. If we know the value of $a_{0}$, we can use (4.14) to find $a_{2}, a_{4}, a_{6}, \ldots$ If we know $a_{1}$, we can find $a_{3}, a_{5}, a_{7}, \ldots$. Since there is no restriction on the values of $a_{0}$ and $a_{1}$, they are arbitrary constants, which we denote by $A$ and $Bc$:

\(
\begin{equation}
a_{0}=A, \quad a_{1}=B c \tag{4.15}
\end{equation}
\)

Using (4.14), we find for the coefficients

\(
\begin{align}
a_{0}=A, \quad a_{2} & =-\frac{c^{2} A}{1 \cdot 2}, \quad a_{4}=\frac{c^{4} A}{4 \cdot 3 \cdot 2 \cdot 1}, \quad a_{6}=-\frac{c^{6} A}{6!}, \ldots \\
a_{2 k} & =(-1)^{k} \frac{c^{2 k} A}{(2 k)!}, \quad k=0,1,2,3, \ldots \tag{4.16}\\
a_{1}=B c, \quad a_{3} & =-\frac{c^{3} B}{2 \cdot 3}, \quad a_{5}=\frac{c^{5} B}{5 \cdot 4 \cdot 3 \cdot 2}, \quad a_{7}=-\frac{c^{7} B}{7!}, \ldots \\
a_{2 k+1} & =(-1)^{k} \frac{c^{2 k+1} B}{(2 k+1)!}, \quad k=0,1,2, \ldots \tag{4.17}
\end{align}
\)
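The recursion (4.14) with starting values (4.15) can be checked against the closed forms (4.16) and (4.17) directly. In this sketch, $A$, $B$, and $c$ are hypothetical sample values:

```python
import math

# Generate coefficients from the recursion (4.14), a_{n+2} = -c^2 a_n / ((n+1)(n+2)),
# starting from a_0 = A and a_1 = B*c as in (4.15). A, B, c are sample values.
A, B, c = 1.3, -0.4, 2.0
N = 12

a = [0.0] * (N + 1)
a[0], a[1] = A, B * c
for n in range(N - 1):
    a[n + 2] = -c**2 / ((n + 1) * (n + 2)) * a[n]

# Compare with the closed forms (4.16) and (4.17).
for k in range(N // 2):
    assert math.isclose(a[2 * k], (-1)**k * c**(2 * k) * A / math.factorial(2 * k))
    assert math.isclose(a[2 * k + 1], (-1)**k * c**(2 * k + 1) * B / math.factorial(2 * k + 1))
```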

From (4.4), (4.16), and (4.17), we have

\(
\begin{align}
& y=\sum_{n=0}^{\infty} a_{n} x^{n}=\sum_{n=0,2,4, \ldots}^{\infty} a_{n} x^{n}+\sum_{n=1,3,5, \ldots}^{\infty} a_{n} x^{n} \tag{4.18}\\
& y=A \sum_{k=0}^{\infty}(-1)^{k} \frac{c^{2 k} x^{2 k}}{(2 k)!}+B \sum_{k=0}^{\infty}(-1)^{k} \frac{c^{2 k+1} x^{2 k+1}}{(2 k+1)!} \tag{4.19}
\end{align}
\)

The two series in (4.19) are the Taylor series for $\cos c x$ and $\sin c x$ (Prob. 4.2). Hence, in agreement with (4.2), we have $y=A \cos c x+B \sin c x$.
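As a final numerical check, the truncated series (4.19) can be summed and compared with $A \cos cx + B \sin cx$. Here $A$, $B$, $c$, the truncation length, and the test points are all hypothetical sample values:

```python
import math

# Sum the first K terms of each series in (4.19) and compare with the
# closed-form solution (4.2). A, B, c are sample values.
A, B, c = 0.9, 1.7, 1.2
K = 20  # terms kept in each truncated sum

def y_series(x):
    even = sum((-1)**k * (c * x)**(2 * k) / math.factorial(2 * k) for k in range(K))
    odd = sum((-1)**k * (c * x)**(2 * k + 1) / math.factorial(2 * k + 1) for k in range(K))
    return A * even + B * odd

for x in [-2.0, 0.0, 0.5, 3.0]:
    assert math.isclose(y_series(x), A * math.cos(c * x) + B * math.sin(c * x), abs_tol=1e-9)
```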