
Some context: I've posted this question on Physics SE and Stack Overflow. The former had nothing to offer; on the latter, a helpful commenter agreed that the phase looking off is one of the potential issues, along with some differences in the solution's dynamics (and they also alerted me to the existence of this part of SE, which I had no idea existed!). Hopefully the question has a bit more success here.

I'm attempting to use the Crank-Nicolson scheme to solve the TDSE in 1D. The initial wave is a real Gaussian that has been normalised with respect to its probability density. As the solution evolves, a depression grows in the central peak of the real part of the wave, and the central trough of the imaginary part is perhaps a bit higher than I expect (image below).

Does this behaviour seem reasonable? I have searched around and not seen similar questions or figures. I've tested another person's code from GitHub and it exhibits the same behaviour, which makes me feel a bit better. But I still think the central peak should simply decrease in height and increase in width. I assume the likelihood of getting a physics-based explanation is relatively low here, but a computational explanation of errors I may have made is more likely.

I'm happy to give more information. Thanks in advance!

Here's a link to a GIF of the time evolution with positive dt:

https://i.sstatic.net/axIfq.jpg

And a GIF with negative dt:

https://i.sstatic.net/DTqeh.jpg

This change of the sign of dt came about from seeing this question: https://scicomp.stackexchange.com/q/6725 and testing how it changes the solution (which it does, quite a bit!).

An update from hardmath's comment: the equation I'm attempting to solve is the time-dependent Schroedinger equation with no potential,
$$i \hbar \frac{ \partial }{ \partial t} u(x, t) = - \frac{ \hbar^{2} }{2m} \frac{\partial^2}{\partial x^2} u(x, t)$$

Applying the Crank-Nicolson method results in (for me):
$$u_{j}^{n+1}+\frac{1}{2}\frac{i\Delta t}{\hbar}\frac{-\hbar^{2}}{2m} \frac{u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{\Delta x^2}= u_{j}^{n}-\frac{1}{2}\frac{i\Delta t}{\hbar}\frac{-\hbar^{2}}{2m} \frac{u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{\Delta x^2}$$
where the superscript is the time index and the subscript is the space index. In matrix form this reads
$$A_{l} u^{n+1} = A_{r} u^{n}$$
where:
$$A_{l} = \begin{pmatrix} 1-\alpha & \alpha & \cdots & \cdots & 0 \\ \alpha & 1-2\alpha & \alpha & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & \alpha & 1-2\alpha & \alpha \\ 0 & \cdots & \cdots & \alpha & 1-\alpha \\ \end{pmatrix}$$
$$A_{r} = \begin{pmatrix} 1-\beta & \beta & \cdots & \cdots & 0 \\ \beta & 1-2\beta & \beta & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & \beta & 1-2\beta & \beta \\ 0 & \cdots & \cdots & \beta & 1-\beta \\ \end{pmatrix}$$

The coefficients $\alpha$ and $\beta$ collect the terms arising from applying the Crank-Nicolson method to this form of the Schroedinger equation (working in natural units $\hbar=m=1$). Matching the discretised equation above (and `alpha`, `beta` in the code below), they are
$$\alpha = -\frac{1}{4} \frac{i \Delta t}{ \Delta x^{2} }$$
$$\beta = \frac{1}{4} \frac{i \Delta t}{ \Delta x^{2} }$$
The matrices are symmetric and tridiagonal, except at the corners, where the imposed Dirichlet boundary conditions ($u(x_{min},t)=u(x_{max},t)=0$) mean the elements that would lie 'outside' the matrices are known to be $0$, which changes the corner elements accordingly.

The wave function is $u$, and its superscript indicates the time level: if $n$ is the current time, the Crank-Nicolson method is used to estimate the wave function at the next time level, $n+1$, i.e. after taking a timestep $dt$.
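
As a cross-check of the matrix construction and the Thomas solve, here is a minimal sketch of a single Crank-Nicolson step built with scipy.sparse instead (an independent probe, not part of my script; it assumes the same grid, timestep, coefficients and corner treatment as the code below):

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Grid, timestep and coefficients copied from the script below (hbar = m = 1).
M, half_width = 1200, 100
dx = 2 * half_width / M
dt = 20 / 200
x = np.linspace(-half_width, half_width, M + 1)
alpha = -1j * dt / (4 * dx**2)
beta = -alpha

def cn_matrix(c):
    # Tridiagonal matrix with diagonal 1 - 2c, off-diagonals c, and the
    # Dirichlet corner entries 1 - c described above.
    main = np.full(M + 1, 1 - 2 * c, dtype=complex)
    main[0] = main[-1] = 1 - c
    off = np.full(M, c, dtype=complex)
    return diags([off, main, off], [-1, 0, 1], format='csc')

Al, Ar = cn_matrix(alpha), cn_matrix(beta)

# One step applied to the same normalised Gaussian initial condition; the
# discrete norm should stay (very close to) 1.
u_init = np.pi ** -0.25 * np.exp(-x**2 / 2)
u_next = spsolve(Al, Ar @ u_init)
print("norm after one step:", np.sum(dx * np.abs(u_next)**2))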

And here is the part of my code relevant to solving the 1D TDSE (pretty much the entire thing except the plotting):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Define function for norm.
def normf(dxc, uc, ic):
    return sum(dxc * np.square(np.abs(uc[ic, :])))

# Define function for expectation value of position.
def xexpf(dxc, xc, uc, ic):
    return sum(dxc * xc * np.square(np.abs(uc[ic, :])))

# Define function for expectation value of squared position.
def xexpsf(dxc, xc, uc, ic):
    return sum(dxc * np.square(xc) * np.square(np.abs(uc[ic, :])))

# Define function for standard deviation.
def sdaf(xexpc, xexpsc, ic):
    return np.sqrt(xexpsc[ic] - np.square(xexpc[ic]))

# Time t: t0 <= t <= tf. There are N steps at which to evaluate the CN scheme;
# the time interval is dt. decp: variable for plotting to a certain number of
# decimal places.
t0 = 0
tf = 20
N = 200
dt = tf / N
t = np.linspace(t0, tf, num = N + 1, endpoint = True)
decp = str(dt)[::-1].find('.')

# Initialise array for filling with norm values at each time step.
norm = np.zeros(len(t))

# Initialise array for expectation value of position.
xexp = np.zeros(len(t))

# Initialise array for expectation value of squared position.
xexps = np.zeros(len(t))

# Initialise array for alternate standard deviation.
sda = np.zeros(len(t))

# Position x: -a <= x <= a. M is an even number. There are M + 1 discrete
# positions in total, so that the points are symmetric and centred at x = 0.
a = 100
M = 1200
dx = (2 * a) / M
x = np.linspace(-a, a, num = M + 1, endpoint = True)

# The Gaussian function u diffuses over time. sd sets the width of the
# Gaussian; u0 is the initial Gaussian at t0.
sd = 1
var = np.power(sd, 2)
mu = 0
u0 = np.sqrt(1 / np.sqrt(np.pi * var)) * np.exp(-np.power(x - mu, 2) / (2 * \
                                                                        var))
u = np.zeros([len(t), len(x)], dtype = 'complex_')
u[0, :] = u0

# Normalise u.
u[0, :] = u[0, :] / np.sqrt(normf(dx, u, 0))

# Set coefficients of CN scheme.
alpha = dt * -1j / (4 * np.power(dx, 2))
beta = dt * 1j / (4 * np.power(dx, 2))

# Tridiagonal matrices Al and Ar. The system with Al is solved using the
# Thomas algorithm.
Al = np.zeros([len(x), len(x)], dtype = 'complex_')

for i in range(0, M):
    Al[i + 1, i] = alpha
    Al[i, i] = 1 - (2 * alpha)
    Al[i, i + 1] = alpha

# Corner elements for BC's.
Al[M, M], Al[0, 0] = 1 - alpha, 1 - alpha

Ar = np.zeros([len(x), len(x)], dtype = 'complex_')

for i in range(0, M):
    Ar[i + 1, i] = beta
    Ar[i, i] = 1 - (2 * beta)
    Ar[i, i + 1] = beta

# Corner elements for BC's.
Ar[M, M], Ar[0, 0] = 1 - beta, 1 - beta
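
# Optional sanity check (not needed for the scheme itself): the CN update
# matrix inv(Al) @ Ar should be numerically unitary, so the discrete norm of u
# ought to be conserved to round-off at every step. Left commented out since
# the dense solve is slow for large M.
# P = np.linalg.solve(Al, Ar)
# print('max deviation from unitarity:',
#       np.max(np.abs(np.conj(P.T) @ P - np.identity(len(x)))))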

# Thomas algorithm variables, following similar naming to the Wikipedia page.
# (Note: 'a' here overwrites the earlier half-width variable, which is no
# longer needed at this point.)
a = np.diag(Al, -1)
b = np.diag(Al)
c = np.diag(Al, 1)

NT = len(b)

cp = np.zeros(NT - 1, dtype = 'complex_')

for n in range(0, NT - 1):
    if n == 0:
        cp[n] = c[n] / b[n]
    else:
        cp[n] = c[n] / (b[n] - (a[n - 1] * cp[n - 1]))

d = np.zeros(NT, dtype = 'complex_')

dp = np.zeros(NT, dtype = 'complex_')

# Iterate over each time step to solve CN method. Maintain boundary
# conditions. Keep track of multiple values.
for i in range(0, N):
    # BC's.
    u[i, 0], u[i, M] = 0, 0

    # Find RHS.
    d = np.dot(Ar, u[i, :])

    for n in range(0, NT):
        if n == 0:
            dp[n] = d[n] / b[n]
        else:
            dp[n] = (d[n] - (a[n - 1] * dp[n - 1])) / (b[n] - (a[n - 1] * cp[n - 1]))

    nc = NT - 1
    while nc > -1:
        if nc == NT - 1:
            u[i + 1, nc] = dp[nc]
            nc -= 1
        else:
            u[i + 1, nc] = dp[nc] - (cp[nc] * u[i + 1, nc + 1])
            nc -= 1

    norm[i] = normf(dx, u, i)
    xexp[i] = xexpf(dx, x, u, i)
    xexps[i] = xexpsf(dx, x, u, i)
    sda[i] = sdaf(xexp, xexps, i)


# Fill in final norm value.
norm[N] = normf(dx, u, N)

# Fill in final position expectation value.
xexp[N] = xexpf(dx, x, u, N)

# Fill in final squared position expectation value.
xexps[N] = xexpsf(dx, x, u, N)

# Fill in final standard deviation value.
sda[N] = sdaf(xexp, xexps, N)
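
For reference, here is a minimal sketch of the closed-form free-particle evolution of the same initial Gaussian (assuming hbar = m = 1, zero initial momentum and sd = 1 as above), which the Crank-Nicolson output can be compared against directly to judge whether the dip in the real part is numerical or already present in the exact solution:

import numpy as np

# Closed-form free evolution of psi(x, 0) = (pi*sd^2)**(-1/4) * exp(-x^2/(2*sd^2))
# in natural units hbar = m = 1.
def exact_gaussian(xg, tg, sd=1.0):
    z = 1 + 1j * tg / sd**2
    return (np.pi * sd**2) ** -0.25 / np.sqrt(z) * np.exp(-xg**2 / (2 * sd**2 * z))

xg = np.linspace(-100, 100, 1201)
ue = exact_gaussian(xg, 5.0)
print("exact norm:", np.sum((xg[1] - xg[0]) * np.abs(ue)**2))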

Comments:

  • Minor point: "pretext" should likely be "context". Major point: It asks too much of your Readers to reverse engineer the differential equation you are trying to solve from your code. I understand that the code is an essential part of your Question, but referring to the underlying problem to be solved merely as "1D TDSE" leaves room for interpretation. You can post "typeset" mathematical notation here (see this brief introduction and its links to more detailed documentation). – hardmath (Aug 7, 2022 at 12:54)
  • If it's only about solving the TDSE with any method, better use a diagonalization method, i.e. calculate the exponential of the Hamiltonian directly. This has the same asymptotic effort, but is of higher order. – davidhigh (Aug 7, 2022 at 21:24)
  • @hardmath Updated the question on both of your notes. (Aug 8, 2022 at 3:35)
  • @davidhigh I am trying to teach myself numerical methods, so applying Crank-Nicolson is non-negotiable here. (Aug 8, 2022 at 3:38)
  • If you can't unambiguously say whether your solution is right or wrong, you need to employ the Method of Manufactured Solutions. (Aug 8, 2022 at 4:00)
