
I want to simulate 1D diffusion with a constant diffusion coefficient using the Crank-Nicolson method. $$\frac{\partial u (x,t)}{\partial t} = D \frac{\partial^2 u(x,t)}{\partial x^2}.$$

I take an average of the spatial second-difference approximations at the time levels $t_j$ and $t_{j+1}$:

$$\frac{u_i^{j+1} -u_i^j}{\Delta t} \approx D \frac{(u_{i+1}^{j+1} - 2u_i^{j+1} + u_{i-1}^{j+1})+ (u_{i+1}^{j} - 2u_i^{j} + u_{i-1}^{j})}{2 \Delta x^2}$$

Letting $\alpha = D \frac{\Delta t}{\Delta x^2}$ yields: $$-\alpha u_{i+1}^{j+1} + 2(1+ \alpha)u_{i}^{j+1} - \alpha u_{i-1}^{j+1}= \alpha u_{i+1}^{j} + 2(1- \alpha)u_{i}^{j} + \alpha u_{i-1}^{j}.$$

In matrix form, $$Au^{j+1} = Bu^{j},$$ where $A$ is the tridiagonal matrix $$A = \begin{bmatrix} b_0 & c_0 & & & & & & 0 \\ a_1 & b_1 & c_1 & & & & & \\ & a_2 & b_2 & c_2 & & & & \\ & & \ddots & \ddots & \ddots & & & \\ & & & a_i & b_i & c_i & & \\ & & & &\ddots & \ddots & \ddots & \\ & & & & & a_{n_x -1} & b_{n_x -1} & c_{n_x -1} \\ 0 & & & & & & a_{n_x} & b_{n_x} \end{bmatrix} = \begin{bmatrix} ? & ? & & & & & & 0 \\ -\alpha & 2(1+\alpha) & -\alpha & & & & & \\ & -\alpha & 2(1+\alpha) & -\alpha & & & & \\ & & \ddots & \ddots & \ddots & & & \\ & & & -\alpha & 2(1+\alpha) & -\alpha & &\\ & & & &\ddots & \ddots & \ddots & \\ & & & & & -\alpha & 2(1+\alpha) & -\alpha \\ 0 & & & & & & ? & ? \end{bmatrix}$$ and $B$ is the tridiagonal matrix $$B = \begin{bmatrix} ? & ? & & & & & & 0 \\ \alpha & 2(1-\alpha) & \alpha & & & & & \\ & \alpha & 2(1-\alpha) & \alpha & & & & \\ & & \ddots & \ddots & \ddots & & & \\ & & & \alpha & 2(1-\alpha) & \alpha & &\\ & & & &\ddots & \ddots & \ddots & \\ & & & & & \alpha & 2(1-\alpha) & \alpha \\ 0 & & & & & & ? & ? \end{bmatrix},$$ with the rows marked "?" to be determined by the boundary conditions.

I have insulated (zero-flux) boundary conditions, so to determine the top rows I first introduce a fictitious point $u_{-1}$ and impose the centered approximation of the boundary condition:

$$\frac{u_{1}^{j+1} - u_{-1}^{j+1}}{2 \Delta x} = \text{Flux}(0) =0 \implies u_{-1}^{j+1} = u_{1}^{j+1},$$ and similarly $u_{-1}^{j} = u_{1}^{j}$ at the old time level.

Implying: $$-\alpha u_{1}^{j+1} + 2(1+ \alpha)u_{0}^{j+1} - \alpha u_{-1}^{j+1}= \alpha u_{1}^{j} + 2(1- \alpha)u_{0}^{j} + \alpha u_{-1}^{j}$$ $$-2\alpha u_{1}^{j+1} + 2(1+ \alpha)u_{0}^{j+1} = 2\alpha u_{1}^{j} + 2(1- \alpha)u_{0}^{j}$$

So I set the top row of $A$ to be $\begin{bmatrix} 2(1+\alpha) & -2\alpha & ... \end{bmatrix}$ and the top row of $B$ to be $\begin{bmatrix} 2(1-\alpha) & 2\alpha & ... \end{bmatrix}$. A similar derivation applies to the bottom rows.
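To make the structure concrete, here is one way to assemble dense versions of $A$ and $B$ with these boundary rows (just a sketch for small tests; the implementation below never forms the matrices explicitly):

import numpy as np

def build_AB_central(nx, alpha):
    # Interior rows: [-alpha, 2(1+alpha), -alpha] for A and
    # [alpha, 2(1-alpha), alpha] for B; the first and last rows use the
    # ghost-point (central difference) closure derived above.
    A = (np.diag(np.full(nx + 1, 2*(1 + alpha)))
         + np.diag(np.full(nx, -alpha), 1)
         + np.diag(np.full(nx, -alpha), -1))
    A[0, 1] = A[-1, -2] = -2*alpha

    B = (np.diag(np.full(nx + 1, 2*(1 - alpha)))
         + np.diag(np.full(nx, alpha), 1)
         + np.diag(np.full(nx, alpha), -1))
    B[0, 1] = B[-1, -2] = 2*alpha
    return A, B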

The problem is that this does not do what I want: mass is not conserved, so the zero-flux boundary condition is evidently not being honored. Here is a sample implementation in Python. I first create a function that multiplies a vector by the matrix $B$ without forming $B$ explicitly, to save memory:

import numpy as np

def mltply_B(u, alpha):
    # Interior rows of B: alpha*u[i-1] + 2*(1-alpha)*u[i] + alpha*u[i+1],
    # assembled from shifted copies of u instead of forming B explicitly.
    u_i = 2*(1-alpha)*u
    u_i = np.insert(u_i, 0, 0)
    u_i = np.append(u_i, 0)

    u_iplus1 = alpha*u
    u_iplus1 = np.append(u_iplus1, [0, 0])

    u_iminus1 = alpha*u
    u_iminus1 = np.insert(u_iminus1, 0, 0)
    u_iminus1 = np.insert(u_iminus1, 0, 0)

    B_u = u_i + u_iplus1 + u_iminus1
    B_u = B_u[1:-1]

    # Boundary rows of B. The boundary equations are divided by 2 here
    # (matching b[0] = b[-1] = 1 + alpha in the driver below), so the
    # rows are [1-alpha, alpha, 0, ...] and [..., 0, alpha, 1-alpha].
    B_u[0] = (1-alpha)*u[0] + alpha*u[1]
    B_u[-1] = (1-alpha)*u[-1] + alpha*u[-2]

    return B_u
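To check the routine (a quick test of my own, not part of the simulation), compare it against an explicit dense matrix-vector product; the boundary rows here are the derived rows divided by 2, matching the driver further below:

nx, alpha = 10, 0.07
B = np.diag(np.full(nx + 1, 2*(1 - alpha)))
B += np.diag(np.full(nx, alpha), 1) + np.diag(np.full(nx, alpha), -1)
B[0, :2] = [1 - alpha, alpha]      # halved boundary rows, matching
B[-1, -2:] = [alpha, 1 - alpha]    # b[0] = b[-1] = 1 + alpha below

u_test = np.random.rand(nx + 1)
assert np.allclose(mltply_B(u_test, alpha), B @ u_test)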

Next, I implement the tridiagonal matrix algorithm (Thomas algorithm) to solve the linear system:

def TDMAsolver(a, b, c, d):
    # Thomas algorithm for a tridiagonal system with sub-, main and
    # super-diagonals a, b, c and right-hand side d.
    nf = len(d)  # number of equations
    ac, bc, cc, dc = map(np.array, (a, b, c, d))  # work on copies

    # Forward elimination.
    for it in range(1, nf):
        mc = ac[it-1] / bc[it-1]
        bc[it] = bc[it] - mc * cc[it-1]
        dc[it] = dc[it] - mc * dc[it-1]

    # Back substitution.
    xc = np.zeros_like(dc)
    xc[-1] = dc[-1] / bc[-1]
    for il in range(nf-2, -1, -1):
        xc[il] = (dc[il] - cc[il] * xc[il+1]) / bc[il]

    return xc
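As a quick sanity check (a sketch, separate from the main script), the solver can be compared against a dense solve on a random diagonally dominant system:

n = 8
a_t = np.random.rand(n - 1)       # sub-diagonal
b_t = np.random.rand(n) + 2.0     # main diagonal, kept dominant
c_t = np.random.rand(n - 1)       # super-diagonal
d_t = np.random.rand(n)

M = np.diag(b_t) + np.diag(a_t, -1) + np.diag(c_t, 1)
assert np.allclose(TDMAsolver(a_t, b_t, c_t, d_t), np.linalg.solve(M, d_t))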

I create a vector which looks like [4, 0, 0, ..., 0, 0, 4], take one step with nx = 150 and alpha = 0.07, and observe the output:

nx = 150
alpha = 0.07

u = np.zeros(nx+1)
u[0] = 4
u[-1] = 4

a = np.full(nx, -alpha)          # sub-diagonal of A
b = np.full(nx+1, 2*(1+alpha))   # main diagonal of A
b[0] = 1+alpha                   # boundary equations divided by 2
b[-1] = 1+alpha
c = np.full(nx, -alpha)          # super-diagonal of A

d = mltply_B(u, alpha)
u1 = TDMAsolver(a, b, c, d)

Now the output of

np.sum(u)

is 8.0, as expected, but the output of

np.sum(u1)

is roughly 7.49, a loss of about $1/16$ of the mass. If you run this in a loop, the mass gradually dwindles (after about 50 loops only half of the mass is left).
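For reference, the loop I mean is just repeated application of the two routines (a minimal sketch):

for step in range(50):
    d = mltply_B(u, alpha)
    u = TDMAsolver(a, b, c, d)
    print(step, np.sum(u))   # the total keeps decreasing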

If I initialize $u$ so that the mass is concentrated in the center, then there is very little loss, at least initially. Eventually, once a substantial amount of mass reaches the endpoints, mass leaves the system. In the eventual application I am interested in, I will be injecting a "source" at one of the boundaries every time step, so it is important for the behavior near the boundaries to be accurate.


Edit: If one-sided (right and left) differences are used to derive the boundary conditions, then we instead take:

$$\frac{u_{0}^{j+1} - u_{-1}^{j+1}}{\Delta x} = \text{Flux}(0) =0$$ $$u_{-1}^{j+1} = u_{0}^{j+1}$$

Implying: $$-\alpha u_{1}^{j+1} + 2(1+ \alpha)u_{0}^{j+1} - \alpha u_{-1}^{j+1}= \alpha u_{1}^{j} + 2(1- \alpha)u_{0}^{j} + \alpha u_{-1}^{j}$$ $$-\alpha u_{1}^{j+1} + (2+ \alpha)u_{0}^{j+1} = \alpha u_{1}^{j} + (2- \alpha)u_{0}^{j}$$

So we set the top row of $A$ to be $\begin{bmatrix} (2+\alpha) & -\alpha & ... \end{bmatrix}$ and the top row of $B$ to be $\begin{bmatrix} (2-\alpha) & \alpha & ... \end{bmatrix}$. (And again, a similar derivation applies to the bottom rows.)
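Wired into the script above, this variant only changes the boundary entries (a sketch; it reuses mltply_B and simply overwrites its boundary rows, keeping the off-diagonals at $-\alpha$ by leaving these rows unhalved):

b[0] = 2 + alpha                         # top row of A: [2+alpha, -alpha]
b[-1] = 2 + alpha                        # bottom row of A: [-alpha, 2+alpha]

d = mltply_B(u, alpha)
d[0] = (2 - alpha)*u[0] + alpha*u[1]     # top row of B: [2-alpha, alpha]
d[-1] = (2 - alpha)*u[-1] + alpha*u[-2]  # bottom row of B: [alpha, 2-alpha]

u1 = TDMAsolver(a, b, c, d)
print(np.sum(u1))   # equals np.sum(u) up to round-off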

This does conserve mass; however, multiple sources indicate that it is more accurate (second order rather than first) to use the centered difference at the boundary. So what gives? Is there a way to keep second-order accuracy at the boundary and also conserve mass?

  • Can you let me know the boundary condition that you are planning to impose on the problem? If it's Dirichlet, impose the BC strongly (make the diagonal 1 and the other entries zero, and substitute into the RHS). – ThivinAnandh Commented Jun 14, 2023 at 4:24
  • @ThivinAnandh I am sorry, I do not understand your comment. The boundary condition is that the flux at the ends is zero, i.e. the derivative at the ends is zero, which is a Neumann boundary condition. – Jbag1212 Commented Jun 14, 2023 at 14:14

1 Answer

I tried to run your code, and I think there is a small mistake in your derivation.

When you impose the boundary condition at the ends, you should substitute $u_{-1}^{j+1} = u_0^{j+1}$ into the equations, not $u_{-1}^{j+1} = u_1^{j+1}$.

When this modification is applied, which changes the $2(1 \pm \alpha)$ boundary entries of $A$ and $B$ to $(2 \pm \alpha)$, mass is conserved, even over multiple time steps.

Could you please check if this resolves your issue?


EDIT: One tip for checking mass conservation is to verify that the row and column sums of the resulting matrices are constant vectors, i.e. of the form $a\,[1, 1, \ldots, 1]^\top$; the matrices obtained above with the central difference for the insulating BC do not have this property.
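As an illustration (a sketch with small dense matrices): with the one-sided closure every column of $A$ and of $B$ sums to 2, so summing the equations in $Au^{j+1} = Bu^{j}$ gives $2\sum_i u_i^{j+1} = 2\sum_i u_i^{j}$; with the central-difference closure the boundary columns sum to $2 \pm \alpha$ instead, which is where the mass leaks.

import numpy as np

def column_sums_A(nx, alpha, central=True):
    # Dense A for the insulated boundary, central vs one-sided closure.
    A = (np.diag(np.full(nx + 1, 2*(1 + alpha)))
         + np.diag(np.full(nx, -alpha), 1)
         + np.diag(np.full(nx, -alpha), -1))
    if central:
        A[0, 1] = A[-1, -2] = -2*alpha       # rows [2(1+a), -2a, ...]
    else:
        A[0, 0] = A[-1, -1] = 2 + alpha      # rows [2+a, -a, ...]
    return A.sum(axis=0)

print(column_sums_A(6, 0.07, central=True))   # [2+a, 2-a, 2, 2, 2, 2-a, 2+a]
print(column_sums_A(6, 0.07, central=False))  # all entries equal 2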

  • Thank you, yes this works; however, please see the update to my question. – Jbag1212 Commented Jun 14, 2023 at 17:55
  • I didn't realise you were going for a central difference of the boundary condition; I will update the answer. Commented Jun 14, 2023 at 19:04
  • The first reference mentions accuracy and convergence analysis; do you happen to have references for them as well? Commented Jun 14, 2023 at 19:10
  • Unfortunately, I cannot find the other lecture notes for this specific link (math.toronto.edu/mpugh/Teaching/Mat1062/notes2.pdf); however, multiple other sources confirm that the central difference is accurate to second order. – Jbag1212 Commented Jun 14, 2023 at 19:24
