Continuous Formulation
If I understood everything correctly, you are trying to solve the heat equation on some domain $\Omega\subseteq \mathbb{R}^d$ with space-variant diffusivity $\kappa:\Omega\to [0,\infty)$,
a (heat-capacity-like) coefficient $\rho:\Omega\to(0,\infty)$, a forcing term $q:\Omega\to\mathbb{R}$, and Robin boundary conditions on the boundary $\partial\Omega$:
\begin{alignat}{3}
\newcommand{\bm}[1]{\boldsymbol{#1}}
\rho(\bm{x})\partial_t T(t,\bm{x}) &= \bm{\nabla}\cdot (\kappa(\bm{x}) \bm{\nabla} T(t,\bm{x})) + q(\bm{x}), &\quad& \bm{x} \in \Omega, \,\,t\in(0,\infty), \\
\eta T(t,\bm{x}) + \kappa(\bm{x})\partial_{\bm{n}}T(t,\bm{x}) &= \eta T_{amb}, &\quad& \bm{x}\in\partial\Omega, \,\, t\in(0,\infty), \\
T(0,\bm{x}) &= T_0(\bm{x}), &\quad& \bm{x}\in\Omega.
\end{alignat}
In the above $\bm{n}:\partial\Omega\to\mathcal{S}^{d-1}$ is the outward-facing unit
normal. For your specific case $d=1$, $\Omega = (0,1)$, $\partial\Omega = \{0,1\}$,
$n(0) = -1$, $n(1)=1$, and thus $\partial_{n}T(t,0) = -\partial_x T(t,0)$ and $\partial_n T(t,1) = \partial_x T(t,1)$.
Weak Formulation
You want to discretise the problem with the finite volume method, so let us multiply both sides of the equation by a test function $w:\Omega\to\mathbb{R}$ and integrate over $\Omega$:
\begin{align}
\int_{\Omega} \rho\, w\,\partial_t T &=
\int_{\Omega} w\,\bm{\nabla}\cdot (\kappa\bm{\nabla} T) + \int_{\Omega}w\,q \\
&= \int_{\Omega}\bm{\nabla}\cdot(w\kappa\bm{\nabla}T) -
\int_{\Omega}\kappa\,\bm{\nabla}w \cdot \bm{\nabla}T + \int_{\Omega} w \,q \\
&=\int_{\partial\Omega}w\kappa\,\partial_{\bm{n}}T -\int_{\Omega}\kappa\,\bm{\nabla}w \cdot \bm{\nabla}T + \int_{\Omega} w \,q.
\end{align}
FVM Discretisation: Piecewise Constant Test Functions
Now consider a finite-dimensional space of test functions consisting of piecewise-constant functions on a partition of $\Omega$ into the control volumes $\{K_0,\ldots,K_{m+1}\}$:
\begin{equation}
W = \left\{w(x) = \sum_{j=0}^{m+1} \alpha_j \bm{1}_{K_j}(x)\right\}, \, \Omega = \bigcup_{i=0}^{m+1} K_i, \, K_i\ne \emptyset, \, K_i \cap K_j = \emptyset, \, i\ne j.
\end{equation}
A basis of that space is of course given by the indicator functions: $\psi_i(\bm{x}) = \bm{1}_{K_i}(\bm{x})$.
Then instead of integrating over $\Omega$ we may integrate over $K_i$ and set
$w=\psi_i$ which yields:
\begin{equation}
\int_{K_i}\rho \partial_tT = \int_{\partial K_i}\kappa\, \partial_{\bm{n}} T + \int_{K_i} q,
\end{equation}
where the term $\int_{K_i} \kappa \, \bm{\nabla} w \cdot \bm{\nabla} T = 0$ since
$\bm{\nabla} \psi_i = 0$ inside $K_i$.
Partition in 1-D
In your 1-D setting let the faces be given as $0=f_0<f_1<\ldots<f_{m+1}<f_{m+2}=1$, i.e.\
$K_i = (f_i,f_{i+1})$, and let the centers $(x_i)_{i=0}^{m+1}$ be such that $x_i\in \overline{K_i}$, where $\overline{K_i} = [f_i,f_{i+1}]$ (note that you should probably avoid
degenerate cases where $x_i=x_{i+1}$). If you don't have any specific knot vector for the
$x_i$ you may set them at the centers of the control volumes: $x_i = \frac{f_i+f_{i+1}}{2}$ for $1\leq i \leq m$.
For the discretisation I will use, the boundary points should be pinned to the domain
endpoints: $x_0 = f_0 = 0$ and $x_{m+1} = f_{m+2} = 1$.
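For concreteness, here is a minimal grid-construction sketch in Python (all names are mine; I take uniformly spaced faces, but any strictly increasing face vector works):
```python
import numpy as np

m = 50                                 # number of interior control volumes K_1, ..., K_m
f = np.linspace(0.0, 1.0, m + 3)       # faces f_0 < f_1 < ... < f_{m+2}

x = np.empty(m + 2)                    # cell centers x_0, ..., x_{m+1}
x[1:m + 1] = 0.5 * (f[1:m + 1] + f[2:m + 2])   # midpoints of the interior cells
x[0] = f[0]                            # x_0 = f_0 = 0
x[m + 1] = f[m + 2]                    # x_{m+1} = f_{m+2} = 1
```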
Flux Discretisation: Continuous Piecewise Linear Trial Functions
We would like to approximate the flux terms $\kappa(f_i)\partial_xT(f_i)$.
I have chosen to approximate the solution $T$ with a linear function on every interval $[x_i,x_{i+1}]$ and to require that it is continuous on $\Omega$. Note that
this choice is consistent with the discretisation you get from finite differences
on an irregular grid, the one you get from the finite element method with linear
elements, and the one you get from the finite volume method with the two-point flux approximation scheme.
Our solution resides in the space of continuous piecewise linear functions
(i.e.\ polygonal chains) on the knot vector $(x_0,\ldots,x_{m+1})$:
\begin{equation}
V = \left\{v(\bm{x}) = \sum_{j=0}^{m+1}\alpha_j \phi_j(\bm{x})\right\},
\end{equation}
where the basis functions $\phi_i$ are the hat functions:
\begin{equation}
\phi_i(x) =
\begin{cases}
\frac{x-x_{i-1}}{x_i-x_{i-1}}, &\text{for } x \in [x_{i-1},x_{i}], \\
\frac{x-x_{i+1}}{x_{i}-x_{i+1}}, &\text{for } x \in (x_{i},x_{i+1}], \\
0, &\text{otherwise}.
\end{cases}
\end{equation}
(For $i=0$ and $i=m+1$ only the half of the hat that lies inside $\Omega$ is kept.)
Since the functions are linear and $x_{i-1}\leq f_i \leq x_{i}$ we have $\partial_{x}T(f_i) = \frac{T(x_{i})-T(x_{i-1})}{x_{i}-x_{i-1}}=(T_{i} - T_{i-1}) \Delta_{i-1}$ for $1\leq i \leq m+1$, where $T_i := T(t,x_i)$ and $\Delta_{i} := (x_{i+1}-x_{i})^{-1}$.
You mention that $\kappa$ is discontinuous at faces. One option would be to take:
\begin{equation}
\kappa_{i-1/2} = \frac{\lim_{h\to 0^+}\kappa(f_i+h) + \lim_{h\to 0^+}\kappa(f_i-h)}{2}.
\end{equation}
If $\kappa$ is a constant function on the volumes then that would be just the average between
$\kappa$ on $K_{i-1}$ and on $K_{i}$.
If you want a more formal treatment of this then you should probably look into discontinuous Galerkin methods.
If you assume instead that $\kappa$ varies linearly between the control volume centers,
you can take the linearly interpolated value:
\begin{equation}
\kappa_{i-1/2} :=\kappa(f_i) \approx \frac{x_i-f_i}{x_{i}-x_{i-1}}\kappa(x_{i-1})
+ \frac{f_i-x_{i-1}}{x_{i}-x_{i-1}}\kappa(x_{i}).
\end{equation}
I don't know which you should take; you could try both.
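Given the grid arrays from the sketch above, both options are one line each; kappa_cell (per-volume values of $\kappa$) and kappa_center (its values at the $x_i$) are my assumed inputs:
```python
import numpy as np

idx = np.arange(1, m + 2)        # interior faces f_1, ..., f_{m+1}

# option 1: arithmetic average of the one-sided limits (per-volume values)
kappa_face_avg = 0.5 * (kappa_cell[idx - 1] + kappa_cell[idx])

# option 2: linear interpolation between the cell-center values kappa(x_i)
w = (f[idx] - x[idx - 1]) / (x[idx] - x[idx - 1])
kappa_face_lin = (1.0 - w) * kappa_center[idx - 1] + w * kappa_center[idx]

kappa_face = kappa_face_avg      # pick one of the two; you could try both
```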
Control Volume Integral Discretisation
The simplest discretisation of the control volume integrals is obtained by
assuming that the functions $\rho, q, T$ are constant on the cells $K_i$; then:
\begin{equation}
\int_{K_i}\rho\partial_t T \approx |K_i| \rho(x_i) \partial_t T(x_i), \quad
\int_{K_i} q \approx |K_i| q(x_i), \quad |K_i| = f_{i+1}-f_i.
\end{equation}
As a "better" alternative you may replace $\rho(x_i)$ with
$\rho_{avg,i} = \int_{K_i}\rho / |K_i|$, and similarly for $q(x_i)$.
You may rightfully object that we just claimed that $T$ belongs
to a space of continuous piecewise linear functions. In fact, this mix
of approximations is sometimes called the finite volume element method
(also the box method) for that reason.
In the finite element literature the resulting matrix is called a lumped mass matrix;
it avoids having to solve an additional linear system, since the mass matrix is
diagonal in that case.
Of course, if you wish you may discretise $\partial_t T$ as a linear function
on $K_i$ and integrate; then the result would also depend on the two
neighbours.
Space-discrete Formulation
Substituting our discretisation in the weak formulation projected onto the space
of piecewise constant test functions yields:
\begin{align}
\int_{K_i}\rho \partial_tT &= \int_{\partial K_i}\kappa\, \partial_{\bm{n}} T + \int_{K_i} q, \\
\rho_{avg,i} |K_i|d_tT_i &= \kappa_{i+1/2}(T_{i+1}-T_i)\Delta_i - \kappa_{i-1/2}(T_i-T_{i-1})\Delta_{i-1} + |K_i| q_{avg,i}, \,\, 1\leq i \leq m.
\end{align}
Note that this produced $m$ equations for the volumes $K_1,\ldots,K_m$.
We also need to handle the boundary conditions and boundary volumes (here $\kappa_0 := \kappa(0)$, $\kappa_{m+1} := \kappa(1)$, and $\eta_0$, $\eta_{m+1}$ denote the Robin coefficients at the two ends):
\begin{align}
\eta T(t,\bm{x}) + \kappa(\bm{x})\partial_{\bm{n}}T(t,\bm{x}) &= \eta T_{amb}, \quad \bm{x}\in\partial\Omega, \,\, t\in(0,\infty), \\
\eta_0 T_0 - \kappa_0 (T_1-T_0)\Delta_0 &= \eta_0 T_{amb}, \\
\eta_{m+1} T_{m+1} + \kappa_{m+1} (T_{m+1}-T_m)\Delta_m &= \eta_{m+1} T_{amb}.
\end{align}
You can use the above to solve for $T_0$ and $T_{m+1}$ as:
\begin{align}
T_0 = \frac{\eta_0 T_{amb} + \kappa_{0}\Delta_0 T_1}{\eta_0+\kappa_0\Delta_0}, \quad
T_{m+1} = \frac{\eta_{m+1}T_{amb} + \kappa_{m+1}\Delta_m T_{m}}{\eta_{m+1}+\kappa_{m+1}\Delta_m}.
\end{align}
You may then substitute this solution for $T_0$ in the equation for $d_tT_1$,
and $T_{m+1}$ in the equation for $d_tT_m$, which would eliminate the variables
and you would be able to work with an $m\times m$ matrix. Then at the end you
can compute $T_0$ and $T_{m+1}$ from the solution for $T_1$ and $T_m$.
The other option is of course to keep these two rows, then you would have an
$(m+2)\times (m+2)$ matrix. In either case we may write the system in matrix-vector
form as:
\begin{equation}
\bm{M}_{\bm{\rho}}d_t\bm{T}(t) = -\bm{W}_{\bm{\kappa}}\bm{T}(t) + \bm{q},\quad \bm{T}(0) = \bm{T}_0.
\end{equation}
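Here is a minimal assembly sketch for the eliminated $m\times m$ system, building on the arrays from the earlier sketches (eta0, eta1 and kappa0, kappa1 denote the Robin data $\eta$, $\kappa$ at $x=0$ and $x=1$, rho_avg and q_avg the per-cell averages over all cells $K_0,\ldots,K_{m+1}$; all names are mine):
```python
import numpy as np
import scipy.sparse as sp

Delta = 1.0 / np.diff(x)                     # Delta_i = 1/(x_{i+1} - x_i), i = 0..m
vol = f[2:m + 2] - f[1:m + 1]                # |K_i| = f_{i+1} - f_i, i = 1..m

# effective Robin flux coefficients after eliminating T_0 and T_{m+1}
alpha_L = kappa_face[0] * Delta[0] * eta0 / (eta0 + kappa0 * Delta[0])
alpha_R = kappa_face[m] * Delta[m] * eta1 / (eta1 + kappa1 * Delta[m])

west = kappa_face[0:m] * Delta[0:m]          # kappa_{i-1/2} Delta_{i-1}, i = 1..m
east = kappa_face[1:m + 1] * Delta[1:m + 1]  # kappa_{i+1/2} Delta_i,     i = 1..m
diag = west + east
diag[0] = alpha_L + east[0]                  # boundary rows get the effective coefficients
diag[-1] = west[-1] + alpha_R

W = sp.diags([-west[1:], diag, -east[:-1]], offsets=[-1, 0, 1], format="csr")
M = sp.diags(rho_avg[1:m + 1] * vol)         # lumped (diagonal) mass matrix
q_vec = q_avg[1:m + 1] * vol
q_vec[0] += alpha_L * T_amb                  # eliminated boundary values enter the source
q_vec[-1] += alpha_R * T_amb

# after solving for T_1..T_m, the boundary values are recovered as
# T_0     = (eta0 * T_amb + kappa0 * Delta[0] * T[0])  / (eta0 + kappa0 * Delta[0])
# T_{m+1} = (eta1 * T_amb + kappa1 * Delta[m] * T[-1]) / (eta1 + kappa1 * Delta[m])
```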
Time-continuous Solution
If we assume that $\rho_{avg,i}$ and $|K_i|$ are non-zero for all $i$, then the
matrix $\bm{M}_{\bm{\rho}}$ is invertible, and we may write:
\begin{align}
d_t\bm{T} &= -\bm{M}_{\bm{\rho}}^{-1}\bm{W}_{\bm{\kappa}}\bm{T}(t) + \bm{M}_{\bm{\rho}}^{-1}\bm{q} =
-\bm{L}\bm{T}(t) + \bm{b},\\
\bm{T}(t) &= \bm{\exp}(-t\bm{L})\bm{T}_0
+ \left(\int_{0}^t \bm{\exp}((s-t)\bm{L})\,ds\right)\bm{b}, \\
&= \bm{\exp}(-t\bm{L})\bm{T}_0
+ \left(\sum_{k=0}^{\infty} \frac{t(-t\bm{L})^k}{(k+1)!}\right)\bm{b}, \\
&= \bm{\exp}(-t\bm{L})\bm{T}_0 +
\biggl(\bm{L}^{\#} - \bm{\exp}(-t\bm{L})\bm{L}^{\#}+t(\bm{I}-\bm{L}\bm{L}^{\#})\biggr)\bm{b},
\end{align}
where $\bm{L}^{\#}$ is the Drazin inverse of $\bm{L}$. This solution doesn't require
a discretisation in time, but it does require computing matrix exponentials
and the Drazin inverse.
While both are easy if you have the eigendecomposition $\bm{L} = \bm{P}\bm{\Lambda}\bm{P}^{-1}$, it is not so easy otherwise.
However, both explicit and implicit Euler approximate this
solution as the step size goes to zero.
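If you do have the eigendecomposition, a sketch of the exact-in-time evaluation could look as follows (assuming $\eta>0$ at both ends, so that $\bm{L}$ is invertible and the Drazin inverse reduces to the ordinary inverse, and that $\bm{L}$ is diagonalisable; M, W, q_vec come from the assembly sketch above):
```python
import numpy as np

L = W.toarray() / M.diagonal()[:, None]   # L = M^{-1} W (M is diagonal/lumped)
b = q_vec / M.diagonal()                  # b = M^{-1} q
lam, P = np.linalg.eig(L)                 # L = P diag(lam) P^{-1}
Pinv = np.linalg.inv(P)

def T_exact(t, T0):
    """T(t) = exp(-t L) T0 + L^{-1} (I - exp(-t L)) b."""
    E = (P * np.exp(-t * lam)) @ Pinv     # exp(-t L) from the eigendecomposition
    return (E @ T0 + np.linalg.solve(L, b - E @ b)).real
```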
Series Solution
One option is to directly use one of the solutions from earlier and write the
exponential as a series:
\begin{align}
\bm{T}(t) &= \bm{\exp}(-t\bm{L})\bm{T}_0
+ \left(\sum_{k=0}^{\infty} \frac{t(-t\bm{L})^k}{(k+1)!}\right)\bm{b} \\
&= \left(\sum_{k=0}^{\infty}\frac{(-t\bm{L})^k}{k!}\right) \bm{T}_0 +
\left(\sum_{k=0}^{\infty} \frac{t}{k+1}\frac{(-t\bm{L})^k}{k!}\right)\bm{b} \\
&= \sum_{k=0}^{\infty}\frac{(-t\bm{L})^k}{k!} \left(\bm{T}_0 + \frac{t}{k+1}\bm{b}\right).
\end{align}
I can't tell you in advance when you can truncate this series, but it should give you an
approximation of the solution provided you keep enough terms. You don't have to
compute powers of matrices either: you can just perform two matrix-vector multiplications
per term to update running product variables. If you can compute the Drazin inverse applied to $\bm{b}$
(e.g.\ using some iterative method) then you can reduce that to one matrix-vector
multiplication per term:
\begin{align}
\bm{T}(t) &= \bm{\exp}(-t\bm{L})\bm{T}_0 +
\biggl(\bm{L}^{\#} - \bm{\exp}(-t\bm{L})\bm{L}^{\#}+t(\bm{I}-\bm{L}\bm{L}^{\#})\biggr)\bm{b}
\\
&= \sum_{k=0}^{\infty}\frac{(-t\bm{L})^k}{k!}\left(\bm{T}_0-\bm{L}^{\#}\bm{b}\right) +
\bm{L}^{\#}\bm{b} + t\left(\bm{I}-\bm{L}\bm{L}^{\#}\right)\bm{b}.
\end{align}
I believe that Matlab uses a Padé approximation of the exponential for faster
series convergence.
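As a sketch, the running-product evaluation of the first series could look like this (the truncation test is a heuristic of mine, not a rigorous bound; L, b, T0 are the arrays from the sketches above):
```python
import numpy as np

def T_series(t, T0, L, b, max_terms=500, tol=1e-12):
    u = T0.astype(float).copy()        # u_k = (-t L)^k / k! T0
    v = b.astype(float).copy()         # v_k = (-t L)^k / k! b
    total = u + t * v                  # k = 0 term: T0 + t b
    for k in range(1, max_terms):
        u = (-t / k) * (L @ u)
        v = (-t / k) * (L @ v)
        term = u + (t / (k + 1)) * v
        total = total + term
        if np.linalg.norm(term) <= tol * np.linalg.norm(total):
            break                      # heuristic stopping criterion
    return total
```
Beware that for large $t$ the terms can grow before they decay, so the plain series loses accuracy to cancellation; that is one reason implementations prefer scaling-and-squaring with Padé.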
Explicit Euler
Explicit Euler can be formulated as taking forward differences in time for $d_t\bm{T}$,
and taking the right-hand side at the old time step:
\begin{equation}
\frac{\bm{T}^{k+1}-\bm{T}^k}{\tau} = -\bm{L}\bm{T}^k + \bm{b} \implies
\bm{T}^{k+1} = (\bm{I}-\tau\bm{L})\bm{T}^k + \tau\bm{b}.
\end{equation}
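A sketch of the update loop, reusing L and b from above (T0 being the vector of initial values at the interior cell centers; the step size uses the $2/\lambda_{\max}$ bound derived below, which is only a heuristic when $\bm{L}$ is non-symmetric):
```python
import numpy as np

tau = 1.0 / np.abs(np.linalg.eigvals(L)).max()  # safely inside (0, 2/lambda_max)
T = T0.copy()
for _ in range(1000):
    T = T - tau * (L @ T) + tau * b             # T^{k+1} = (I - tau L) T^k + tau b
```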
Let the steady state solution be $\bm{T}^*$ (this is the solution at time infinity);
then the error is given as
\begin{align}
\bm{\epsilon}^{k+1} &= \bm{T}^{k+1}-\bm{T}^* = (\bm{I}-\tau\bm{L})\bm{T}^k + \tau \bm{b} - \left((\bm{I}-\tau\bm{L})\bm{T}^* + \tau\bm{b}\right) \\
&= (\bm{I}-\tau\bm{L})(\bm{T}^k-\bm{T}^*) = (\bm{I}-\tau\bm{L})\bm{\epsilon}^k \\
&= (\bm{I}-\tau\bm{L})^{k+1}\bm{\epsilon}^0 = (\bm{I}-\tau\bm{L})^{k+1}(\bm{T}_0-\bm{T}^*).
\end{align}
Here we used that the steady state satisfies $\bm{L}\bm{T}^* = \bm{b}$, so it is a fixed point of the iteration.
Since we want the error to go to zero, i.e. $\lim_{k\to\infty}\|\bm{\epsilon}^k\| = \lim_{k\to\infty}\|(\bm{I}-\tau\bm{L})^k\bm{\epsilon}^0\| = 0$ for some norm $\|\cdot\|$,
and knowing that $\|\bm{\epsilon}^{k+1}\| = \|(\bm{I}-\tau\bm{L})\bm{\epsilon}^k\|\leq \|\bm{I}-\tau\bm{L}\|\,\|\bm{\epsilon}^k\|$,
where the matrix norm is the one induced by the vector norm, a sufficient condition is
$\|\bm{I}-\tau\bm{L}\|<1$. For the $2$-norm and a symmetric positive
definite matrix $\bm{L}$ the condition becomes $|1-\tau\lambda_i|<1$ for every eigenvalue $\lambda_i$,
which yields $\tau\in (0, 2/\lambda_{\max})$.
This is typically where your step size constraint comes from.
However, in your case you have a non-symmetric matrix, so the $2$-norm is given by the
largest singular value of $\bm{I}-\tau\bm{L}$, and the constraint (on the squared norm) reads:
\begin{equation}
\|(\bm{I}-\tau\bm{L})^T(\bm{I}-\tau\bm{L})\| = \|\bm{I}-\tau(\bm{L}^T+\bm{L}) + \tau^2 \bm{L}^T\bm{L}\|<1.
\end{equation}
So I am not even sure you can guarantee there always exists a step size that works
for non-symmetric matrices. It is for instance clear that this method doesn't
work even for symmetric indefinite matrices, because you can never find a step
size that would decrease the error for all possible $\bm{e}^k$, since
for negative eigenvalues you get $1-\tau \lambda_i < 1 \implies \tau<0$, while for positive eigenvalues you have $\tau>0$, i.e. there is no such $\tau$. Note that explicit Euler
is equivalent to the Richardson iteration (also to Jacobi if you divide by the diagonal).
Implicit Euler
Implicit Euler uses the same difference quotient for the time derivative but takes
the right-hand side at the new time step:
\begin{equation}
\frac{\bm{T}^{k+1}-\bm{T}^k}{\tau} = -\bm{L}\bm{T}^{k+1} + \bm{b} \implies
(\bm{I}+\tau\bm{L})\bm{T}^{k+1} = \bm{T}^k+\tau\bm{b}.
\end{equation}
Note that you have to solve a linear system for each step of the above; in your
case this system is non-symmetric, so you should use BiCGStab, GMRES, CGNE, or
CGNR/LSQR for it. Since your system is tridiagonal you can also use the
Thomas algorithm directly.
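A sketch that factorises the tridiagonal matrix once and reuses it every step; I work with the equivalent $\bm{M}$-weighted form $(\bm{M}+\tau\bm{W})\bm{T}^{k+1} = \bm{M}\bm{T}^k + \tau\bm{q}$, which avoids forming $\bm{M}^{-1}$ (splu plays the role of the Thomas algorithm here):
```python
from scipy.sparse.linalg import splu

tau = 1e-2                                # any tau > 0 if L behaves like the SPSD case
lu = splu((M + tau * W).tocsc())          # tridiagonal LU, O(m) work per solve
T = T0.copy()
for _ in range(1000):
    T = lu.solve(M @ T + tau * q_vec)
```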
If $\bm{L}$ is symmetric positive semi-definite then this is unconditionally stable, i.e. you can take any $\tau > 0$. For the indefinite symmetric case implicit Euler has the constraint $\tau > \frac{2}{-\lambda_{neg,\min}}$, where $\lambda_{neg,\min}$ is the negative eigenvalue closest to zero, so you can also construct examples where it is unstable. That is, implicit Euler can actually become unstable because you took a step that is too small, if your matrix is symmetric and has negative eigenvalues. This makes sense because the iteration matrix is $(\bm{I}+\tau\bm{L})^{-1}$, so its eigenvalues are the reciprocals $(1+\tau\lambda_i)^{-1}$, and you get the condition $|(1+\tau\lambda_i)^{-1}|<1$.

For non-symmetric matrices you again need to consider the $2$-norm of the iteration matrix, and then you get
\begin{equation}
\|(\bm{I}+\tau\bm{L})^{-T}(\bm{I}+\tau\bm{L})^{-1}\| = \|(\bm{I}+\tau(\bm{L}^T+\bm{L}) + \tau^2\bm{L}^T\bm{L})^{-1}\|<1.
\end{equation}
Crank-Nicolson
The Crank-Nicolson discretisation has the form:
\begin{equation}
\begin{aligned}
\bm{T}^{k+1}-\bm{T}^k &= -\frac{\tau}{2}\bm{L}\bm{T}^k - \frac{\tau}{2}\bm{L}\bm{T}^{k+1} + \tau\bm{b}, \\
(\bm{I}+0.5\tau\bm{L})\bm{T}^{k+1} &= (\bm{I}-0.5\tau\bm{L})\bm{T}^k + \tau\bm{b}, \\
\bm{T}^{k+1} &= (\bm{I}+0.5\tau\bm{L})^{-1}(\bm{I}-0.5\tau\bm{L})\bm{T}^k + \tau(\bm{I}+0.5\tau\bm{L})^{-1}\bm{b}.
\end{aligned}
\end{equation}
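The same pattern gives a Crank-Nicolson sketch in the $\bm{M}$-weighted form $(\bm{M}+\tfrac{\tau}{2}\bm{W})\bm{T}^{k+1} = (\bm{M}-\tfrac{\tau}{2}\bm{W})\bm{T}^k + \tau\bm{q}$:
```python
from scipy.sparse.linalg import splu

tau = 1e-2
lu = splu((M + 0.5 * tau * W).tocsc())    # factor once, reuse every step
B = (M - 0.5 * tau * W).tocsr()
T = T0.copy()
for _ in range(1000):
    T = lu.solve(B @ T + tau * q_vec)
```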
Once again you have to solve a linear system of equations to get the solution. The supposed benefit is that your error decays as $O(\tau^2)$ in
time. For symmetric positive semi-definite matrices $\bm{L}$ this is again
unconditionally stable for $\tau>0$ since we have:
\begin{equation}
\|(\bm{I}+0.5\tau\bm{L})^{-1}(\bm{I}-0.5\tau\bm{L})\|<1 \iff \left|\frac{1-0.5\tau \lambda_i}{1+0.5\tau\lambda_i}\right| < 1.
\end{equation}
The latter holds for any $\tau>0$, but it breaks if you have even one zero or negative eigenvalue. The zero case may be admissible, however, as we may not want to damp the error component corresponding to a zero eigenvector (e.g. if we wish to preserve the average grey value). Once again, for non-symmetric matrices you have the condition:
\begin{equation}
\|(\bm{I}-0.5\tau\bm{L})^T(\bm{I}+0.5\tau\bm{L})^{-T}(\bm{I}+0.5\tau\bm{L})^{-1}(\bm{I}-0.5\tau\bm{L})\|<1,
\end{equation}
which is again hard to reason about. Note that Crank-Nicolson benefits you if you want to take small time steps because of the $O(\tau^2)$ error decay. However, if you plan to take large time steps, e.g. larger than the bound for explicit Euler, then you may get nasty manifestations of the error in the solution. From Wikipedia:
However, the approximate solutions can still contain (decaying)
spurious oscillations if the ratio of time step $\Delta t$ times the
thermal diffusivity to the square of space step, $(\Delta x)^2$, is
large (typically, larger than $1/2$ per Von Neumann stability
analysis).
A Warning
Note that I haven't tested any of the above nor have I verified it extensively, so there may be typos and mistakes. Also I am not sure whether I understood correctly the formulation that you were going for.