
I had a discussion with a colleague today. He claimed that, for a typical numerical scheme solving a general 1D PDE, a smaller grid spacing increases the roundoff error (because more time steps are performed), while a larger grid spacing increases the truncation error, so the sweet spot is always somewhere in between. This would imply that refining the grid is not always beneficial. I searched around and found the plot at the end of this lecture, as well as this YouTube video, but neither discusses the issue in depth. I have never experienced this problem when working in double precision. Is this perhaps an issue that mostly affects single precision?

  • You seem to understand the issue correctly. You can certainly see it in double precision if you use a high-order method on a problem with a smooth solution and refine the mesh enough. Commented Apr 1 at 5:07
  • this answer of mine is relevant
    – Anton Menshov, Apr 1 at 17:45
  • Round-off increasing the total error below a certain step size is also visible here (near the bottom of the reply).
    – IPribec, Apr 15 at 7:51

1 Answer


I think there are two types of error being conflated here. The first is pure discretization error: if you use an $n$-th order spatial discretization and an $m$-th order temporal discretization, the total error behaves like $\mathcal{O}(\Delta x^n + \Delta t^m)$, so to drive the total error to zero you must take $\Delta x$ and $\Delta t$ to zero jointly, at appropriate relative rates. Refining one while holding the other fixed only brings the error down to the floor set by the unrefined variable. You can see this in, e.g., advection equations with central differences in space: take $\Delta t\to0$ far faster than $\Delta x$ and the resulting solution, however finely resolved in time, still carries the spatial discretization error (for central differences, dispersive wiggles) of the unrefined grid. A small numerical sketch of this plateau follows.
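Here is a minimal sketch of that error floor (my own illustrative example; the `advect` helper, grid, and parameter values are assumptions, not from the original answer). It uses first-order upwind in space rather than central differences, since upwind with forward Euler is stable and has a simple $\mathcal{O}(\Delta x)$ diffusive spatial error. With $\Delta x$ fixed, shrinking $\Delta t$ stops paying off almost immediately:

```python
import numpy as np

def advect(nx, dt, a=1.0, T=0.5):
    """Advect a periodic Gaussian pulse with first-order upwind in space
    and forward Euler in time; return the max-norm error at time ~T."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.exp(-200.0 * (x - 0.3) ** 2)              # initial pulse
    nt = int(round(T / dt))
    for _ in range(nt):
        u = u - a * dt / dx * (u - np.roll(u, 1))    # periodic upwind step
    # Exact solution: the pulse shifted by a*nt*dt, wrapped into [0, 1)
    d = (x - 0.3 - a * nt * dt + 0.5) % 1.0 - 0.5
    return np.max(np.abs(u - np.exp(-200.0 * d ** 2)))

nx = 200  # dx = 0.005 is held fixed, so the O(dx) spatial error sets a floor
for dt in [2e-3, 1e-3, 5e-4, 2.5e-4, 1.25e-4]:
    print(f"dt = {dt:.2e}   max error = {advect(nx, dt):.3e}")
```

The printed error does not converge as $\Delta t$ shrinks; it plateaus (and even creeps up slightly, since the upwind scheme's numerical diffusion coefficient $\frac{a\,\Delta x}{2}\bigl(1 - \frac{a\,\Delta t}{\Delta x}\bigr)$ grows as $\Delta t\to0$), because only the temporal error is being refined.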

The other type of error involves how finite differences interact with finite precision. Consider a smooth function $f(x)$ whose difference is evaluated in finite-precision arithmetic, so that $\mathrm{float}(f(x+\Delta x)-f(x))= f(x+\Delta x) - f(x) + \epsilon$, where $|\epsilon|$ is on the order of machine precision when $f=\mathcal{O}(1)$. We then have $$ \frac{\mathrm{float}(f(x+\Delta x)-f(x))}{\Delta x} = \frac{f(x+\Delta x)-f(x) + \epsilon}{\Delta x} = f'(x) + \tfrac{1}{2}f''(\xi)\,\Delta x + \frac{\epsilon}{\Delta x} $$ for some $\xi \in[x, x+\Delta x]$, by Taylor's theorem with the exact remainder. The total error $\bigl|\tfrac{1}{2}f''(\xi)\Delta x + \frac{\epsilon}{\Delta x}\bigr|$ is the sum of a truncation term that grows with $\Delta x$ and a roundoff term that grows as $\Delta x$ shrinks; it is minimized at $\Delta x = \sqrt{2\epsilon/|f''(\xi)|}$, and this V-shaped curve is exactly what is plotted in the linked lecture. Assuming $f''$ is bounded and $\mathcal{O}(1)$, the optimum is $\Delta x = \mathcal{O}(\epsilon^{1/2})$. In double precision, $\epsilon\approx 10^{-16}$, so the finite difference approximation actually loses accuracy once $\Delta x < 10^{-8}$. In single precision, $\epsilon\approx 10^{-7}$, pushing the optimal spacing all the way up to $\Delta x \approx 10^{-4}$, which is why the effect is far more noticeable there.
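A quick numerical check of this estimate (a sketch; the choice $f=\sin$ at $x=1$ is arbitrary) reproduces the V-shaped curve in double precision: the forward-difference error shrinks like $\Delta x$ down to about $\Delta x \approx 10^{-8} \approx \sqrt{\epsilon}$, then grows again as the $\epsilon/\Delta x$ roundoff term takes over:

```python
import numpy as np

# Forward-difference approximation of (d/dx) sin(x) at x = 1 in double precision.
# Truncation error ~ (1/2)|f''| dx shrinks with dx; roundoff error ~ eps/dx grows.
x = 1.0
for k in range(1, 15):
    dx = 10.0 ** (-k)
    approx = (np.sin(x + dx) - np.sin(x)) / dx
    print(f"dx = {dx:.0e}   error = {abs(approx - np.cos(x)):.3e}")
```

Repeating the experiment in single precision (casting `x` and `dx` to `np.float32`) moves the minimum up to roughly $\Delta x \approx 10^{-4}$, as predicted.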

  • This is exactly what OP already said in his post. Commented Apr 1 at 5:06
  • OP said such topics were not discussed in detail, so I added details and commented on single precision, which is OP's question.
    – whpowell96, Apr 1 at 13:38
