
I am applying a 6th-order finite-difference differentiation scheme, as described at http://www.scholarpedia.org/article/Method_of_lines/example_implementation/dss006

There seem to be severe numerical/floating-point issues with this scheme. The result is very sensitive to the values in the vector that the FD matrix multiplies.

For example, the gradient of a 10x1 vector with identical values should be exactly zero. Yet applying the following 10x10 first-derivative matrix

   -1764        4320       -5400        4800       -2700         864        -120           0           0           0
    -120        -924        1800       -1200         600        -180          24           0           0           0
      24        -288        -420         960        -360          96         -12           0           0           0
     -12         108        -540           0         540        -108          12           0           0           0
       0         -12         108        -540           0         540        -108          12           0           0
       0           0         -12         108        -540           0         540        -108          12           0
       0           0           0         -12         108        -540           0         540        -108          12
       0           0           0          12         -96         360        -960         420         288         -24
       0           0           0         -24         180        -600        1200       -1800         924         120
       0           0           0         120        -864        2700       -4800        5400       -4320        1764

to a 10x1 vector 25545.007*ones(10,1)

results in [6.1118e-10, -2.3574e-09, -1.266e-09, -3.638e-10, -3.638e-10, -3.638e-10, -3.638e-10, 2.0082e-09, -6.1118e-10, -9.8225e-09]

whereas the correct answer is a 10x1 zero vector. The result produced by the second-derivative matrix on the same vectors is even worse, with entries on the order of 1e3 instead of zeros!

When I try the same first-derivative FD matrix on a slightly different constant vector (the same value shifted by about 1e-3), it works and produces the right answer (i.e. the zero vector). These results have been double-checked in MATLAB, Python (NumPy) and Mathematica using standard double-precision arithmetic. The exact numerical values differ in each package, but the order of magnitude remains similar to the above results.
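For reference, the mechanism can be reproduced with just the first row of the matrix (a minimal NumPy sketch; the exact residual varies by platform):

import numpy

# First row of the first-derivative matrix above. Its integer entries
# sum to exactly zero, and integer sums incur no rounding in doubles,
# but each product row[j]*x[j] is rounded before the terms cancel.
row = numpy.array([-1764, 4320, -5400, 4800, -2700, 864, -120, 0, 0, 0],
                  dtype=float)
x = 25545.007 * numpy.ones(10)

print(row.sum())   # exactly 0.0
print(row.dot(x))  # a small nonzero residue instead of 0.0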

Why is the scheme so sensitive to a vector that differs only by 1e-3? I need high accuracy, and numerical issues like these make my solution values unreliable in downstream computations and undermine confidence in the overall solution.

Is there a better way to implement this? Perhaps there is an alternative scheme recommended for such scenarios? I am unable to introduce variable-precision arithmetic libraries, due to the project's complexity and their inherent slowness.

  • Are you using a finite difference scheme to solve a differential equation or just to calculate derivatives?
    – KyleW
    Commented Mar 8, 2017 at 14:08
  • Just to calculate the derivatives. This is a coupled system of equations, wherein the solution is computed using IDA, which drives the residual of the system down to zero. Commented Mar 8, 2017 at 15:46
  • If you're looking to compute the derivative of a known function, you might have a look at automatic differentiation.
    – KyleW
    Commented Mar 8, 2017 at 16:23
  • Yes, this is a rank-9 matrix. Just curious: why is this a problem? I am not trying to compute the inverse or solve a linear system. Maybe my linear algebra is a little rusty here, but do rank-deficient matrices create numerical errors when multiplied with vectors? If the matrix-vector product is computed on paper, I get exact zeros in the solution. Commented Mar 8, 2017 at 19:56
  • What you are trying to do is called numerical differentiation, and it is usually considered an ill-posed problem. For example, consider the three-point formula for computing the second derivative. If there is an error in your data, you will get an error component of about $h^2$ from the exact part of the data and $\varepsilon h^{-2}$ from the data perturbation, where $\varepsilon$ is the size of the perturbation. (A worked estimate of this trade-off follows the comment list.)
    – VorKir
    Commented Mar 9, 2017 at 0:53
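
To make the trade-off in the last comment concrete, here is the standard back-of-the-envelope estimate (a sketch; $C$ denotes the constant in the truncation term). The total error behaves like
$$ E(h) \approx C h^2 + \frac{\varepsilon}{h^2}, $$
which is minimized at
$$ h^* = \left(\frac{\varepsilon}{C}\right)^{1/4}, \qquad E(h^*) = 2\sqrt{C\varepsilon}, $$
so even the optimal step size leaves an error proportional to $\sqrt{\varepsilon}$; data noise is amplified, which is what makes numerical differentiation ill-posed.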

2 Answers


If I understood you correctly, you are using the IDA solver from the LLNL SUNDIALS suite.

If you have access to the source code that computes the residual of your DAE system, you may then consider automatic differentiation and IDAS, which is basically IDA with sensitivity analysis capabilities, both forward and adjoint.

There is a user's manual and a comprehensive list of examples.
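
As a toy illustration of the automatic-differentiation idea (a minimal pure-Python forward-mode sketch; the Dual class and sin wrapper are hypothetical helpers, not part of SUNDIALS):

import math

class Dual:
    # Carries a value and its derivative through arithmetic (forward mode).
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    # Chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

x = Dual(1.0, 1.0)        # seed dx/dx = 1
y = x * sin(x) + 3 * x    # y(x) = x sin(x) + 3x
print(y.val, y.der)       # value and exact derivative

Unlike finite differences, there is no step size $h$ and no subtractive cancellation, which is why AD sidesteps the sensitivity described in the question.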

If you want to stick with numerical differentiation, I think this na.stackexchange post is a good starting point.


This could be just a roundoff-error issue. Each row of a differentiation matrix must sum to zero, so a constant vector should map exactly to zero; in floating point, however, each product $D_{ij} u_j$ is rounded before the terms cancel. Subtracting the mean value of the vector first gives better results:

import numpy

# A is the 10x10 first-derivative matrix from the question
x = 25545.007 * numpy.ones((10, 1))
A.dot(x)
array([[  0.00000000e+00],
       [ -7.45058060e-09],
       [  0.00000000e+00],
       [ -1.01863407e-10],
       [ -1.01863407e-10],
       [ -8.14907253e-10],
       [ -9.31322575e-10],
       [ -3.72529030e-09],
       [  3.72529030e-09],
       [  1.49011612e-08]])
xa = x.mean()  # mean of the data
y = x - xa     # differentiate the fluctuation about the mean instead
A.dot(y)
array([[ 0.],
       [ 0.],
       [ 0.],
       [ 0.],
       [ 0.],
       [ 0.],
       [ 0.],
       [ 0.],
       [ 0.],
       [ 0.]])

Another option is the following. If the derivative is given by $$ v = Du, \qquad v_i = \sum_{j=1}^N D_{ij} u_j $$ then compute it as $$ v_i = \sum_{j=1}^N D_{ij} (u_j - u_i), \qquad 1 \le i \le N. $$ Since each row of $D$ sums to zero, this is mathematically identical, and I expect it to reduce round-off errors.
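A minimal NumPy sketch of this shifted product (shifted_matvec, D and u are illustrative names, not from any library):

import numpy

def shifted_matvec(D, u):
    # v_i = sum_j D_ij * (u_j - u_i). When every row of D sums to zero
    # this equals (D u)_i exactly, but the subtraction happens before the
    # multiplication, so a constant u yields exact zeros.
    u = numpy.ravel(u)
    return numpy.array([D[i, :].dot(u - u[i]) for i in range(u.size)])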

  • I am not fully convinced this is true for all scenarios. Here, the mean of the 10x1 constant vector is 25545.007. When you subtract this from every element, the result just becomes a null vector, i.e. your 'y' in the code is a null vector, and you are multiplying a matrix by a null vector, which is guaranteed to give the correct result. It remains to be seen whether subtracting out the mean will indeed improve accuracy for all possible input vectors. Commented Mar 11, 2017 at 18:13
  • When you compute the mean of a constant vector, there will be round-off errors, so u - mean(u) is not exactly zero on the computer.
    – cfdlab
    Commented Mar 12, 2017 at 5:14
  • @Krishna, I have done some numerical study for this example (a constant) and a sine function of similar scale, published as an answer to another question regarding mean subtraction: scicomp.stackexchange.com/questions/26371/…
    – Anton Menshov
    Commented Apr 15, 2017 at 4:35
