Related to: What are the negatives of using higher order finite difference schemes?
Problem: I have some discrete data of a trajectory $x_t$ with errors $\delta x_t$ of a physical system, sampled at equal time intervals $\Delta t$, and I want to compute the derivative of the trajectory, i.e., the velocity $v_t$.
I know I can differentiate it numerically using a finite difference method. From the formulas it may seem that a higher-order method is more accurate, since its theoretical (truncation) error is smaller. However, I believe the derivative of a function at a given point should depend only on a local neighborhood of that point. If a higher-order finite difference scheme takes more points that lie farther away, I suspect we would be increasing the error of the derivative.
Questions: Is this reasoning correct? If so, how can one balance these two effects to find the best scheme in a particular situation?
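To make the worry concrete, here is a quick numerical sketch (the test function, noise level, and step are made up for illustration): it samples a noisy sine and compares the 2nd- and 4th-order central differences against the exact derivative. In this noise-dominated regime the wider 4th-order stencil actually does worse:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0, 2 * np.pi, dt)
sigma = 1e-3                                   # assumed measurement noise
x = np.sin(t) + rng.normal(0, sigma, t.size)   # noisy trajectory samples
v_true = np.cos(t)                             # exact derivative

# 2nd-order central difference: (x[i+1] - x[i-1]) / (2 dt)
v2 = (x[2:] - x[:-2]) / (2 * dt)
# 4th-order central difference: (-x[i+2] + 8 x[i+1] - 8 x[i-1] + x[i-2]) / (12 dt)
v4 = (-x[4:] + 8 * x[3:-1] - 8 * x[1:-3] + x[:-4]) / (12 * dt)

err2 = np.sqrt(np.mean((v2 - v_true[1:-1]) ** 2))
err4 = np.sqrt(np.mean((v4 - v_true[2:-2]) ** 2))
print(f"RMS error, 2nd order: {err2:.2e}")
print(f"RMS error, 4th order: {err4:.2e}")   # larger: noise dominates truncation
```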
Based on Superbee's answer, I have thought of a way to approach the problem:
1. Compare the sampling separation $\Delta t$ with the variations in your data $\Delta x$. To justify a higher-order method, the sampling in time should be much finer than the variations in the data. However, since $t$ and $x$ carry different units, the comparison should be made dimensionless. A rough way is to normalize each against a characteristic scale, $T$ for $t$ and $L$ for $x$, which could be defined as averages of the respective variables. The requirement for higher-order methods then reads $$ \frac{\Delta t}{T} \ll \frac{\Delta x}{L}. $$ If we doubt this is fulfilled, we should prefer a lower order.
2. Leaving Runge's phenomenon aside, if condition 1 is fulfilled we could use higher-order methods without a problem. However, there would be no benefit in a higher-order method if the theoretical error $\varepsilon$ of the finite difference scheme is already smaller than the error $\delta$ coming from the data:
   - $\varepsilon$ typically depends on a higher derivative of the function (which we could estimate numerically) and a given power of $\Delta t$.
   - $\delta$ comes from propagating the measurement errors $\delta x_t$ through the finite difference formula, via the standard error-propagation rule: $$ \delta y = \sqrt{\sum_i \left( \frac{\partial y}{\partial x_i}\, \delta x_i \right)^2}. $$
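As a sketch of this propagation (the stencil weights are standard, the noise level and step are illustrative): for a stencil $v = \frac{1}{\Delta t}\sum_k c_k\, x_k$ with independent errors $\delta x$ on each sample, the formula above reduces to $\delta v = \frac{\delta x}{\Delta t}\sqrt{\sum_k c_k^2}$:

```python
import numpy as np

def propagated_noise(coeffs, dx, dt):
    """Noise on the derivative from error propagation, assuming
    independent errors dx on each sample: dv = dx/dt * sqrt(sum c_k^2)."""
    c = np.asarray(coeffs, dtype=float)
    return dx / dt * np.sqrt(np.sum(c ** 2))

dx, dt = 1e-3, 0.01                        # assumed noise level and step
c2 = [-1/2, 0, 1/2]                        # 2nd-order central weights
c4 = [1/12, -8/12, 0, 8/12, -1/12]         # 4th-order central weights
print(propagated_noise(c2, dx, dt))        # ~7.07e-2
print(propagated_noise(c4, dx, dt))        # ~9.50e-2, wider stencil amplifies noise
```

Note that the noise contribution scales as $1/\Delta t$ for every stencil, so refining the grid makes the noise term worse, not better.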
From these two errors we could also estimate the actual error of the computed derivative.
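Combining the two contributions, a rough total-error estimate for the two standard central differences could look like the following sketch (the noise level, step, and derivative bounds $M_3 \approx |f'''|$, $M_5 \approx |f^{(5)}|$ are made up; the truncation terms $\frac{\Delta t^2}{6}|f'''|$ and $\frac{\Delta t^4}{30}|f^{(5)}|$ are the standard ones):

```python
import numpy as np

dt, dx = 0.01, 1e-3    # sampling step and assumed noise level
M3 = M5 = 1.0          # rough bounds on |f'''| and |f^(5)| (assumed)

# 2nd-order central: truncation M3*dt^2/6, noise dx*sqrt(2)/(2*dt)
e2 = M3 * dt**2 / 6 + dx * np.sqrt(2) / (2 * dt)
# 4th-order central: truncation M5*dt^4/30, noise dx*sqrt(130)/(12*dt)
e4 = M5 * dt**4 / 30 + dx * np.sqrt(130) / (12 * dt)

print(f"estimated total error, 2nd order: {e2:.3e}")
print(f"estimated total error, 4th order: {e4:.3e}")
# here noise dominates, so the lower-order scheme wins
```

Whichever scheme minimizes this combined estimate would be the preferred one for the given data.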
Does this approach look fine?