
Background

I've made a long one-shot measurement of a periodic signal with an oscilloscope, capturing around 250 periods of the signal. The signal itself is a pseudo-random binary sequence passed through a low-pass filter. Here is how a tiny portion of the signal looks on the oscilloscope:

[Oscilloscope screenshot: a short segment of the filtered PRBS]

Now I want to imitate the way this signal would be sampled by a coherent receiver (one that would precisely know the clock frequency \$ f_c\$ of the signal generator) with averaging (that is, the receiver would select identical segments from each period and average over them). To do so, I define a periodic (averaging) sampling function \$g_p(t)\$:

$$ x[n] = \int x(t)\, g_p(t - n/f_c)\, dt, \qquad g_p(t) = \frac{1}{L}\sum_{l=0}^{L-1} g(t - lN/f_c), $$

where \$N\$ is the number of samples in one period, \$L\$ is the number of observed periods, and \$g(t)\$ is an elementary window that selects one sample (e.g. a Hamming window). Also note that the sampling frequency of the oscilloscope is an order of magnitude higher than the clock frequency \$f_c\$ of the binary sequence.
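To make the scheme concrete, here is a minimal discrete-time sketch of it (my own illustration, not code from anyone's actual setup): the scope trace is assumed to live in a NumPy array `x` sampled at `fs` Hz, and `coherent_samples` is a hypothetical helper name.

```python
import numpy as np

def coherent_samples(x, fs, fc, N, L, win_len):
    """Average matching windows across L periods, mimicking x[n] above.

    x       : captured scope trace (1-D array), sampled at fs Hz
    fc      : assumed clock frequency of the signal generator
    N       : number of samples per period (at rate fc)
    L       : number of observed periods to average over
    win_len : length of the elementary window g(t), in scope samples
    """
    g = np.hamming(win_len)
    g /= g.sum()                # normalize so each window computes a weighted mean
    out = np.empty(N)
    for n in range(N):
        acc = 0.0
        for l in range(L):
            # start of the window for sample n in period l, in scope samples
            t0 = int(round((n + l * N) * fs / fc))
            acc += np.dot(x[t0:t0 + win_len], g)
        out[n] = acc / L
    return out
```

If `fc` matches the true clock, every period contributes an identical fragment and the averaging only suppresses noise; if it is off, the windows drift across the waveform and the average smears.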

Now comes my first, minor question: is my line of reasoning generally sound?

Main problem

I do not know the exact value of \$f_c\$ and want to estimate it from the captured data, starting from an initial guess and refining it by solving an optimization problem. My idea is that the periodic sampling function \$g_p(t)\$ should select data fragments that are as similar as possible. Intuition suggests looking at the variance of \$x[n]\$:

$$ \text{Var}\left\{x[n]\right\} = \mathbb{E}_{g_p}\left\{x^2\right\} - \mathbb{E}_{g_p}\left\{x\right\}^2\\ \mathbb{E}_{g_p}\left\{a\right\} \stackrel{\text{def}}{=} \frac{1}{||g_p||}\int a(t) g_p(t) dt. $$

So far, this equation gives the variance (i.e. the measurement uncertainty) of a single receiver sample. I expect this value to reach its minimum when the correct \$f_c\$ is picked.
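Continuing the illustration under the same assumptions (trace `x` at rate `fs`; `sample_variance` is again a made-up name), the weighted variance for one sample index `n` could be computed like this:

```python
import numpy as np

def sample_variance(x, fs, fc, N, L, win_len, n):
    """Variance of the trace under g_p(t - n/f_c), per the definition above.

    A small value means the L windows selected by g_p land on nearly
    identical signal fragments, i.e. fc is consistent with the data.
    """
    g = np.hamming(win_len)
    w = np.tile(g, L)                              # weights of the L copies of g
    v = np.concatenate([
        x[int(round((n + l * N) * fs / fc)):][:win_len]
        for l in range(L)
    ])                                             # trace values under g_p
    mean = np.dot(w, v) / w.sum()                  # E_{g_p}{x}
    return np.dot(w, (v - mean) ** 2) / w.sum()    # Var = E_{g_p}{(x - mean)^2}
```

On a clean periodic test signal, the variance at the true clock frequency should be noticeably smaller than at a slightly detuned one, since detuning makes the windows drift across the waveform.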

For stability, I form the cost function as the combined variance of several samples: a single sample would probably give a noisy result, while taking all samples requires too much computational effort.

Unfortunately, as I sweep over the search space, the cost function does not show any sign of convexity. Note that the initial guess is located in the middle of the x-axis.

[Plot: cost function over the swept clock-frequency range]
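For reference, a brute-force version of that sweep might look as follows (an illustrative sketch with invented names; `x` is the trace at rate `fs` and `fc0` the initial guess):

```python
import numpy as np

def cost(x, fs, fc, N, L, win_len, sample_idx):
    """Combined variance of several receiver samples for a candidate fc."""
    g = np.hamming(win_len)
    w = np.tile(g, L)
    total = 0.0
    for n in sample_idx:
        v = np.concatenate([
            x[int(round((n + l * N) * fs / fc)):][:win_len]
            for l in range(L)
        ])
        m = np.dot(w, v) / w.sum()
        total += np.dot(w, (v - m) ** 2) / w.sum()
    return total

def sweep(x, fs, fc0, N, L, win_len, span=0.01, steps=201):
    """Evaluate the cost on a +/-span relative grid around fc0, keep the minimizer."""
    grid = fc0 * (1.0 + np.linspace(-span, span, steps))
    idx = range(0, N, max(1, N // 4))      # a handful of sample indices
    costs = [cost(x, fs, f, N, L, win_len, idx) for f in grid]
    return grid[int(np.argmin(costs))]
```

On a clean synthetic signal this sweep does recover the clock frequency, so the non-convexity you observe is presumably coming from the data (noise, trigger jitter, or an initial guess too far off), not from the cost construction itself.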

Could anybody give me a hint about what I could possibly be doing wrong?

Also, I have a strong feeling that I am reinventing the wheel. Does anybody know what this problem is properly called, so that I can google it?

  • Post a link to your original sampled data, before you've 'done things' to it. – Neil_UK, Jun 29, 2018 at 10:57
  • Added a screenshot from the oscilloscope. – skobls, Jun 29, 2018 at 11:03
  • 3
    \$\begingroup\$ The search term should be ensemble averaging. Because you lack a good trigger here, this might not be easy. I suggest using periodogram techniques, like the Welch Periodogram, and then inverse transforming \$\endgroup\$ Commented Jun 29, 2018 at 13:36
  • What about an FFT and finding the frequency with the highest amplitude? Perhaps vary the sampling for the FFT. And once you've found a frequency range, search for the frequency that produces the best FFT/standard deviation from...? I am not a specialist in that. – May 18 at 18:57
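The FFT-peak idea from the last two comments can be sketched with plain NumPy (the 997 Hz square wave here is a stand-in for the real trace; in practice `x` and `fs` would come from the scope capture):

```python
import numpy as np

fs = 10_000.0                                   # scope sample rate (assumed)
t = np.arange(1 << 17) / fs
x = np.sign(np.sin(2 * np.pi * 997.0 * t))      # stand-in periodic signal

# Windowed FFT magnitude: the fundamental of a periodic signal shows up
# as the strongest non-DC line, giving a coarse estimate of f_c to refine.
spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
f_est = freqs[np.argmax(spec[1:]) + 1]          # skip the DC bin
```

With roughly 13 s of data the bin spacing is `fs / len(x)` ≈ 0.08 Hz, which may already be good enough to seed the variance-based refinement above.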
