Originally Posted by rh200
With today's computers I would be surprised if you couldn't easily process this in real time. After all, you can easily FFT and square (power spectrum) 100 MHz bandwidth Nyquist-sampled data in real time. Hence I would have thought these trivial bandwidths would be a walk in the park; someone would need to work out the FLOPS.
Um, maybe. For a 10 second average you'd need to process, in really rough round numbers, 100 sets of convolved data and present them for eyeballs. If you try to "detect it" automatically, you need to synchronize with it and multiply it by an estimate of the signal, and you'd have to do that for about 100 to 1000 different time offsets for best results. (At a rough guess, intuition suggests on the order of 500 to 700 offsets to keep the loss under 1 dB. Of course, many of them might cross threshold, but there should be a distinct peak that's worth the effort to find. 100 would be too few, and even 200 cuts your summing advantage in half.) So that's, say, 500 time alignments on each of 100 frequencies, for 50,000 sets of calculations at a sample rate of at least 8 ksps on a down-converted slice of spectrum. That's pushing a single CPU.
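As a sanity check on that load estimate, here's a back-of-envelope sketch. The hypothesis counts and sample rate come straight from the numbers above; the 2 operations per sample (one multiply-accumulate per hypothesis) is my assumption for a brute-force correlation search:

```python
# Rough FLOPS estimate for brute-force correlation against every
# time-offset / frequency hypothesis over a 10 second average.
SAMPLE_RATE = 8_000     # sps, down-converted slice of spectrum
AVG_TIME = 10           # seconds of integration
TIME_ALIGNMENTS = 500   # time-offset hypotheses per frequency
FREQ_BINS = 100         # frequency hypotheses

hypotheses = TIME_ALIGNMENTS * FREQ_BINS     # 50,000 sets of calculations
samples = SAMPLE_RATE * AVG_TIME             # 80,000 samples per set
ops = hypotheses * samples * 2               # assume 1 multiply + 1 add each

print(f"{hypotheses} hypotheses, ~{ops / AVG_TIME / 1e9:.1f} GFLOPS sustained")
```

Sub-1 GFLOPS of raw multiply-accumulates is doable on one core, but add data movement, windowing, and threshold logic and it's indeed pushing a single CPU.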
I'd go for the Mark 1 eyeball method with only 100 different waterfalls to watch - if I could find enough eyeballs to make it work.
Post-processing the data would vastly reduce the instantaneous load by spreading it over time - if people were patient enough.
(The 100-frequency estimate came from trying to sum coherently over 10 seconds. A 2 Hz frequency error over 10 seconds gives a phase error of 7200 degrees, and you want it down under maybe 30 degrees. So my 100 bins is rather small compared to what you'd actually want. The ping-type signal is just not very nice for digital processing.)
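The arithmetic behind that parenthetical, for anyone following along (the 30 degree tolerance is the rough threshold quoted above, not a derived figure):

```python
# Phase error accumulated by a frequency mismatch over the coherent
# averaging window: phase (deg) = 360 * delta_f * T.
freq_error_hz = 2.0    # mismatch between bin center and true frequency
t_avg = 10.0           # coherent summation time, seconds

phase_error_deg = 360.0 * freq_error_hz * t_avg
print(phase_error_deg)               # 7200.0 degrees

# Keeping the accumulated error under ~30 degrees bounds how far the
# true frequency can sit from a bin center:
max_offset_hz = 30.0 / (360.0 * t_avg)
print(f"{max_offset_hz:.4f} Hz")     # 0.0083 Hz
```

At roughly 0.008 Hz per usable bin, covering even a couple of hertz of uncertainty takes hundreds of bins, which is why 100 looks small.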