Following up on my article “EasyPAL, Linux, Raspberry Pi”, where I tried to run RXAMADRM on the Raspberry Pi:
I’ve removed the waterfall code and a lot of the code that outputs JSON-style data for the TCL interface. CPU usage while listening to noise is a steady 60%, so it’s not notably better than before.
It’s interesting to note that the graphics didn’t consume that much CPU power. X has been around for a long time, and we were running graphically intensive applications on 486 PCs back in the mid-90s, so this shouldn’t be surprising.
The application may still be performing more calculations than it needs for demodulation, as I have not removed the work that supports the debugging output. I have, however, removed the calculation of the power spectrum for the waterfall display.
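For reference, a waterfall power spectrum is usually just an FFT of each block of samples followed by a magnitude-squared per bin, so dropping it saves one FFT per display update. A minimal sketch of that kind of calculation, written against FFTW purely for illustration (I haven’t checked which FFT library RXAMADRM actually uses):

```c
/* Sketch of a typical waterfall power-spectrum calculation, using FFTW
 * purely for illustration -- RXAMADRM's own code may do this differently. */
#include <fftw3.h>

#define FFT_LEN 1024

/* power must hold FFT_LEN/2 + 1 bins.  In real code the plan would be
 * created once and reused rather than rebuilt for every block.          */
void power_spectrum(const double *samples, double *power)
{
    double       in[FFT_LEN];
    fftw_complex out[FFT_LEN / 2 + 1];

    for (int i = 0; i < FFT_LEN; i++)
        in[i] = samples[i];            /* copy the block (a window function would go here) */

    fftw_plan plan = fftw_plan_dft_r2c_1d(FFT_LEN, in, out, FFTW_ESTIMATE);
    fftw_execute(plan);
    fftw_destroy_plan(plan);

    for (int k = 0; k <= FFT_LEN / 2; k++)   /* |X[k]|^2 for each bin */
        power[k] = out[k][0] * out[k][0] + out[k][1] * out[k][1];
}
```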
I have not yet tested receive on the current code base.
I’m going to have to admit defeat on this one, at least for now. The code is very hard to follow: lots of uncommented methods, and globals everywhere. I’ve loaded it into Eclipse, which helps me find some of the definitions.
Maybe if I went over the signal processing theory I could work out how it all fits together in the code. Why is it resampling when the signal-finding code can already work out the clock skew? Could it just adjust the OFDM carrier centres accordingly, or is the problem that something later expects an IF signal with a 12 kHz carrier, which would magnify the error?
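To put rough numbers on that worry: a clock error of, say, 30 ppm amounts to under 0.1 Hz across the audio passband, but about 0.36 Hz at a 12 kHz IF, so where the correction is applied does matter. If the skew estimate were trusted, one alternative to resampling would be to scale the demodulator’s expected carrier centre frequencies instead. A rough sketch of that idea, where every name and value is my own placeholder rather than anything taken from RXAMADRM:

```c
/* Sketch: compensate a measured clock skew by scaling the expected OFDM
 * carrier centre frequencies instead of resampling the input.
 * All identifiers and constants here are illustrative only.            */
#include <stdio.h>

#define N_CARRIERS 57              /* example carrier count, not the real value */

int main(void)
{
    double base_freq = 350.0;      /* Hz, lowest carrier (assumed value)  */
    double spacing   = 62.5;       /* Hz, carrier spacing (assumed value) */
    double skew      = 30e-6;      /* measured clock error: +30 ppm       */

    double centres[N_CARRIERS];
    for (int k = 0; k < N_CARRIERS; k++) {
        /* A sample clock that runs fast by (1 + skew) makes every signal
         * frequency appear lower by the same factor, so scale each centre. */
        centres[k] = (base_freq + k * spacing) / (1.0 + skew);
        printf("carrier %2d: %.3f Hz\n", k, centres[k]);
    }
    return 0;
}
```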
There are a few ways I might be able to improve performance. It’s making some huge stack allocations in monorec, for example; I don’t know whether they’re actually a problem. The downsample could be done in place, maybe at the same time as reading the ring buffer and converting to float format. Could we do the upsample by creating the signal {a,a,a,a,b,b,b,b,c,c,c,c,d,d,d,d…} and then low-pass filtering that?
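As a concrete sketch of that last idea: repeating each input sample N times is a zero-order hold, and a low-pass filter afterwards smooths out the resulting staircase. Something along these lines, with a factor of 4 and a crude moving-average filter standing in for a properly designed FIR (none of these names or values come from RXAMADRM):

```c
/* Sketch: upsample by 4 using sample repetition (zero-order hold),
 * then run a small low-pass over the result to smooth the steps.
 * Factor and filter length are placeholders, not tuned for RXAMADRM. */
#include <stddef.h>

#define UP_FACTOR 4
#define FIR_LEN   16

/* out must hold n_in * UP_FACTOR samples. */
void upsample_zoh_lpf(const float *in, size_t n_in, float *out)
{
    size_t n_out = n_in * UP_FACTOR;

    /* 1. Zero-order hold: {a,a,a,a,b,b,b,b,...} */
    for (size_t i = 0; i < n_out; i++)
        out[i] = in[i / UP_FACTOR];

    /* 2. Crude moving-average low-pass, applied in place.  A real
     *    implementation would use a designed FIR with its cutoff just
     *    below the original Nyquist frequency to remove the images.   */
    float  hist[FIR_LEN] = { 0.0f };
    float  acc = 0.0f;
    size_t pos = 0;
    for (size_t i = 0; i < n_out; i++) {
        acc      += out[i] - hist[pos];
        hist[pos] = out[i];
        pos       = (pos + 1) % FIR_LEN;
        out[i]    = acc / FIR_LEN;
    }
}
```

The repetition images sit around multiples of the original sample rate, so a short FIR cutting off just below the original Nyquist frequency would do a much better job than the moving average shown here; the structure of the code would be the same.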
I’d hoped to be able to pick out the bits of code that keep track of demodulation progress and output that state to the console. It may still be possible, but at the moment I have other things to look at.