I'd love to hear it. I imagine it is done so that there are no buffer overflows or underflows and there is always another packet to send. Or on the receiving side, there is always something to output, even if packets have not been received in a timely manner.
Yes, exactly. The circular buffer serves a number of different purposes.
One issue is that we have a constant input stream, which will generate source data at a steady rate, irrespective of what's happening on the computer that's running ScannerCast. We don't want to lose any of that input, even when the computer gets busy.
So, ScannerCast retrieves small blocks of digitized audio data and combines them into larger blocks. These blocks are placed in a ring-buffer that holds about 10 seconds' worth of data (depending on the audio bandwidth). The idea here is to create a reservoir of data for clients (which we'll discuss next). This reservoir allows disruptions to occur in ScannerCast's audio retrieval subsystem without impacting outgoing audio to the clients.
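The reservoir idea can be sketched in a few lines. This is just an illustration of the concept, not ScannerCast's actual code; the class name, block size, and capacity figures are all assumptions for the example:

```python
from collections import deque

class AudioRingBuffer:
    """Fixed-capacity reservoir of audio blocks (illustrative sketch only).

    Capacity is counted in blocks. For example, if each block holds
    0.5 seconds of audio, 20 blocks is roughly 10 seconds of reservoir.
    """

    def __init__(self, capacity_blocks=20):
        # deque(maxlen=...) silently drops the OLDEST block when full,
        # so a stalled consumer never blocks the audio-capture side.
        self._blocks = deque(maxlen=capacity_blocks)

    def write(self, block: bytes):
        """Called by the capture side at a steady rate."""
        self._blocks.append(block)

    def read(self):
        """Return the oldest buffered block, or None if empty."""
        return self._blocks.popleft() if self._blocks else None
```

The key design point is that writes never block and never fail: if the consumers fall behind by more than the reservoir depth, the oldest audio is simply discarded, which keeps the capture side running at its constant input rate.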
ScannerCast Std Edition supports multiple client connections: a (practically) unlimited number of direct-connect clients plus up to one "push" client (the optional outgoing connection to an Icecast server such as RR). These clients all connect and disconnect at will.
So, different clients connect at different times and experience different latencies. But we have a constant source-data rate. The solution is to use some sort of buffering between the source and the sink to "smooth out" the transitions.
Of course, network connections between clients and servers are subject to all SORTS of disruptions in timing. Packets sent from ScannerCast can arrive at a client at different rates, with delays, and even out of order (this is just TCP/IP we're talking about here). We don't want these inevitable network delays/glitches to cause break-ups in the audio stream at the client.
To help eliminate audio playback glitches, many playback (direct-connect) clients support "pre-load" buffering -- that is, accumulating some audio data locally (at the client) before playback. On connection, these clients attempt to retrieve audio data from the server as quickly as possible until their "pre-load" buffer size is reached. To facilitate this preload, ScannerCast uses its internal circular buffer. Thus, when a client connects ScannerCast quickly provides it with blocks of audio data to fulfill the pre-load requirement. This usually gets the user listening faster.
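The burst-on-connect behavior described above can be sketched like this. Again, this is just a hypothetical illustration of the technique, not ScannerCast's code; the function name, block duration, and preload figures are made up for the example:

```python
def preload_burst(reservoir_blocks, preload_seconds, block_seconds=0.5):
    """Pick the blocks to send immediately when a client connects.

    reservoir_blocks: the server's buffered audio blocks, oldest first.
    preload_seconds:  how much audio the client wants before playback.
    block_seconds:    assumed duration of each buffered block.

    Returns the NEWEST blocks covering preload_seconds of audio. The
    server sends these as fast as the connection allows, letting the
    client fill its pre-load buffer and start playback quickly; after
    the burst, the server continues at the steady source rate.
    """
    n = max(1, int(preload_seconds / block_seconds))
    return reservoir_blocks[-n:]
```

Without a server-side reservoir, a freshly connected client would have to wait for new audio to trickle in at real-time rate before its pre-load buffer filled; drawing the burst from already-buffered data is what gets the user listening faster.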
That's just a really quick overview. Hope it didn't cause you to fall asleep!
I have two separate issues with this. The first issue has to do with streaming at least two scanners that functionally receive the same audio. One might be receiving from a TRS (800 MHz) and the 2nd from VHF; or both might be receiving VHF but separate frequencies carrying the same audio. If I listen to both scanners directly (no computers involved), the audio is close enough in time with itself to sound good.
Wow. I'm jealous. You're listening to a GREAT system. On a local TRS/VHF system near me, operating exactly as you described, the latency between the TRS transmission and the VHF "simulcast" (NOT!) transmission is at least 1 and probably closer to 2 seconds. Very annoying.
Having expressed my jealousy, I DO see your point.
... So I'll hear the audio from the scanner, then several seconds later hear it via the stream. When the stream is over a VPN it can still be more than 30 seconds later (typical for me is about 45 seconds).
Yeah, I understand that point, too.
Just to be clear: We're talking about a DIRECT connection to ScannerCast, right? Not via RR?
I need to think some more about a low-latency or lower-latency mode. If I were to spin up a special test build of ScannerCast with a low-latency feature, would you be willing to try it out and comment?
Peter
K1PGV