I am interested in this part of it, mostly because it's super easy to deploy a Raspberry Pi 3B(+) & NanoSDR and get really decent audio out of it. I've actually been running three in production for the past 2 years, and another Intel NUC & HackRF doing the P25 system. Having actual live audio, instead of having to push (sftp, ssh, scp, rsync) the recorded MP3 files, is a huge win for me, especially when it comes to multi-channel or coalescing multiple receive sites into a single cohesive interface. The server would be most useful if it not only played the audio but also encoded the MP3 files for archiving (which it could do at a +2 nice value). That would make the receive sites ultra lightweight in their processing requirements.
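For the archive side, this is a minimal sketch of what I mean by encoding at a lowered priority on the server; it assumes the recordings land as WAV files and that lame is installed, both of which are just my assumptions:

    import subprocess

    def archive_to_mp3(wav_path: str, mp3_path: str) -> None:
        # Run the encoder at nice +2 so archiving never competes with live audio.
        subprocess.run(
            ["nice", "-n", "2", "lame", "--quiet", wav_path, mp3_path],
            check=True,
        )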
But the actual, HTML5-correct solution is to use WebRTC.
Getting Started with WebRTC - HTML5 Rocks
The way I see this working is: OP25 takes a -SS parameter or something like that, which tells it to reach out to this Signaling Server with its configuration and trunk files, in a JSON format, as well as its ICE Candidate information. The signaling server gets this information and waits for listen clients to connect, or accepts other OP25 clients with their information as well. When a listen client connects, the ICE Candidate information for each connected OP25 instance is shared with it. This gives the listen client the option of listening to whatever channel it wants among the various OP25 instances that are connected. A rough sketch of the registration message follows below.
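To make that concrete, here is a rough sketch of what the registration message from an OP25 instance to that signaling server could look like. The -SS flag, the field names, and the placeholder values are all hypothetical; the only real requirement is that the config/trunk info and the ICE candidates travel as JSON:

    import json

    # Hypothetical registration payload an OP25 instance would send on startup.
    registration = {
        "type": "register",
        "role": "op25",
        "config": {
            "system": "Example P25 System",   # from the instance's trunk/config files
            "channels": ["Dispatch", "Tac 1"],
        },
        "ice": {
            "candidates": [],                 # ICE candidates gathered by this instance
            "ufrag": "example-ufrag",
            "pwd": "example-pwd",
        },
    }

    payload = json.dumps(registration)
    # The signaling server would hold this until a listen client connects, then
    # relay the ICE information so the client can pick whichever instance and
    # channel it wants.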
WebRTC supports the Opus codec as well as PCMU, but the codec best supported across platforms seems to be Opus, so we should use that. The transcoding would still need to be done somewhere, though. On the OP25 side?
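If it does end up on the OP25 side, this is a minimal sketch of the encode step, assuming the decoded voice is available as 8 kHz mono 16-bit PCM (my assumption about OP25's output) and that ffmpeg with libopus is installed on the receive box. A real WebRTC leg would packetize Opus into RTP via a WebRTC stack rather than writing Ogg pages; this just shows the encoding cost living on the Pi:

    import subprocess

    # Pipe raw PCM in, get Opus (in an Ogg container) out; a real implementation
    # would drain stdout on another thread and hand the output to the WebRTC stack.
    opus_encoder = subprocess.Popen(
        [
            "ffmpeg", "-loglevel", "quiet",
            "-f", "s16le", "-ar", "8000", "-ac", "1", "-i", "pipe:0",
            "-c:a", "libopus", "-b:a", "16k",
            "-f", "ogg", "pipe:1",
        ],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

    def push_pcm(pcm_block: bytes) -> None:
        """Feed one block of decoded P25 audio to the encoder."""
        opus_encoder.stdin.write(pcm_block)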