OP25 Experiments with op25 and liquidsoap

You are correct that the -x parameter won't help on the rx.py command line because the audio processing is happening outside of rx.py. Try putting the -x parameter on the ./audio.py command line in your .liq script instead.

Sorry, can't help on the high frequency noise. I've noticed that with my RPi3 as well (doesn't happen on other types of hardware) and I suspect it is something to do with either the power supply or simply poor hardware design of the audio circuit. Perhaps you could use a USB sound card?
Putting the -x parameter on the ./audio.py command line worked perfectly. I have now changed my feed RPi over. Thanks for your assistance.
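For anyone following along, the relevant line of my .liq ended up looking roughly like this (paths and the other audio.py options trimmed; the gain value is just what worked for me):

Code:
# launch audio.py with the -x audio gain option (value is illustrative)
input = mksafe(input.external("./audio.py -x 2"))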
 

boatbod

Member
Joined
Mar 3, 2007
Messages
3,410
Location
Talbot Co, MD
It's probably unrelated to liquidsoap, but I moved my second feed over to the same system and after a few hours it stopped updating tsbks, and the stderr had thousands of the following:

Code:
p25_framer::rx_sym() tuning error +1200

interspersed with a few of:

Code:
1548813396.650205 control channel timeout

Restarting OP25 brought it back. First feed still going just fine.

Either frequency drift of the dongle, or a cat sat on the keyboard and adjusted the fine tuning...
(don't laugh, that's exactly what my cat does to one of my systems)
 

james18211

Member
Joined
Oct 28, 2012
Messages
25
Either frequency drift of the dongle, or a cat sat on the keyboard and adjusted the fine tuning...
(don't laugh, that's exactly what my cat does to one of my systems)
That would make sense; I'd set the fine tune to zero last night trying to fix the "Frequency error" (it's a constant -100Hz on that feed no matter what; or it jumps to -500 if I adjust the fine tune too much).

It's stable with -200 on the fine tune, so I'll just leave it there.

Thanks :)
 

boatbod

Member
Joined
Mar 3, 2007
Messages
3,410
Location
Talbot Co, MD
That would make sense; I'd set the fine tune to zero last night trying to fix the "Frequency error" (it's a constant -100Hz on that feed no matter what; or it jumps to -500 if I adjust the fine tune too much).

It's stable with -200 on the fine tune, so I'll just leave it there.

Thanks :)
Use the mixer plot (#5) to find the fine tuning center, then put that value in on the command line. If you don't have a TCXO dongle, just understand that it takes a few minutes to warm up and stabilize when you first start rx.py. After that it'll probably run forever without drifting much.
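If you're starting from the command line, the fine tune value goes in via the -d option, something like this (going from memory here, so double-check against ./rx.py --help):

Code:
# -d is the fine tune offset in Hz; -P mixer brings up the mixer plot at startup
./rx.py --args 'rtl' -S 960000 -q 0 -d -200 -P mixer -T trunk.tsv -U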
 

james18211

Member
Joined
Oct 28, 2012
Messages
25
Use the mixer plot (#5) to find the fine tuning center, then put that value in on the command line. If you don't have a TCXO dongle, just understand that it takes a few minutes to warm up and stabilize when you first start rx.py. After that it'll probably run forever without drifting much.

I'm using the Nooelec NESDR SMArt sticks, which claim to be TCXO. However, the warm up period would explain why, without the fine tune setting, it stopped working after a few minutes in most cases.

Here's a screenshot of my mixer plot. The balance ranges from zero to 500+. No issues with audio quality, so I'm not overly concerned.
 

Attachments

  • Screen Shot 2019-01-30 at 18.21.50.png (mixer plot screenshot, 191.5 KB)

boatbod

Member
Joined
Mar 3, 2007
Messages
3,410
Location
Talbot Co, MD
I'm using the Nooelec NESDR SMArt sticks, which claim to be TCXO. However, the warm up period would explain why, without the fine tune setting, it stopped working after a few minutes in most cases.

Here's a screenshot of my mixer plot. The balance ranges from zero to 500+. No issues with audio quality, so I'm not overly concerned.

With the strong nearby signals, the mixer balance number isn't going to give reliable readings. Better to make sure your signal hump is accurately centered. Looking at the image, it probably only needs a -100 to -200 tweak.
 

quad_track

Member
Joined
Sep 13, 2017
Messages
66
I've seen this "X" pattern in constellations for a long time, but never 100% understood it. It's limited to simulcast and as far as I can tell it's most pronounced in LSM. The constellation space can be thought of as a circle divided into four quadrants, as a pizza is cut into four slices. If the symbol appears anywhere inside one of the quadrants it's assigned to that quadrant. The dividing lines between quadrants are the X and Y axes. The distance (radius) from the center of the circle to each data point is the magnitude of the point. The angular offset (0-360 degrees) is what solely determines which of the four quadrants the point resides in; the magnitude is ignored. In standard PSK the radius is always constant (modulo any fading) and is normalized to 1.0 via AGC.
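To make that concrete, here's a toy sketch (not op25 code) of a phase-only quadrant decision; only the angle matters, never the magnitude:

Code:
import math, cmath

def quadrant(sym):
    # classify a complex symbol by phase alone; magnitude is ignored
    ph = cmath.phase(sym)           # angle in (-pi, pi]
    if 0.0 <= ph < math.pi / 2:
        return 0                    # quadrant I
    if ph >= math.pi / 2:
        return 1                    # quadrant II
    if ph < -math.pi / 2:
        return 2                    # quadrant III
    return 3                        # quadrant IV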

Max, I think that amplitude error could potentially be caused by the AGC, and the equalizer (CMA? LMS?) cannot correct it for various reasons unless you know the channel characteristics beforehand. I have seen this in high SNR, high speed, micro-bursty transmissions of M-PSK with TDMA involved.
I would check the signal power, the noise floor and the attack/decay values of the AGC. Normally we'd use some sort of non-clipping limiter after the first LNA to ease the job of the AGC. There's a very good reference front-end design paper by Sivan Toledo on this subject, but I can't find it right now.

Adrian
 

KA1RBI

Member
Joined
Aug 15, 2008
Messages
799
Location
Portage Escarpment
Max, I think that amplitude error could potentially be caused by the AGC, and the equalizer (CMA? LMS?) cannot correct it for various reasons unless you know the channel characteristics beforehand. I have seen this in high SNR, high speed, micro-bursty transmissions of M-PSK with TDMA involved.
I would check the signal power, the noise floor and the attack/decay values of the AGC. Normally we'd use some sort of non-clipping limiter after the first LNA to ease the job of the AGC. There's a very good reference front-end design paper by Sivan Toledo on this subject, but I can't find it right now.

Adrian

Hi Adrian

It's certainly possible the OP25 software feedforward AGC is introducing this artifact; and OP25 makes no attempt to implement an equalizer. As far as the channel characteristics are concerned, it's possible that a simplified model of an LSM channel could be constructed assuming a single transmitter and multiple propagation paths between it and the receiver. This assumption would be violated where LSM Secret Sauce (US Pat. 6,061,574, at least) is in use, although it's unclear whether that would materially affect such a model. You can probably rest quite assured that if the LSM channel characteristics are being altered dynamically as a result of the Secret Sauce, these modifications would be done for the benefit of the system subscribers, _not_ for you as a third party passively listening on the call. Overall the subject of equalizers is an interesting one. It's pure speculation, but it's possible one of the points of the secret sauce is to serve as an equalizer training sequence.

Anyway, the OP25 demod block that is used for LSM demodulation is known to be suboptimal. Practically every aspect of it could be improved!

Best

Max
 

KA1RBI

Member
Joined
Aug 15, 2008
Messages
799
Location
Portage Escarpment
It's certainly possible the OP25 software feedforward AGC is introducing this artifact

From p25_demodulator.py:
Code:
self.agc = analog.feedforward_agc_cc(16, 1.0)

Graham (and others) - next time you see a constellation with the "X", and it's repeatable - try experimenting with values other than 16 ...

Max
 

quad_track

Member
Joined
Sep 13, 2017
Messages
66

That's the one. He used to have the PDF available on his personal website too.

Hi Adrian
It's certainly possible the OP25 software feedforward AGC is introducing this artifact;

Unfortunately P25 is not among the standards I'm familiar with, and the only part of OP25 that interests me is ETSI DMR, but I just wanted to say that I never use the feedforward AGC for M-PSK due to various peculiar effects on symbols; I use the AGC2 or AGC3 blocks instead, as they're more flexible. May be worth a shot.
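For example, in p25_demodulator.py you could try something like this in place of the feedforward block (untested sketch; the attack/decay/reference values are guesses, not tuned for P25):

Code:
# swap the feedforward AGC for agc2_cc
# args: attack_rate, decay_rate, reference, gain -- tune for your signal
self.agc = analog.agc2_cc(0.1, 0.01, 1.0, 1.0)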

73,
Adrian
 

boatbod

Member
Joined
Mar 3, 2007
Messages
3,410
Location
Talbot Co, MD
From p25_demodulator.py:
Code:
self.agc = analog.feedforward_agc_cc(16, 1.0)

Graham (and others) - next time you see a constellation with the "X", and it's repeatable - try experimenting with values other than 16 ...

Max

Curiously, it appears to work best when nsamples=1 (rather than 16).

The higher I go with nsamples, the tighter the clusters, but the more the entire pattern collapses into a well-defined "X".
At lower nsamples, the plot points stay further away from the (0,0) origin but begin to spread sporadically around the circumference of the circle.
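In other words, the one-line change I'm testing in p25_demodulator.py (value per the experiment above):

Code:
self.agc = analog.feedforward_agc_cc(1, 1.0)   # nsamples=1 instead of 16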
 

Dygear

Member
Joined
Nov 18, 2010
Messages
74
Location
Levittown, NY
Third, it would be nice to integrate HTML5 audio into OP25's http console to allow streaming audio in the browser. The actual HTML and JS to do so is more or less trivial, with the actual work happening in the backend. I'm no expert at streaming, but as far as I can tell the preferred setup employs mp3 with ogg as fallback. If we focus on mp3 initially, what would be needed on the server side to implement it?

Max

I've often thought about adding html5 audio direct to op25, but essentially it looks like someone would have to implement a complete streaming server (presumably in Python). Being able to install off-the-shelf components such as icecast2, liquidsoap and darkice has its attractions, but I worry that they may be (a) too resource hungry on low-end hardware, and (b) too complex for the average user to install and configure. Issue (a) is easy to quantify. Issue (b) not so much...

Graham

I am interested in this part of it, mostly because it's super easy to deploy a Raspberry Pi 3B(+) & NanoSDR and get really decent audio out of it. I've actually been running three in production for the past 2 years, and another Intel NUC & HackRF doing the P25 system. Having actual live audio, instead of having to push (sftp, ssh, scp, rsync) the recorded MP3 file, is a huge win for me, especially when it comes to multi-channel or coalescing multiple receive sites into a single cohesive interface. The server would actually be useful if it would not only play the audio but also encode the MP3 files (which it could do with a +2 nice value) for archive as well. It would make the receive sites ultra lightweight in their processing requirements.

But the actual HTML5 correct solution is to use WebRTC.
Getting Started with WebRTC - HTML5 Rocks

The way I see this working is, OP25 takes a -SS parameter or something like that. That tells it to reach out to this signaling server with its configuration and trunk files, in a JSON format, as well as ICE candidate information. The signaling server gets this information and waits for listen clients to connect, or accepts other OP25 clients with their information as well. When a listen client connects, the ICE candidate information is shared with the listen clients for each OP25 instance connected. This gives the listen client the option of listening to whatever channel it wants among the various OP25 instances that are connected.
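Purely hypothetical, but the registration message an OP25 instance sends to the signaling server might look something like this (every field name here is made up for illustration):

Code:
{
  "type": "register",
  "instance": "op25-site-1",
  "config": { "trunk_file": "trunk.tsv", "system": "P25" },
  "ice_candidates": [ ]
}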

WebRTC supports the Opus codec as well as PCMU. But the highest common denominator across platforms seems to be Opus, so we should use that. But the transcoding would still need to be done somewhere. On the OP25 side?
 

boatbod

Member
Joined
Mar 3, 2007
Messages
3,410
Location
Talbot Co, MD
I am interested in this part of it, mostly because it's super easy to deploy a Raspberry Pi 3B(+) & NanoSDR and get really decent audio out of it. I've actually been running three in production for the past 2 years, and another Intel NUC & HackRF doing the P25 system. Having actual live audio, instead of having to push (sftp, ssh, scp, rsync) the recorded MP3 file, is a huge win for me, especially when it comes to multi-channel or coalescing multiple receive sites into a single cohesive interface. The server would actually be useful if it would not only play the audio but also encode the MP3 files (which it could do with a +2 nice value) for archive as well. It would make the receive sites ultra lightweight in their processing requirements.

But the actual HTML5 correct solution is to use WebRTC.
Getting Started with WebRTC - HTML5 Rocks

The way I see this working is, OP25 takes a -SS parameter or something like that. That tells it to reach out to this signaling server with its configuration and trunk files, in a JSON format, as well as ICE candidate information. The signaling server gets this information and waits for listen clients to connect, or accepts other OP25 clients with their information as well. When a listen client connects, the ICE candidate information is shared with the listen clients for each OP25 instance connected. This gives the listen client the option of listening to whatever channel it wants among the various OP25 instances that are connected.

WebRTC supports the Opus codec as well as PCMU. But the highest common denominator across platforms seems to be Opus, so we should use that. But the transcoding would still need to be done somewhere. On the OP25 side?

OP25's lower layers generate raw mono 8 kHz 16-bit LE PCM data and send it out on a UDP port. A Python module called sockaudio.py picks this up (either under rx.py or audio.py control) and either dumps it to the ALSA sound subsystem or to STDOUT, where it can be picked up by liquidsoap.

If you wanted to implement the solution you propose, I'd recommend either adding a third option to sockaudio.py and doing the transcoding there, or (preferably) seeing if someone has already written a liquidsoap plugin that can output direct to WebRTC.
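The input side really is that simple; a bare-bones sketch of the receive path (Python 3; the port number is an assumption to adjust for your setup, and any op25-specific framing is ignored):

Code:
import socket, sys

UDP_PORT = 23456   # assumption: whatever port your op25 instance sends audio to
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", UDP_PORT))
while True:
    data, _ = sock.recvfrom(2048)      # raw mono 8 kHz 16-bit LE PCM
    sys.stdout.buffer.write(data)      # hand off to a transcoder downstream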
 

djshadowxm81

Member
Premium Subscriber
Joined
May 5, 2010
Messages
79
OP25's lower layers generate raw mono 8 kHz 16-bit LE PCM data and send it out on a UDP port. A Python module called sockaudio.py picks this up (either under rx.py or audio.py control) and either dumps it to the ALSA sound subsystem or to STDOUT, where it can be picked up by liquidsoap.

If you wanted to implement the solution you propose, I'd recommend either adding a third option to sockaudio.py and doing the transcoding there, or (preferably) seeing if someone has already written a liquidsoap plugin that can output direct to WebRTC.

It's funny that I come here and see this conversation; I was just looking into it this afternoon. It looks like gstreamer (which you can use with liquidsoap) has support via the webrtcbin plugin. However, I haven't figured it out yet.
 

boatbod

Member
Joined
Mar 3, 2007
Messages
3,410
Location
Talbot Co, MD
It's funny that I come here and see this conversation; I was just looking into it this afternoon. It looks like gstreamer (which you can use with liquidsoap) has support via the webrtcbin plugin. However, I haven't figured it out yet.

Once you figure it out and get it working, I'd be happy to integrate it into my repo.
 

Dygear

Member
Joined
Nov 18, 2010
Messages
74
Location
Levittown, NY
I have a sneaking suspicion that we might be able to pass the 8 kHz 16-bit LE PCM data directly into a WebRTC stream. It looks like it's G.711. If not, it's really, really close and should be trivial to convert into a G.711 stream.
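If it does turn out to need companding, Python's standard library can already do the conversion; a minimal sketch (G.711 PCMU is mu-law companded 8 kHz audio):

Code:
import audioop

def to_pcmu(pcm16le):
    # 16-bit linear PCM -> 8-bit G.711 mu-law (PCMU), sample for sample
    return audioop.lin2ulaw(pcm16le, 2)   # width = 2 bytes per sample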
 

djshadowxm81

Member
Premium Subscriber
Joined
May 5, 2010
Messages
79
I have a sneaking suspicion that we might be able to pass the 8 kHz 16-bit LE PCM data directly into a WebRTC stream. It looks like it's G.711. If not, it's really, really close and should be trivial to convert into a G.711 stream.
That would be nice if we could do that. The gstreamer method looked kinda convoluted, so I look forward to seeing if you come up with a solution to this. I built something out yesterday that uses JoJoBond/3LAS; it uses ffmpeg to grab the VM's loopback audio and stream it to the webpage using websockets. I tried piping audio.py directly into the one script using stdout but got no audio through the stream unless I used ffmpeg. I don't know enough about low-level audio and file streams to mess with it, but I found a solution for myself for the moment. I would like to switch to WebRTC if it's feasible though. If you would like to see a demo of what I have implemented, PM me and I'll give you a link to look at.
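In hindsight, I suspect the raw pipe failed because headerless PCM gives the player nothing to autodetect, so the format has to be spelled out explicitly. Something like this should work (audio.py options for stdout output omitted; the format values match what op25 emits):

Code:
# raw mono 8 kHz 16-bit LE PCM on stdin; tell ffmpeg the format, emit mp3 on stdout
./audio.py | ffmpeg -f s16le -ar 8000 -ac 1 -i - -f mp3 -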
 

djshadowxm81

Member
Premium Subscriber
Joined
May 5, 2010
Messages
79
Also @boatbod, off topic: if I added another SDR, does the software support listening to two different talkgroups at the same time, much like the dispatcher consoles do? I'm not expecting it to, but was just wondering.
 