
P25 simulcast multipath interference

Status
Not open for further replies.

rak313

Member
Joined
Jan 13, 2013
Messages
40
Location
Syracuse ny
Mark,

Everything you said in your description makes sense (about not needing an equalizer for 200 usec symbols) - so back to my original question that started this thread.

What do you think is the reason consumer scanners work so poorly in simulcast P25 systems (yet some report no issues in non-simulcast P25)?

Thanks for the info.

Rick

I just got the OP25 software radio working on a $20 DTV USB stick (The RTL2832U / Elonics E4000 SDR Radio - AKA "The $20 SDR" | Ham Radio Science), with the PC doing the signal processing. The OP25 software application is here: DecoderPage

It assumes you have already installed GNU Radio: GNU Radio - WikiStart - gnuradio.org

On top of these two applications, the OP25.grc flow graph (from here: Gr-baz - SpenchWiki) lets non-programmers use the GNU Radio Companion to change which processing blocks are in the receiver chain. I can now hear my local county's P25 audio (although only one channel at a time).

This took me a while, because an updated version of GNU Radio broke the OP25 application, so patches were required. (Not to mention that nobody had said it was broken in the first place, so I was left thinking I must be doing something wrong.) Eventually I stumbled across this link, which explained how to fix it: OP25 - SpenchWiki


Now for some observations:

1) I live in Onondaga county, NY - which has a 15 channel simulcast system. There are 4 transmitters within 5 miles of my location. There are a lot of hills around here.

2) The OP25.grc application has a graph that shows the FM discriminator output and the decoded dibits (the FM discriminator output, sampled at the symbol rate). If everything is working correctly, you should see 4 lines on the dibit display, since the samples should take only the 4 symbol values.

When the tower is not transmitting, the dibit display looks like random noise. When it is transmitting, you clearly see the 4 lines.

My dibit display looks extremely clean - very clear distinction between symbol values. And most importantly - the decoded audio sounds clean. It may be a bit too soon to judge - but I would say the quality is perfectly acceptable. BTW, this application code does do error correction.
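The dibit display described above can be mimicked in a few lines: sample the discriminator output once per symbol and slice it against the four C4FM deviation levels. This is a simplified sketch in Python (not the actual OP25 code), using the nominal P25 FDMA dibit-to-deviation mapping:

```python
import numpy as np

# Nominal C4FM deviation levels (Hz) for the four dibits, per the P25
# FDMA air interface: 01 -> +1800, 00 -> +600, 10 -> -600, 11 -> -1800
LEVELS = np.array([-1800.0, -600.0, 600.0, 1800.0])
DIBITS = [0b11, 0b10, 0b00, 0b01]  # dibit value at each level, low to high

def slice_dibits(symbol_samples):
    """Map discriminator samples taken at symbol centers to the nearest
    of the four levels, returning the decoded dibits."""
    idx = np.abs(symbol_samples[:, None] - LEVELS[None, :]).argmin(axis=1)
    return [DIBITS[i] for i in idx]

# A clean signal clusters tightly around the four levels -- the "4 lines"
clean = np.array([1790.0, 610.0, -580.0, -1815.0])
print(slice_dibits(clean))  # [1, 0, 2, 3]
```

On a clean signal every sample lands near one of the four levels; multipath or noise smears the samples between levels and the nearest-level decision starts producing errors.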

I do not know if a consumer scanner at my location would do better/worse.
Rick
 

beachmark

Member
Joined
Jan 30, 2013
Messages
47
Location
Afton, VA
Interesting note above on the SDR. Sounds like the lines you describe are an eye pattern display.

In thinking more about this issue, I read up some on the Uniden 396XT firmware upgrade that I think is the source of this topic. Correct me if I am wrong, but it seems like the issue/improvement is with the voice decoding for P25 Phase II (or Moto X2 TDMA). Running with that idea, I checked the release notes on the firmware upgrade and it says it improves decoding. Well, that can mean a couple of things:
- The symbol decoding in the demodulator, i.e., the actual data demodulation
- The error detection and correction process
- The vocoder that reconstitutes the voice signal from the data stream

The first, demodulation, could be improved by the use of different matched filter curves around the modulated signal spectrum. I read that the Moto LSM uses a different spectrum windowing filter around the spectrum than that spec'd in the P25 standards. There are other things that can happen in the demodulation process, like when to sample the detected phase of the signal relative to the symbol boundaries. (This may be what one is doing when adjusting the factors in the scanner to adjust for local multipath or other propagation distortions.)

The 2nd, error correction, seems least likely. That is a very well spec'd and rather mechanical signal processing algorithm; if it were messed up in earlier firmware, then maybe it was corrected, but this seems unlikely.

The 3rd, the receive vocoder, has a strong probability for change. The vocoder deserves some explaining. In a standard audio sampling process, you get good Bell System quality voice by sampling the analog voice signal at a rate of 8 kHz with 8 bits per sample. This gives a data rate of 64 kbps. This is waaaaay too much to shove through a 12.5 or 6.25 kHz channel bandwidth and is hugely wasteful of bandwidth. The vocoder is a super sophisticated DSP process that takes the sampled audio and pares the encoded voice data rate down by a factor of 8, 9 or even 10. It throws out repetitive and low-priority detail in the sampled audio, gets rid of noise, and does a lot of other things. The vocoder is also used to restore the audio after demodulation in the receiver; in that case, it smooths gaps in the demodulated voice data stream that are caused by poor reception and noise bursts. This process is quite sophisticated and has taken a lot of R&D.
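The data-rate arithmetic above checks out directly. (For reference, the IMBE vocoder used in P25 Phase I runs at 7.2 kbps total, 4.4 kbps voice plus 2.8 kbps FEC, which sits right in that 8-10x reduction range.)

```python
# Toll-quality PCM: 8 kHz sampling x 8 bits per sample
pcm_bps = 8_000 * 8
assert pcm_bps == 64_000  # the 64 kbps figure above

# A vocoder pares that down by roughly 8-10x:
for factor in (8, 9, 10):
    print(f"{factor}x reduction -> {pcm_bps / factor / 1000:.1f} kbps")

# P25 Phase I IMBE: 7.2 kbps total (4.4 kbps voice + 2.8 kbps FEC)
print(f"{pcm_bps / 7_200:.1f}x")  # ~8.9x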

Vocoders have been under heavy development and improvement for >25 years; the R&D process is extensive and expensive, and the results very valuable: each 10% reduction in data rate allows a 10% increase in voice call capacity in modern cell systems, and that is extremely valuable in the telecom world.

Using a developed vocoder in one's user equipment or base station usually costs a large licensing fee plus a royalty per production unit in which it is used. I can see a consumer product company producing consumer-level scanners needing to avoid those costs and 'rolling their own' vocoder for reception. So this is a possibility for firmware improvement. It sounds like one of the improvements is reduced garbling for weaker signals; the vocoder plays a key role in this type of situation in TDMA reception.

And if you want to know how a poor vocoder sounds: if you had a GSM or TDMA cell phone in the mid 90's and heard the 'bings' and 'boings' and 'chirps' in the audio when near the cell coverage edge, that was due to the old vocoders; they made people sound like malfunctioning space robots in weak signal situations! TDMA adopted the same vocoder used initially in GSM and guess what: they both sounded equally bad. TDMA then adopted a newer one and presto, a great improvement resulted, and the GSM world had to scramble to catch up. Since then, vocoders have undergone many improvements.

So, those are some more thoughts on this, for what they may be worth.

Mark B.
 

RoninJoliet

Member
Premium Subscriber
Joined
Jan 14, 2003
Messages
3,392
Location
ILL
That's an excellent explanation. I may not have understood some of it, not being an electronics whiz, but I now have some idea of the complexity of what these scanners cannot do and why... My city in IL now has "ENC" and I understand that... What a letdown after 45 years of scanning... Thank you for your expertise on TDMA setups...
 

rak313

Member
Joined
Jan 13, 2013
Messages
40
Location
Syracuse ny
I ... It may be a bit too soon to judge - but I would say the quality is perfectly acceptable. BTW, this application code does do error correction.

I do not know if a consumer scanner at my location would do better/worse.
Rick

I may have spoken too soon. I listened last night and it was pretty much unintelligible - lots of garbled voices. Today it's perfectly fine.

Not sure why.
 

rak313

Member
Joined
Jan 13, 2013
Messages
40
Location
Syracuse ny
Interesting note above on the SDR. Sounds like the lines you describe are an eye pattern display.

......

I agree with what you are saying. When the SDR version messes up, it's awful, with screeches, and is very annoying. How lost bits are handled in the vocoder can make a big difference in how bad things sound when you lose some data.

The "more important" bits in the vocoder stream are protected with error correction bits (for every 11 bits of the voice data - 24 bits are transmitted and up to 3 bits can be corrected). But that does not mean the consumer scanners implement the error correction.
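For what it's worth, the "up to 3 bits" figure follows from the code's minimum Hamming distance. (If the code in question is the extended Golay (24,12) applied to the most important IMBE bits, it actually carries 12 data bits per 24 transmitted, with the Hamming (15,11) code covering the less critical bits; treat the exact bit counts here as my reading of the spec, not the poster's.) A quick sketch of the distance arithmetic:

```python
def correctable_errors(d_min):
    """A block code with minimum Hamming distance d_min can correct
    t = floor((d_min - 1) / 2) bit errors per codeword."""
    return (d_min - 1) // 2

# Extended Golay (24,12): 12 data bits -> 24 transmitted, d_min = 8
assert correctable_errors(8) == 3   # the "up to 3 bits" figure
# Hamming (15,11), used for the less critical IMBE bits: d_min = 3
assert correctable_errors(3) == 1
```

Whether a given scanner firmware actually runs this correction, as the post notes, is a separate question.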
 

rak313

Member
Joined
Jan 13, 2013
Messages
40
Location
Syracuse ny
I agree with what you are saying. When the SDR version messes up, it's awful, with screeches, and is very annoying. How lost bits are handled in the vocoder can make a big difference in how bad things sound when you lose some data.

The "more important" bits in the vocoder stream are protected with error correction bits (for every 11 bits of the voice data - 24 bits are transmitted and up to 3 bits can be corrected). But that does not mean the consumer scanners implement the error correction.

See the attached file, which shows the dibit (symbol) output. The transmission starts at about 2.5 seconds and continues through to the end of the 6 second plot.

You are looking at the FM discriminator output, sampled (in the center of the symbol) at the symbol rate.

There are 4 distinct lines corresponding to the 4 symbol values.

Note this is a Motorola LSM modulated system - decoded with a C4FM decoder.

As you can see from this display, there are no symbol values that you would think were received in error.
 

Attachments

  • Screenshot from 2013-02-02 15:03:21.jpg
    Screenshot from 2013-02-02 15:03:21.jpg
    41.9 KB · Views: 1,314

beachmark

Member
Joined
Jan 30, 2013
Messages
47
Location
Afton, VA
That is interesting looking; looks like the 4 possible symbol bit pairs. This is clearly out of the demodulation process and nothing else; i.e., before any data processing. Yes, that looks pretty solid. Again, was this a C4FM signal or QPSK? (I can't remember which is used for LSM, and what phase and all that.) And was this a strong signal that sounded good? Just thinking aloud: if there were any error biases in the demodulation, they might show up as the median values of these lines being shifted up or down, or as a 'spread' of the samples around the lines.

And speaking of biases and such, a thought just struck me. Are these scanners designed primarily for plain old analog FM use? The reason I ask is that the actual demodulator for TDMA is quite different from the typical demodulator (discriminator) for FM. Most all radios today use what is called a quadrature detector to recover FM; it is a bit different from the older discriminators but works fine for FM and is easy and cheap to implement. All of the receiver ICs since the late 80's that I have seen use them.

For TDMA's QPSK demodulation, an IQ demodulator is used. There is a huge difference between an IQ detector and a quadrature detector. An IQ detector allows the amplitude variations in the received QPSK waveform to be part of the demodulation process. This is important, as the narrow shape filtering applied to the transmitted QPSK signal puts amplitude variations on the signal that have to be preserved in the detection process. (More below.)

A quadrature detector has a hard limiter before the final detection process that strips off all the amplitude variations. The only time it responds to amplitude variations is when the signal is weak, and then it has an undesired characteristic of squaring the amplitude variations; i.e., it is not linear when the RF/IF signal level falls below the hard limiter level. (That is why, when an analog FM receiver's signal gets weak, the noise popping is so much more severe than for an AM radio signal.)

I think you can recover C4FM fairly well with a quadrature detector, as C4FM has a constant amplitude (as I presently understand it). But receiving QPSK with a quadrature detector will be a compromise at best, and I would expect it to really fall apart in weak signal conditions. So, does anyone have a link to a schematic for one of these scanner receivers, so we can maybe see if it has a quadrature detector?

And if you want to see what a real IQ constellation looks like for a QPSK signal with all its important amplitude variations, and how it affects demodulation, look at this link and read on below:
http://www.aeroflex.com/ats/products/prodfiles/appnotes/548/lsm.pdf

In the bottom left corner of page 3 is what a real IQ (constellation) diagram looks like for QPSK; this is a clean QPSK signal, BTW, not a weak, noisy one. The 8 sharp points on a constant-diameter circle (the unit circle) are the ideal phase-amplitude points. See all the lines inside of, and loops outside of, the unit circle on which the phase points lie? Those are the actual transitions of the instantaneous signal phase and amplitude between one QPSK point and the next. In ideal conditions, your symbol sampling of the signal takes place precisely at the moment that the instantaneous signal crosses through one of those sharp phase points. And if you get a weak signal, the noise causes the instantaneous signal to 'miss' the ideal phase-amplitude point by some amount; the result is that the sharp points blur, get fuzzy and spread out, and you get detection errors if the instantaneous signal misses an ideal point by too much.

If you don't faithfully preserve the instantaneous phase-amplitude 'meanderings' of the QPSK signal in the detection process, then the instantaneous signal will consistently miss the ideal points, even with a good signal. A quadrature detector's limiter will strip off all the 'loops' of signal outside of the unit circle, and it will distort the 'crisscross' lines inside of the unit circle. So, no matter how you sample it, you will always have some detection errors, as the instantaneous signal will consistently miss the ideal points.
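The limiter's effect is easy to show numerically: project a shape-filtered QPSK sample back onto the unit circle (which is what a hard limiter does) and the amplitude information distinguishing the 'loops' from the 'crisscross' lines is gone. A toy illustration (the sample values are made up for the demo):

```python
import numpy as np

def hard_limit(iq):
    """What a quadrature detector's limiter does: keep the instantaneous
    phase, force the amplitude to a constant (the unit circle)."""
    return iq / np.abs(iq)

# Two instants of a shape-filtered QPSK waveform between ideal points:
# one 'loop' outside the unit circle, one 'crisscross' line inside it
samples = np.array([1.3 * np.exp(1j * np.pi / 4),   # amplitude 1.3
                    0.4 * np.exp(1j * np.pi / 4)])  # amplitude 0.4
limited = hard_limit(samples)
print(np.round(np.abs(samples), 2))  # [1.3 0.4] -- information-bearing
print(np.round(np.abs(limited), 2))  # [1.  1. ] -- stripped by the limiter
print(np.allclose(np.angle(samples), np.angle(limited)))  # True
```

Phase survives the limiter; amplitude does not. An IQ detector keeps both, which is why it handles shape-filtered QPSK and a quadrature detector fundamentally cannot.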

And the above link is good for showing the signal's shape filtering on the same page. (I called it a window filter in the last post, which is the wrong term.)

Well, that is one other thought on this topic, for what it may be worth; I hope the explanation is clear enough to follow. Again, if anyone has a link to the schematic for this scanner, I'll try to look at it and decipher the demodulator type. Regards, Mark B.

PS: The paragraph on "Why LSM is utilized" in this Aeroflex app note is mostly marketing bull-cr*p, especially the claim that the amplitude variation in QPSK helps in-building penetration. I run a premier engineering biz in the distributed antenna system market (and spent extensive R&D time in all of this), and that is pure fiction! Where is the 'rolling in uproarious laughter' emoticon!? (Sorry, but I tend to react to this fictional marketing junk, as it inevitably takes hold, and people who know the truth get looked at askance when they try to tell it.)
 

RoninJoliet

Member
Premium Subscriber
Joined
Jan 14, 2003
Messages
3,392
Location
ILL
I think you fellows should go to work for one of the "scanner" companies; maybe they would learn something, although that "Upman" is pretty smart... Thanks again for the great info...
 

beachmark

Member
Joined
Jan 30, 2013
Messages
47
Location
Afton, VA
You're welcome, Ronin. As you can tell, I am still digging into P25 and this issue. BTW, I use the word 'detector' as a general term and interchange it with 'discriminator', so I hope that is not confusing. And I admittedly am still fresh to this issue and am still sorting out the different P25 Phase II TDMA scanner implementations. I am still sorting out in my mind which receiver implementations are suffering from which problems, so that is the reason for the 'possible problem' write-ups moving all over the map!

I also hope that the long 'dissertations' help people understand TDMA receive processing a bit better, even at a general level. It ain't plain ole' FM anymore!

At this point, I want to focus on the actual detectors (discriminators). I have read up on one post-discriminator signal processor, and any such implementation that uses a 'discriminator tap' in an originally-FM receiver to feed the detector (discriminator) output into a converter box to a PC will be stuck with whatever type of detector is in the receiver. A quadrature detector will never be very good for detection of QPSK in TDMA. (And multipath and simulcast reception may affect a quadrature detector in a bad way for QPSK; that's just an idea thrown out there with zero thought or analysis!)

And any receiver shape (matched) filter to match up with the transmitted signal's shape filter will be impossible with that type of 'discriminator tap' setup; the matched filter has to go before the detector. So these setups are 99.9% likely to never match the weak-signal performance of a modern TDMA receiver, or of a good software defined receiver that samples the IF or baseband signals and processes them.
 

xmo

Member
Joined
Aug 13, 2009
Messages
383
Yes, total B.S.

What it says is:

"Why LSM is Utilized
Linear Simulcast Modulation is utilized mostly in dense metropolitan areas. The amplitude component of the PSK modulation allows for better in-building penetration of the signal. In addition, due to the use of highly linear amplifiers, the range of the signal is significantly extended, sometimes up to twice the distance."

--------------
The real reason?

Most public safety systems need to cover a larger geography than is possible with a single site. If the desired coverage is obtained through the use of different frequencies at multiple sites, this presents complications - particularly with obtaining licensing for the required number of frequencies. Simulcast is used in these systems so that a relatively small number of channels can cover the required area.

Once a simulcast solution is chosen, the number of sites must be determined - first by their RF coverage but also by their physical spacing from each other. The different-length signal paths from each site to a user's radio cause two or more signals to arrive at slightly different times.

The differences in these signals cause the demodulator to produce errors. This is known as inter-symbol interference or delay spread. It is directly affected by the delay time difference between two signals, which is a function of the site separation, and by the width of the symbols themselves. For the maximum theoretical site separation, the modulation would look like square waves.

Unfortunately, the bandwidth of a signal is also related to the shape of the modulation, so the transitions from one state to another have to be less abrupt, which makes the effective width of the symbols smaller as more time is spent in transition. The smaller the width of the symbols [actually, the time that the carrier spends at each symbol state], the closer the sites must be to each other before inter-symbol interference becomes unacceptable.

If the number of sites required to resolve delay spread is greater than the number necessary to provide adequate RF coverage then the cost of the system is driven up unnecessarily - strictly by the modulation format.

Hence the development of LSM whose sole reason for existence is to permit wider symbols and thus fewer sites and lower cost for simulcast systems.
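The geometry behind this can be put in rough numbers: a path-length difference between two sites produces a differential delay of distance over the speed of light, and what matters is that delay as a fraction of the symbol time. A sketch (the 6 km figure is just an example; P25 FDMA runs at 4800 baud, i.e. ~208 us symbols):

```python
C_KM_PER_US = 0.2998  # speed of light in km per microsecond

def delay_spread_us(path_diff_km):
    """Differential arrival delay for a given path-length difference."""
    return path_diff_km / C_KM_PER_US

def symbol_fraction(path_diff_km, symbol_us):
    """Fraction of one symbol time consumed by that delay spread."""
    return delay_spread_us(path_diff_km) / symbol_us

# A 6 km difference in path length -> ~20 us of delay spread
print(round(delay_spread_us(6.0), 1))          # 20.0
# At 4800 baud (~208 us symbols) that is ~10% of a symbol; halve the
# symbol time and the same spread eats ~19% of a symbol
print(round(symbol_fraction(6.0, 208.3), 2))   # 0.1
print(round(symbol_fraction(6.0, 104.2), 2))   # 0.19
```

This is exactly the trade-off described above: longer symbols tolerate a larger site-to-site delay spread, so fewer and more widely spaced sites are workable.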
 

beachmark

Member
Joined
Jan 30, 2013
Messages
47
Location
Afton, VA
I may have spoken too soon. I listened last night and it was pretty much unintelligible - lots of garbled voices. Today it's perfectly fine.

Not sure why.
I looked a bit at what you are using; I sure don't know it all, but such a simple and cheap receiving SDR device may be a big question mark in terms of good front end characteristics or anything to mitigate strong in-band or out-of-band signals. And weak signal performance would be questionable. I would expect it to have true IQ demodulation, however. You don't have a spectrum analyzer, do you?
 

beachmark

Member
Joined
Jan 30, 2013
Messages
47
Location
Afton, VA
Yes, total B.S.

......

I had to read this 2 or 3 times, but after digesting it all, what you say is spot on. I did not catch this as the real explanation of why LSM has better site range, but it is true: with the modulation symbol times lengthened to a bit over 200 usec, the coverage overlap area wherein intersymbol interference is tolerable is increased over that when ~100 usec symbol lengths were used. And the time window of best detection of the shape-filtered symbol does narrow as you say, so all that hangs together.

Thanks for the new view of an old problem. I was told by a consultant last week that the max allowed intersymbol delay due to site overlaps for the Wash DC P25 system is 70 usec; that makes sense, as I assume they are going past LSM to Phase 2, with the 160 usec CCH symbols.

(And the funny part is I had to explain why to him; as a consultant, he only has to make his customers think he is smarter than them.....!! LOL)
 
Last edited:

rak313

Member
Joined
Jan 13, 2013
Messages
40
Location
Syracuse ny
That is interesting looking; looks like the 4 possible symbol bit pairs. This is clearly out of the demodulation process and nothing else; i.e., before any data processing. Yes, that looks pretty solid. Again, was this a C4FM signal or QPSK? (I can't remember which is used for LSM, and what phase and all that.)

......


Attached is a pic of the processing chain for the SDR version. Most of the blocks are either input constants or displays (scopes, FFT waterfalls, etc.). Notice there is a block that detects a DC offset and asks the tuner to change frequency (the message callback from the OP25 decoder block).

The signal was output from the OP25 decode block, but the only "processing" was choosing the correct sample time. The input to the OP25 decode block is sampled at 8x the symbol rate (4800). And the decode block runs a timing loop to determine which sample to use. (I have not figured out the details yet, but looking at the code, it looks like they use the error between the actual symbol value and the ideal symbol value to update the sample time loop.)
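The decision-directed timing update described there (using the error between the actual sample and the nearest ideal symbol value) resembles a Mueller and Müller timing-error detector. I haven't verified that this is exactly what OP25's block does, but the basic update looks roughly like this (the samples and loop gain below are illustrative, not from the OP25 source):

```python
def mm_timing_error(prev_sample, prev_decision, sample, decision):
    """Mueller & Muller timing-error detector (decision-directed):
    e[n] = a_hat[n-1] * x[n] - a_hat[n] * x[n-1].
    The sign and size of e say how far off the sampling instant is."""
    return prev_decision * sample - decision * prev_sample

GAIN = 0.01  # small loop gain (illustrative value)

def update_phase(phase, err):
    """Nudge the fractional sampling phase against the timing error."""
    return phase - GAIN * err

# Two consecutive samples around the same decision level: a nonzero
# error nudges the sampling phase back toward the symbol center
e = mm_timing_error(prev_sample=0.9, prev_decision=1.0,
                    sample=1.1, decision=1.0)
print(round(e, 2))  # 0.2
```

With 8x oversampling, the loop only has to steer the phase to the best of eight candidate samples per symbol, which matches the "choosing the correct sample time" description.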

LSM is Motorola speak for Linear Simulcast Modulation (TM). It is not constant envelope like C4FM. It uses a different bit shaping filter that has a more wide-open eye pattern than DQPSK.

It's hard to say what is in one of those scanners. They cost a lot more than their analog counterparts, so you would think they would have proper RF sections. But I kinda suspect they might just be trying to evolve an old design.

Given what one can do with the E4000 ( http://www.google.com/url?sa=t&rct=...=pCHjp_oaQkX3zoXnVd8esw&bvm=bv.41867550,d.dmQ )

The E4000, a pair of audio A/Ds (e.g., 100 kHz sample rate), and a DSP chip should be all that is necessary (along with a micro to control the display, keyboard, etc.) to build a decent P25 scanner.

But that is still a major undertaking for a design team, for what is likely not a very big market.

Rick
 

Attachments

  • Screenshot from 2013-02-03 14:26:50.jpg
    Screenshot from 2013-02-03 14:26:50.jpg
    76.6 KB · Views: 1,231

rak313

Member
Joined
Jan 13, 2013
Messages
40
Location
Syracuse ny
I looked a bit at what you are using; I sure don't know it all, but such a simple and cheap SDR receiving device raises big questions about front-end quality and whether there is anything to mitigate strong in-band or out-of-band signals. Weak-signal performance would be questionable too. I would expect it to have true I/Q demodulation, however. You don't have a spectrum analyzer, do you?

You have hit on my major point: I don't believe the problem requires a receiver that can reject large out-of-band signals or has a 1 dB noise figure.

These P25 systems are designed to work in real-world urban/industrial environments, where the noise floor is 10-20 dB higher than thermal (random) noise. Overload by a strong signal can be a problem (my consumer scanners will overload if I key a personal FRS radio nearby).

The $20 device I'm using does not have any issue receiving these signals; the SNR is around 20-30 dB. While I don't have a spectrum analyzer, the SDR provides one (via a 1024-point FFT), and of course there are other signals around, but none are overloading the front end. If it were overloaded, it would be obvious.
 

Attachments

  • Screenshot from 2013-02-03 15:39:29.jpg
    Screenshot from 2013-02-03 15:39:29.jpg
    39.5 KB · Views: 1,190

beachmark

Member
Joined
Jan 30, 2013
Messages
47
Location
Afton, VA
That looks really intriguing, Rick. Thanks very much for sharing. I'll take a long look at this.

Yes, with that SNR you should be golden. Somewhere in the range of 8-10 dB SNR, with no fading, should be the minimum. I don't know what kind of front end these things have, or what the effective noise figure of the SA function is, so you may have a better SNR than is showing. It looks like it has something in front, given the roll-off at the lower end of the band, but that's just pure guessing from the SA plot.

But don't be fooled into thinking that strong out-of-band signals can't affect it. Any receiver, and particularly a DSP-based one, needs some degree of front-end protection. Scanners may be less susceptible since they stand alone and aren't really designed to be near strong sources, so a lower threshold can be tolerated. (We deal with indoor DASs, where you have 2-3 W portables transmitting just a few feet from the indoor antennas, and we can't afford compression/blocking.) Without any protection, an early stage can block/compress, or the A/D input can be over-ranged. And any decent receive-only device should easily achieve a noise figure of a few dB; it gets a bit higher with a very wideband unit, but 6-8 dB should be easy even then.

As an FYI for LMR, including P25 signals: system sensitivities and dynamic ranges are computed down to the RX noise figure; seeing a -115 dBm minimum in the link budgets is normal. Yes, the real urban noise floor will be some dB higher (we never use anything as high as 10-20 dB, though), but when you do the system link calcs, you go down to the minimum. In a rural area that is what you will realize, and both urban and rural use the same equipment. The site noise figures on the receiver side are usually around 5 dB, and subscriber units can be a dB or two above that in the field. But TIA standards actually set the sensitivity quals around -116 dBm for both. A scanner should be comparable.
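Those numbers hang together in a back-of-the-envelope sensitivity calculation. Here is a quick Python check; the 12 dB required SINR is my assumption for usable digital voice, not a TIA figure, but the result lands right around that -116 dBm qual point:

```python
import math

thermal_dbm_hz = -174          # thermal noise density at ~290 K, dBm/Hz
nf_db = 5                      # typical site receiver noise figure, dB
bw_hz = 12_500                 # P25 channel bandwidth, Hz
required_sinr_db = 12          # assumed SINR for usable digital voice (my guess)

# Receiver noise floor in the channel, then sensitivity on top of it.
noise_floor_dbm = thermal_dbm_hz + nf_db + 10 * math.log10(bw_hz)
sensitivity_dbm = noise_floor_dbm + required_sinr_db

print(f"noise floor: {noise_floor_dbm:.1f} dBm")   # -128.0 dBm
print(f"sensitivity: {sensitivity_dbm:.1f} dBm")   # -116.0 dBm
```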

Not sure how this helps the original question! If anyone has some digital scanner schematics, how 'bout some links?
 

beachmark

Member
Joined
Jan 30, 2013
Messages
47
Location
Afton, VA
BTW Rick, I looked a bit at this SDR chip, the E4000. It seems to have all the right stuff. Some notes that would affect it for LMR scanner use:
- The front-end RF filter is quite broad; even a mildly strong TV channel would knock the gain down, and you could lose a weak, desired UHF signal.
- The narrowest IF/baseband filters are a few MHz wide. The sensitivity for 12.5 kHz-wide signals would be compromised by 20+ dB without further DSP filtering. 20 dB more noise and other undesired signals burns up 3-4 bits in the A/D; that reduces dynamic range in the filtering and detection process by that much. Could be an issue with low-bit A/Ds; might not be if you have 16 bits.
Looks good for a TV receiver... needs some serious filtering for LMR, just IMO.
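That bits estimate is easy to sanity-check. A small Python sketch; the 2 MHz filter width is an assumption on my part, since the point above just says "a few MHz":

```python
import math

assumed_if_bw = 2_000_000   # "a few MHz": assume ~2 MHz narrowest filter
chan_bw = 12_500            # 12.5 kHz LMR channel

# Extra noise admitted by the wide filter, and the A/D bits it burns
# (one bit of dynamic range is worth about 6.02 dB).
extra_noise_db = 10 * math.log10(assumed_if_bw / chan_bw)
bits_burned = extra_noise_db / 6.02

print(f"{extra_noise_db:.1f} dB extra noise, ~{bits_burned:.1f} bits")
```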
 

rak313

Member
Joined
Jan 13, 2013
Messages
40
Location
Syracuse ny
BTW Rick, I looked a bit at this SDR chip, the E4000. It seems to have all the right stuff. Some notes that would affect it for LMR scanner use:
- The front-end RF filter is quite broad; even a mildly strong TV channel would knock the gain down, and you could lose a weak, desired UHF signal.
- The narrowest IF/baseband filters are a few MHz wide. The sensitivity for 12.5 kHz-wide signals would be compromised by 20+ dB without further DSP filtering. 20 dB more noise and other undesired signals burns up 3-4 bits in the A/D; that reduces dynamic range in the filtering and detection process by that much. Could be an issue with low-bit A/Ds; might not be if you have 16 bits.
Looks good for a TV receiver... needs some serious filtering for LMR, just IMO.

I agree that 8 bits in the A/D is a weak spot, and the quality of those 8-bit A/Ds may be a question. But at a 1 MHz sample rate there is 25 dB of processing gain by the time one lowers the bandwidth to the 3 kHz BW of the FM discriminator output. So with an ideal pair of 8-bit A/Ds, that lets one have an interfering signal with 50 dB more power than the desired signal and still have 25 dB SNR.
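That arithmetic can be checked in a few lines of Python. The quantizer SNR formula is the textbook ideal for a full-scale sine, not a measurement of any real A/D:

```python
import math

fs = 1_000_000   # complex sample rate out of the tuner, Hz
bw = 3_000       # discriminator output bandwidth, Hz
bits = 8

# Filtering/decimating from fs down to bw leaves only bw/fs of the
# (white) quantization noise in band: the processing gain.
processing_gain_db = 10 * math.log10(fs / bw)        # ~25.2 dB

# Textbook SNR of an ideal n-bit quantizer for a full-scale sine.
quant_snr_db = 6.02 * bits + 1.76                    # ~49.9 dB

# Desired signal 50 dB below a full-scale interferer:
in_band_snr_db = quant_snr_db + processing_gain_db - 50

print(f"processing gain {processing_gain_db:.1f} dB, "
      f"in-band SNR {in_band_snr_db:.1f} dB")        # ~25 dB SNR remains
```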

For a scanner, one could use the E4000, an analog LPF, and a <$5 consumer stereo 24-bit (18+ effective bits) A/D sampling at 48 kHz or 96 kHz, with over 100 dB of dynamic range.

I have been looking at the code in the P25 decoder. I said something that was not correct in my explanation of the sample timing. The code uses the last 8 samples (and a filter) to calculate the signal at the desired sample time. It then determines the symbol value and the difference between the signal and the ideal symbol value. It filters this difference to modify the future sample time. The filter resolves 128 fractions of one sample, effectively 128x oversampling.
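As a rough illustration of that kind of loop, here is a much-simplified decision-directed timing recovery sketch in Python. This is NOT the OP25 code: linear interpolation stands in for the 8-tap filter bank, and I use a Mueller-and-Muller style error term rather than OP25's exact detector, but it keeps the 1/128-sample timing resolution described above:

```python
import numpy as np

def mm_timing_recovery(samples, sps=8, nfrac=128, gain=0.05):
    """Toy decision-directed symbol timing loop (not the OP25 code).

    samples: real 4-level baseband at sps samples per symbol.
    Linear interpolation stands in for an 8-tap filter bank;
    the timing phase is quantized to 1/nfrac of a sample.
    """
    levels = np.array([-3.0, -1.0, 1.0, 3.0])   # ideal 4-level symbols
    mu = 0.0                                    # fractional timing offset
    x_prev, d_prev = 0.0, 0.0
    out = []
    i = 0
    while i + int(mu) + 1 < len(samples) - sps:
        k = int(mu)
        frac = mu - k
        # Interpolate the signal value at the current timing point.
        x = samples[i + k] * (1 - frac) + samples[i + k + 1] * frac
        d = levels[np.argmin(np.abs(levels - x))]   # nearest ideal symbol
        # Mueller-and-Muller style timing error from decisions vs. samples.
        err = d_prev * x - d * x_prev
        # Nudge the timing phase, quantized to 1/nfrac of a sample.
        mu = round((mu + gain * err) * nfrac) / nfrac
        x_prev, d_prev = x, d
        out.append(float(d))
        i += sps
    return out
```

Feeding it an ideally timed 4-level signal, the error term stays zero and the decided symbols match the input.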
 

beachmark

Member
Joined
Jan 30, 2013
Messages
47
Location
Afton, VA
Thanks for the informative note on the symbol sampling....yes, that is a lot of sampling!

I'll have to read up on the dynamic-range increase from oversampling... not sure I see/buy all of that. This is not like CDMA-coded symbols, where you get that type of compression gain; you are just oversampling the same info over and over, not gaining any real new info to help pull the signals out of the noise or suppress other strong signals, from what I can see. (But my DSP experience is getting pretty outdated!) BTW, be aware that the detection BW for simple signals like this is the BW of the actual RF signal, not the output data-stream BW; the framing bits and error-correction bits and such are just an S/N price to be paid to gain other benefits. The signal BW is the 12.5 kHz. And you use up a couple of bits in the lower end of the A/D range to sample the noise; otherwise you distort the noise statistics and raise the effective system noise figure. It's not like clean audio, where noise is relatively non-existent in the signal.

And you other guys......bring on some scanner schematics! I'd still like to figure out the original question!
 

rak313

Member
Joined
Jan 13, 2013
Messages
40
Location
Syracuse ny
Thanks for the informative note on the symbol sampling....yes, that is a lot of sampling!

.....not sure I see/buy all of that. ...you are just oversampling the same info over and over and not gaining any real new info to help pull the signals out of the noise or suppress strong other signals from what I can see......

... And you use up a couple of bits in the lower end of the A/D range to sample the noise; otherwise you distort the noise statistics .....

I can understand you doubting this; so did I when I first came across it. But your assertion that you are just getting the same info over and over is not correct. As long as there is (unbiased) noise in the system of a few A/D bits, you will get new information with each sample.

And you are right: you need to make sure that the LSBs of the A/D are into the noise. Otherwise the noise contributed by the LSB of the A/D will not be spread uniformly across all frequencies; the converter can instead act like a non-linear device and create spurious signals.

Attached is an example I made using Matlab. It is not a very practical example, but it illustrates how one can get information from oversampling.

I created two complex signals sampled at 1,048,576 Hz: sig1 has amplitude 120 and frequency 4096 Hz; sig2 has amplitude 0.2 and frequency 1024 Hz. I also created a Gaussian noise source with an RMS amplitude of 4.

I add these three items (sig) and truncate (throw away the fractional part), with the resulting signal representing an 8-bit signal (values between +/-127). At this point one would think that sig2 would be lost, as it is less than 1 LSB.

Note that 120 is about 55 dB larger than 0.2.

I took 65k samples and processed them in an FFT. The results were scaled such that an amplitude of 128 would be 0 dB. Notice in the top figure that you see both signals at the correct relative amplitude, and way above the noise.

The second figure shows the Matlab window where the first 10 samples of the signal (sig), both real and imaginary components, are printed to illustrate that the signal is made up of only integers from -128 to +127, i.e., 8 bits. The third figure is the Matlab source for this example.

Now, it is unlikely that one could get the full 48 dB of gain (from 65k samples) out of any real 8-bit A/D before some spurious signal showed up. An A/D will spec SFDR (spurious-free dynamic range), and that tells you how much gain you can expect to get.
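For anyone without Matlab, here is my attempt at the same experiment in Python/NumPy, with the numbers from the description above. I am interpreting the "RMS amplitude of 4" as the RMS of the complex magnitude, and clipping to the 8-bit range at the rare peaks that exceed +/-127:

```python
import numpy as np

fs = 1_048_576                 # sample rate, Hz
n = 65_536                     # FFT length
t = np.arange(n) / fs

# Strong tone, plus a tone ~55 dB weaker (under 1 LSB of an 8-bit A/D).
sig1 = 120.0 * np.exp(2j * np.pi * 4096 * t)
sig2 = 0.2 * np.exp(2j * np.pi * 1024 * t)

# Gaussian noise with complex RMS amplitude 4 (my reading of the setup).
rng = np.random.default_rng(0)
noise = 4.0 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Truncate to integers and clip to the 8-bit range -128..+127.
sig = sig1 + sig2 + noise
q = (np.clip(np.trunc(sig.real), -128, 127)
     + 1j * np.clip(np.trunc(sig.imag), -128, 127))

# FFT scaled so a full-scale tone (amplitude 128) reads 0 dB.
power_db = 20 * np.log10(np.abs(np.fft.fft(q)) / (128 * n) + 1e-12)

bin1 = 4096 * n // fs          # bin of the strong tone
bin2 = 1024 * n // fs          # bin of the sub-LSB tone
print(power_db[bin1], power_db[bin2])
```

Both tones show up in the spectrum at roughly the correct relative amplitudes, with the 0.2-amplitude tone well above the noise floor despite being below 1 LSB.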

As an aside: most 24-bit (18-19 bits effective) audio A/D converters are 1-bit devices sampling in the 1-5 MHz range, followed by complicated noise-shaping feedback and digital filters. They achieve 18-bit performance not only by oversampling, but by making sure the spectrum of the noise introduced by sampling only 1 bit is out of the desired band. There are seismic versions good to 22 bits (with a very low bandwidth).
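A toy first-order sigma-delta modulator shows that idea in a few lines (real audio converters use higher-order loops and careful analog design, so this is only a sketch): a 1-bit output stream whose running average tracks the input, with the quantization noise pushed up in frequency where the decimation filter would remove it.

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order sigma-delta modulator (toy sketch, not a real part).

    Returns a +/-1 bitstream whose running average tracks the input;
    the quantization error is shaped toward high frequencies.
    """
    integ = 0.0        # the loop integrator
    y = -1.0           # previous 1-bit output, fed back
    out = np.empty(len(x))
    for i, v in enumerate(x):
        integ += v - y                  # integrate input minus fed-back bit
        y = 1.0 if integ >= 0 else -1.0 # 1-bit quantizer
        out[i] = y
    return out

bits = sigma_delta_1bit(np.full(10_000, 0.5))
print(bits[:8], bits.mean())   # the mean tracks the 0.5 DC input
```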
 

Attachments

  • fig1.png
    fig1.png
    64.7 KB · Views: 1,144
  • fig2.png
    fig2.png
    67.2 KB · Views: 1,161
  • fig3.jpg
    fig3.jpg
    54.2 KB · Views: 1,181

rak313

Member
Joined
Jan 13, 2013
Messages
40
Location
Syracuse ny
This is an eye pattern from the control channel. It looks pretty open to me.
 

Attachments

  • Screenshot from 2013-02-05 17:46:27.jpg
    Screenshot from 2013-02-05 17:46:27.jpg
    51.3 KB · Views: 1,433