AirSpy comparisons?

Status
Not open for further replies.

Voyager

Member
Joined
Nov 12, 2002
Messages
12,059
I've always been impressed with the CR-1's low noise floor and sensitivity, and it sounds like the Airspy is doing pretty well in those areas as well.

Even the $15 RTLs have great sensitivity. I can pick up signals any other receiver can pick up using the same antenna. Now selectivity is another matter...
 

Voyager

Member
Joined
Nov 12, 2002
Messages
12,059
I missed that it only goes down to 24 MHz; that's kind of important to me... I do have an up-converter, though.

There have recently been some posts about progress in lowering the "24 MHz floor" on several devices. Maybe the AS will be able to lower its minimum frequency as well. (Spyverter aside)
 

prog

Member
Premium Subscriber
Joined
Nov 18, 2014
Messages
73
But the CR-1a uses the same Mirics tuner that you have been so disparaging of. How on earth can it be of “excellent quality” and look “very promising” if it uses a tuner that you have said performs so poorly? At least the Mirics tuner works down to 100 kHz and, from other accounts, is capable of giving excellent performance.

I think you missed the entire point. The CR-1a does NOT use the RSP chip. We all use silicon tuners more or less successfully. Some opted for cheap ADCs, some preferred good ADCs with a significant amount of DSP behind them for filtering. It's worth noting that the CR-1a uses an excellent 14-bit ADC from Linear Technology and a real DSP processor from Analog Devices (a Blackfin model). This is more than adequate to achieve the same processing quality as is done on a PC and cannot be compared to the RSP.
A lot of people are still not used to SDRs and have a very vague idea of what to look for to check the actual performance; especially those who are used to the much simpler analog heterodyne architectures. The physics are the same though, just a bit more Complex ;-)

Merry Christmas to all!
 

SCPD

QRT
Joined
Feb 24, 2001
Messages
0
Location
Virginia
A lot of people are still not used to SDRs and have a very vague idea of what to look for to check the actual performance

I have a pretty clear idea what I'm looking for but still haven't found it.
Wide spaced blocking dynamic range.
What about phase noise?

especially those who are used to the much simpler analog heterodyne architectures.
Merry Christmas to all!

What's inside the R820T chip? Isn't that some sort of heterodyne system I see there? :)

Merry Xmas!
 

prog

Member
Premium Subscriber
Joined
Nov 18, 2014
Messages
73
I have a pretty clear idea what I'm looking for but still haven't found it.
Wide spaced blocking dynamic range.
What about phase noise?

Concerning the phase noise, the 57-minute video from Leif demonstrates how other SDRs are limited by their own phase noise. You must not be looking hard enough. On the other hand, Leif is preparing another video where he compares the wide band dynamics of many SDRs in the FM broadcast band. As explained before, SDRs perform less optimally in a crowded environment compared to single channel receivers that have sharp filters around the *single* signal of interest. Yet, this same filtering does not allow them to see what's going on around them. Unfortunately, with today's technology even very expensive SDRs like the latest USRPs cannot offer both. I think we are all looking for the same thing here!
The good thing is that even at that game Airspy performs better than the other players in the same price range, thanks to its better dynamic range.

What's inside the R820T chip? Isn't that some sort of heterodyne system I see there? :)
Merry Xmas!

The R820T and most silicon tuners are quadrature direct conversion receivers. The R820T also has an auto-calibrating phasing IF filter that cancels out one of the images to output a Low-IF. For example, if you assume the positive image configuration (USB), when you tune the LO to the lower edge of a signal that is 1 MHz wide, you get an IF signal between DC and 1 MHz. At that point you can sample the IF at 2 MSPS with a single ADC, then do another conversion to IQ in DSP by a simple phase rotation and some filtering. The DC leakage is easier to eliminate when it's located off to the side, and of course you won't have any IQ (phase or amplitude) imbalance because it is calculated rather than sensed.
We're "sort of" very far from the conventional High-IF model with its many conversion stages and their associated LOs and SAW/crystal filters. I hope this clarifies things for you.

Merry Christmas!
 

Flatliner

Member
Joined
Aug 10, 2014
Messages
391
Location
UK
But the CR-1a uses the same Mirics tuner that you have been so disparaging of. How on earth can it be of “excellent quality” and look “very promising” if it uses a tuner that you have said performs so poorly?

The answer, no doubt, depends on whether one has a vested interest or not.
 

SCPD

QRT
Joined
Feb 24, 2001
Messages
0
Location
Virginia
Concerning the phase noise, the 57-minute video from Leif demonstrates how other SDRs are limited by their own phase noise. You must not be looking hard enough. On the other hand, Leif is preparing another video where he compares the wide band dynamics of many SDRs in the FM broadcast band.

The 3 other SDRs in his test are limited *more* by phase noise, and others outside the test may well be limited less :)

Looking forward to Leif's further testing; Airspy is in good hands in Sweden.

I believe I will be checking the Rafael in another DS SDR HF hybrid design soon, with added preselection for the obvious critical cross-overs AFAIK.

73
Paul
PD0PSB
 

Voyager

Member
Joined
Nov 12, 2002
Messages
12,059
On the other hand, Leif is preparing another video where he compares the wide band dynamics of many SDRs in the FM broadcast band.

I would just as soon see it tested in the VHF and/or UHF bands. How many people really listen to weak signals in the FMB band? I know some do, but many more want to use it in the 146-174 and 406-512 MHz bands.
 

michael77

Member
Joined
Dec 11, 2014
Messages
5
I think you missed the entire point. The CR-1a does NOT use the RSP chip. We all use silicon tuners more or less successfully. Some opted for cheap ADCs, some preferred good ADCs with a significant amount of DSP behind them for filtering. It's worth noting that the CR-1a uses an excellent 14-bit ADC from Linear Technology and a real DSP processor from Analog Devices (a Blackfin model). This is more than adequate to achieve the same processing quality as is done on a PC and cannot be compared to the RSP.
A lot of people are still not used to SDRs and have a very vague idea of what to look for to check the actual performance; especially those who are used to the much simpler analog heterodyne architectures. The physics are the same though, just a bit more Complex ;-)

Merry Christmas to all!

Once again, it seems to be you that is actually missing the point. Several points, in fact. Looking at the RSP from SDRplay (I assume that is what you are referring to), there is no such thing as “the RSP chip”. The RSP appears to use the MSi3101, which is the MSi001 (same as the CR-1a) and the MSi2500. Looking at the datasheet for the MSi2500 you can see that it contains dual 12 bit ADCs which give 10.5 ENOB at their maximum sample rate of 10 MS/s for both I and Q. There is also a programmable decimation filter contained within a custom DSP, which gives very high levels of selectivity. Your comments on the DSP are utter nonsense. The BlackFin in the CR-1a is simply performing functions that are done on the MSi2500 AND the PC together. The BlackFin does not in any way enhance the dynamic range of the signal presented by the ADC. If the receiver is already blocked by interferers at this point, it is blocked!

The major difference between the approach from Mirics and what you have implemented in the Airspy is that your ADC is exposed to the full range of signals within the bandwidth of the Rafael tuner which cannot be set at less than 6 MHz. The Mirics approach uses high dynamic range active R-C filters which can be used to narrow the bandwidth down as low as 200 kHz (see earlier post from Yagi23), which significantly helps protect the ADCs from overload if you have a lot of strong interferers (which you commonly can have at HF and VHF). Your approach seems to be to set the tuner gain such that the tuner noise makes a negligible contribution to the overall noise at the output, thus allowing the ADC to define the dynamic range of the receiver (excluding the effect of reciprocal mixing, which the LO architecture will make more prevalent at higher frequencies). I.e. the noise floor as measured at the ADC output is dominated by the quantisation noise of the ADC. This is done to give the ADCs the maximum signal headroom and thus maximise protection from overload from interferers. Decimation now allows a trade-off of SNR against bandwidth. That is a GOOD APPROACH, if your tuner is incapable of giving you meaningful selectivity which appears to be the case with the Rafael tuner. You have to keep the gain down in the tuner because if the noise floor were dominated by the tuner noise, no amount of decimation would improve matters. Decimation can only improve the noise floor to the point where the noise becomes dominated by the noise from the tuner.
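For readers following along, the SNR-versus-bandwidth trade both posters describe is easy to put numbers on. This is a back-of-the-envelope sketch of my own (the 10.5 ENOB figure is the one quoted above; everything else is the standard ideal-ADC approximation, not a measurement of either product):

```python
import math

def ideal_adc_snr_db(enob: float) -> float:
    """Ideal SNR of an ADC over its full Nyquist bandwidth."""
    return 6.02 * enob + 1.76

def decimation_gain_db(factor: int) -> float:
    """Processing gain when quantisation noise is white and the decimator filters properly."""
    return 10 * math.log10(factor)

full_bw_snr = ideal_adc_snr_db(10.5)    # ~65 dB over the full sample rate
for d in (2, 8, 32):
    print(f"decimate by {d:>2}: ~{full_bw_snr + decimation_gain_db(d):.0f} dB SNR in the narrower band")
```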

I have no problem with what you have done architecturally with the Airspy. In fact, as I said, I think it is a good approach given the constraints of the tuner that you have used. What I do have a problem with though is you trashing other products that are capable of giving just as good performance or possibly even better if used to their full potential, and peddling myths about your own product to help your sales numbers. I have seen comments from you in other threads where you have claimed 85-90 dB of spurious free instantaneous dynamic range for the Airspy with a 9 MHz bandwidth and stated that the Mirics tuner gives poor overload performance and the “RSP chip” gives only 7 ENOB in the ADCs. From what I can see, these comments are deliberately designed to mislead or are the result of a very poor level of technical understanding.
 

Yagi23

Member
Joined
Dec 11, 2014
Messages
27
Concerning the phase noise, the 57-minute video from Leif demonstrates how other SDRs are limited by their own phase noise. You must not be looking hard enough. On the other hand, Leif is preparing another video where he compares the wide band dynamics of many SDRs in the FM broadcast band. As explained before, SDRs perform less optimally in a crowded environment compared to single channel receivers that have sharp filters around the *single* signal of interest. Yet, this same filtering does not allow them to see what's going on around them. Unfortunately, with today's technology even very expensive SDRs like the latest USRPs cannot offer both. I think we are all looking for the same thing here!
The good thing is that even at that game Airspy performs better than the other players in the same price range, thanks to its better dynamic range.



The R820T and most silicon tuners are quadrature direct conversion receivers. The R820T also has an auto-calibrating phasing IF filter that cancels out one of the images to output a Low-IF. For example, if you assume the positive image configuration (USB), when you tune the LO to the lower edge of a signal that is 1 MHz wide, you get an IF signal between DC and 1 MHz. At that point you can sample the IF at 2 MSPS with a single ADC, then do another conversion to IQ in DSP by a simple phase rotation and some filtering. The DC leakage is easier to eliminate when it's located off to the side, and of course you won't have any IQ (phase or amplitude) imbalance because it is calculated rather than sensed.
We're "sort of" very far from the conventional High-IF model with its many conversion stages and their associated LOs and SAW/crystal filters. I hope this clarifies things for you.

Merry Christmas!

The R820T and R820T2 use a complex (I/Q) heterodyne down-conversion and a complex polyphase filter to provide both image rejection and a transformed low-pass to band-pass filter response. This has become a common approach in silicon TV tuners, which were primarily designed to replace 'can' tuners and therefore need to interface to the demodulator at an IF. Image rejection will be finite, and I believe the stated image rejection for the R820T is typically 65 dB. Of course this means that it can be either better or worse than this in practice, but if the IF is 4.57 MHz, at a 9.14 MHz offset one way there WILL be a detectable image response and a risk of degraded blocking performance. As prog says, a complex down conversion to I/Q baseband can then be performed in DSP and in principle, the quadrature amplitude and phase balance can be perfect. DC offsets principally occur as a result of LO-RF leakage (self mixing of the LO) and this does not happen in a DSP based down conversion. Zero IF architectures only suffer from an in-channel image response, which is in practice unlikely to be a problem as all it can do is degrade the ultimate SNR of the wanted signal. 1 degree of quadrature error leads to a 'self image' at around -40 dB. The self image that results from quadrature amplitude and phase errors in ZIF architectures can be corrected in software, but the image response of Low IF architectures such as that used in the R820T/T2 cannot. That is because the polyphase filter has already performed the complex re-combination of the I/Q paths from the first mixer pair to create a 'real IF'.
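As a side note, the "-40 dB for 1 degree" figure drops straight out of the standard image rejection ratio formula for a quadrature downconverter. A quick sketch of the arithmetic (my own, using the textbook formula, nothing specific to the R820T):

```python
import math

def image_rejection_db(gain_error_db: float, phase_error_deg: float) -> float:
    """Textbook image rejection ratio of a quadrature (I/Q) downconverter."""
    g = 10 ** (gain_error_db / 20)
    phi = math.radians(phase_error_deg)
    irr = (1 + 2 * g * math.cos(phi) + g * g) / (1 - 2 * g * math.cos(phi) + g * g)
    return 10 * math.log10(irr)

print(image_rejection_db(0.0, 1.0))   # ~41 dB: the 'self image at around -40 dB' above
print(image_rejection_db(0.1, 1.0))   # add a 0.1 dB gain error as well: ~40 dB
```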

Regarding phase noise, the situation is more complex than you describe. The approach that all of these tuners take is to run the synthesizer with VCOs that provide greater than one octave of tuning range. Successive divide-by-2, -4, -8, -16, -32, etc. stages are used to provide the LO for the various bands, and the necessary LO phase quadrature can be extracted directly from the divider outputs. Because the VCO phase noise is divided down, the SSB phase noise will be reduced by 20*Log(N), where N is the LO divide ratio.

I do not know what the native VCO frequency for the R820T/T2 synthesizer is, but it is likely to be in the range of 2-4 GHz. As such, the LO divide ratio is probably 32 when used at VHF. This means that both the integrated and SSB phase noise of the VCO will be improved by around 30 dB by the LO path dividers. As a consequence, phase noise is unlikely to be a significant factor in determining the blocking performance at lower frequencies such as HF and VHF, but is much more likely to be a problem at UHF, where the LO divide ratio is lower.
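The 20*log10(N) improvement is easy to sanity-check; a short loop of my own (the divide ratio of 32 is, as stated above, only a guess):

```python
import math

# SSB phase noise improvement from dividing the VCO down to the LO frequency
for divide_ratio in (4, 8, 16, 32):
    print(f"divide by {divide_ratio:>2}: {20 * math.log10(divide_ratio):.1f} dB lower phase noise")
# divide by 32 -> ~30 dB, matching the VHF estimate above; at UHF the ratio, and the benefit, shrink
```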

What Leif has done in his measurements is very good work, but unless you understand the architecture of the tuner (i.e. LO architecture, where the image response lies etc) you will never know where to look for likely weaknesses.
 

prog

Member
Premium Subscriber
Joined
Nov 18, 2014
Messages
73
The BlackFin in the CR-1a is simply performing functions that are done on the MSi2500 AND the PC together.

That's exactly the problem I'm pointing out: cheap (low number of FIR taps) filtering, like in the RTL2832 and the MSi2500. Only a limited number of ASICs can do that DSP work properly, and they are very expensive, like the AD936X series. The other way is to do it in an expensive FPGA. The problem is, once aliased inside the 2500, the signal cannot be fixed inside the PC.
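A tiny numpy sketch of why this is so (my own illustration with made-up frequencies, not a model of the MSi2500): once an out-of-band signal folds onto an in-band frequency during decimation, nothing downstream can separate the two.

```python
import numpy as np

fs = 8_000_000                                   # illustrative pre-decimation rate
n = np.arange(1 << 16)
wanted = np.cos(2 * np.pi * 0.3e6 / fs * n)      # in-band signal at 300 kHz
blocker = np.cos(2 * np.pi * 3.7e6 / fs * n)     # strong out-of-band signal at 3.7 MHz

# Decimate by 4 with no (or too short an) anti-alias filter, as a cheap chip might
decimated = (wanted + blocker)[::4]              # new rate 2 MSPS, Nyquist 1 MHz

# The 3.7 MHz blocker folds to |3.7 - 2*2.0| = 0.3 MHz, right on top of the wanted
# signal; once that has happened on-chip, no PC-side filtering can undo it.
spectrum = np.abs(np.fft.rfft(decimated * np.hanning(decimated.size)))
```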

The major difference between the approach from Mirics and what you have implemented in the Airspy is that your ADC is exposed to the full range of signals within the bandwidth of the Rafael tuner which cannot be set at less than 6 MHz.

That is a GOOD APPROACH, if your tuner is incapable of giving you meaningful selectivity which appears to be the case with the Rafael tuner.

To get an idea of how misinformed your assertions can be, have a look at the filters that can be achieved with the 820: http://sdrsharp.com/downloads/airspy_filter_sweep.png . That's a filter sweep from 1.5 to 9.5 MHz BW.
But I agree about the good approach ;-)

What I do have a problem with though is you trashing other products

That might be your interpretation; but don't get me wrong, I believe every single product has its segment of the market, and there's no doubt the RSP was designed to fill some gap Mirics identified in the TV/Radio Broadcast market when they started their business. I pointed out some of the compromises made in the MSi2500 and you don't seem to like it. That's your right; but you must also understand that the Hobby SDR market has other requirements than Digital TV. It will be very hard to convince your end users that a "full stack" solution designed for TV will magically offer the flexibility and performance of a purpose-designed SDR solution for the scanning crowd. There's a minimum of re-architecture work to get there; and that's what we have done, and it seems to work :)
My best wishes of success with the RSP stack!
 

prog

Member
Premium Subscriber
Joined
Nov 18, 2014
Messages
73
It looks like you already have access to the 820's datasheet :) You are correct on most points but I have a few remarks:

1 degree of quadrature error leads to a 'self image' at around -40 dB. The self image that results from quadrature amplitude and phase errors in ZIF architectures can be corrected in software

Been there. I developed the IQ correction algorithm in use in SDR# and GNU Radio (the gr_iqbal implementation by S. Munaut). This approach improves the behavior of ZIF architectures but has its own limits: it needs some time to converge (not suited for fast scanners); you still end up with a "hole" at the center of the spectrum, which is fine for wide band digital modulations like DVB-T but not suitable for narrow band signals; and finally, it adds a little phase noise when readjusting the correction parameters.
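For anyone curious what such a correction looks like, here is a minimal block-based sketch of the general idea (a plain Gram-Schmidt style estimator of my own for illustration, not the adaptive algorithm used in SDR#/gr_iqbal):

```python
import numpy as np

def blind_iq_correct(iq: np.ndarray) -> np.ndarray:
    """Block-based I/Q imbalance correction: orthogonalize Q against I, equalize powers."""
    i = iq.real - iq.real.mean()                 # remove any DC first
    q = iq.imag - iq.imag.mean()
    theta = np.mean(i * q) / np.mean(i * i)      # phase-error estimate
    q_orth = q - theta * i                       # force I and Q orthogonal
    gain = np.sqrt(np.mean(i * i) / np.mean(q_orth * q_orth))
    return i + 1j * gain * q_orth                # match the two powers

# Self-test with a deliberately impaired tone: 0.5 dB gain and 3 degree phase error
n = np.arange(100_000)
g, phi = 10 ** (0.5 / 20), np.radians(3.0)
impaired = np.cos(0.07 * n) + 1j * g * np.sin(0.07 * n + phi)
corrected = blind_iq_correct(impaired)           # the image drops well below the tone
```

The block-based variant also illustrates the convergence limit mentioned above: the statistics need enough samples to settle before the image actually drops.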
One more point: the reference R820T driver does not take full advantage of the flexibility offered by this chip, especially concerning the image rejection. You can calibrate it either automatically or manually at any point in time if needed, and that's a huge advantage when exploited properly.

What Leif has done in his measurements is very good work, but unless you understand the architecture of the tuner (i.e. LO architecture, where the image response lies etc) you will never know where to look for likely weaknesses.

He's having fun with the spies at the moment and is fiddling with the config of the R820T2 via libairspy. I think we will hear something good very soon. On the other hand, the firmware source code is planned for release under the GPL so everyone can customize the Airspy. I hope it will be useful for the SDR community we are building.
 

Yagi23

Member
Joined
Dec 11, 2014
Messages
27
It looks like you already have access to the 820's datasheet :) You are correct on most points but I have a few remarks:



Been there. I developed the IQ correction algorithm in use in SDR# and GNU Radio (the gr_iqbal implementation by S. Munaut). This approach improves the behavior of ZIF architectures but has its own limits: it needs some time to converge (not suited for fast scanners); you still end up with a "hole" at the center of the spectrum, which is fine for wide band digital modulations like DVB-T but not suitable for narrow band signals; and finally, it adds a little phase noise when readjusting the correction parameters.
One more point: the reference R820T driver does not take full advantage of the flexibility offered by this chip, especially concerning the image rejection. You can calibrate it either automatically or manually at any point in time if needed, and that's a huge advantage when exploited properly.



He's having fun with the spies at the moment and is fiddling with the config of the R820T2 via libairspy. I think we will hear something good very soon. On the other hand, the firmware source code is planned for release under the GPL so everyone can customize the Airspy. I hope it will be useful for the SDR community we are building.

Interesting, thanks. I didn't really mention DC offsets in my comments on ZIF architectures, but you are quite correct that DC offset compensation will always put a hole in the middle of the spectrum and the width of this 'hole' is a function of the length of time any DC estimation (integration) is performed for. However, I am not sure in reality whether this is actually a problem. If you integrate for 1 second and subtract then crudely speaking, you will have a hole of around 1 Hz. Initially, you might think that 1 second might be a problem for a fast scanning receiver, but I simply don't see it, as it presents more of a latency/memory issue. In other words, the signal processing for DC-offset correction is done non-real-time and DC offset is removed from the baseband signal that has been buffered in memory before demodulation. Latency is always present in SDR platforms as there are multiple contributors: CPU speed, Memory/HDD bandwidth, USB latency and so on. Just listen to a SDR platform and a traditional analogue receiver side by side tuned to the same signal and the SDR receiver audio will always be slightly delayed in time when compared to the analogue receiver. Memory requirements can be a challenge in a DSP based system with constrained embedded memory, but with a PC, it is very unlikely to be.

I am intrigued about your comments on calibration for improved image rejection. I fully understand that calibration can be (and often is) used for enhanced image rejection, but in my understanding, this can only be done by injecting a signal at the image frequency and then nulling it out via a trim of the LO phase error. I can see this working as part of a manufacturing test setup, but it still has limits due to drift with temperature and voltage. I am puzzled as to how this can be re-done dynamically or 'on the fly' in normal usage of the receiver as it requires a known signal to be injected at the image frequency.
 

prog

Member
Premium Subscriber
Joined
Nov 18, 2014
Messages
73
Interesting, thanks. I didn't really mention DC offsets in my comments on ZIF architectures, but you are quite correct that DC offset compensation will always put a hole in the middle of the spectrum and the width of this 'hole' is a function of the length of time any DC estimation (integration) is performed for. However, I am not sure in reality whether this is actually a problem. If you integrate for 1 second and subtract then crudely speaking, you will have a hole of around 1 Hz. Initially, you might think that 1 second might be a problem for a fast scanning receiver, but I simply don't see it, as it presents more of a latency/memory issue. In other words, the signal processing for DC-offset correction is done non-real-time and DC offset is removed from the baseband signal that has been buffered in memory before demodulation. Latency is always present in SDR platforms as there are multiple contributors: CPU speed, Memory/HDD bandwidth, USB latency and so on. Just listen to a SDR platform and a traditional analogue receiver side by side tuned to the same signal and the SDR receiver audio will always be slightly delayed in time when compared to the analogue receiver. Memory requirements can be a challenge in a DSP based system with constrained embedded memory, but with a PC, it is very unlikely to be.

I am intrigued about your comments on calibration for improved image rejection. I fully understand that calibration can be (and often is) used for enhanced image rejection, but in my understanding, this can only be done by injecting a signal at the image frequency and then nulling it out via a trim of the LO phase error. I can see this working as part of a manufacturing test setup, but it still has limits due to drift with temperature and voltage. I am puzzled as to how this can be re-done dynamically or 'on the fly' in normal usage of the receiver as it requires a known signal to be injected at the image frequency.

I'd add that the DC canceling is not a problem on its own; the "hole" can be made very small indeed. But the method you are describing (linear phase DC blocking) requires storing a lot of samples in a delay line and subtracting the calculated DC offset from every sample on the go. This can be problematic with very high sample rates. The other alternative is to use an IIR integrator, which requires only one sample to be stored, but it is not linear phase. The phase response around the hole is not pretty. There's no one-size-fits-all solution.
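For illustration, both variants in a few lines each (my own sketch, with an arbitrary window length and pole radius; the real trade-off is the memory cost of the first versus the phase response of the second):

```python
import numpy as np
from scipy.signal import lfilter

def dc_block_linear_phase(x: np.ndarray, window: int = 65_536) -> np.ndarray:
    """Subtract a running mean: linear phase, but needs a window-deep delay line."""
    running_mean = np.convolve(x, np.ones(window) / window, mode="same")
    return x - running_mean

def dc_block_iir(x: np.ndarray, r: float = 0.999) -> np.ndarray:
    """One-pole DC blocker, y[n] = x[n] - x[n-1] + r*y[n-1]: one state, non-linear phase."""
    return lfilter([1.0, -1.0], [1.0, -r], x)
```

Both work directly on a complex IQ stream as well as on real samples.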
Concerning the auto-calibration of the polyphase IF filter, it can be done with live signals, just like the IQ correction I have done before. There aren't many ways to do it anyway. AFAIK Analog Devices also use the same technique in their AD936X transceivers.
 

Yagi23

Member
Joined
Dec 11, 2014
Messages
27
I'd add that the DC canceling is not a problem on its own; the "hole" can be made very small indeed. But the method you are describing (linear phase DC blocking) requires storing a lot of samples in a delay line and subtracting the calculated DC offset from every sample on the go. This can be problematic with very high sample rates. The other alternative is to use an IIR integrator, which requires only one sample to be stored, but it is not linear phase. The phase response around the hole is not pretty. There's no one-size-fits-all solution.
Concerning the auto-calibration of the polyphase IF filter, it can be done with live signals, just like the IQ correction I have done before. There aren't many ways to do it anyway. AFAIK Analog Devices also use the same technique in their AD936X transceivers.

Regarding DC offset, that was exactly the point I was making, this is more of a memory issue. I think we are on the same page here.

Could you elaborate a bit more on your last point? I/Q correction can always be performed in a ZIF architecture because you still have all of the complex quadrature components. In an LIF architecture, after the signals have been combined you no longer have the necessary information to be able to make a correction. In other words, it is impossible to distinguish between what is due to the image and what is in fact the wanted signal. I have been an RF IC designer for over 30 years and I cannot see how this can be made to work unless you use a 'calibration signal' at the image frequency with no other signal present. Finite image rejection comes from a number of sources in this kind of architecture, but principally it results from quadrature phase errors in the LO path and component mismatches in the two I/Q signal paths (mixers/polyphase filters). It is simply not possible to correct these errors by sensing them directly, since (for example) using a delay-locked loop to trim LO phase errors will only null the error out to a level defined by the equivalent input phase error of the phase detector. So in fact, you can actually make the situation worse if your initial phase error was smaller than that of your phase detector. The situation is compounded by the fact that LO quadrature error can come from several sources, of which device mismatch is just one. For example, divide-by-2s are extremely sensitive to errors that result from VCO (or VCO harmonics) coupling to their DC supply, as are divide-by-4s. If you inject a 'calibration image reference' in the absence of any other signals and then trim phase to null the level of this reference signal at the output of the IF, it will always work irrespective of whether the error has resulted from static mismatches or RF coupling effects. You can always null out the effect of component mismatches in the polyphase filter by trimming the phase of the LO, but this will only work at a spot frequency and not across the full bandwidth of the filter.
The situation is not totally dissimilar to that of RF tracking filters. These too are calibrated accurately by the injection of a test signal, with the resultant parameters stored either in software or firmware and then reloaded to avoid the need for constant re-calibration, because the problem with on-the-fly calibration using test signals (even where they are self-generated by the IC itself) is that the receiver is inoperative during the calibration period, and that can be for quite a long time. So whilst in principle you could calibrate the image response each time the synthesizer is re-locked for a given IF frequency, it would tend to make the receiver dynamically 'sluggish'.
 

michael77

Member
Joined
Dec 11, 2014
Messages
5
To get an idea of how misinformed your assertions can be, have a look at the filters that can be achieved with the 820: http://sdrsharp.com/downloads/airspy_filter_sweep.png . That's a filter sweep from 1.5 to 9.5 MHz BW.

Err, Dude? Perhaps you didn't realise, but all of these chips have to have the ability to trim on-chip filters to account for manufacturing variations of the on-chip Rs and Cs. The un-trimmed variations on silicon processes can be huge and so the trim range also has to be correspondingly large. Allowing for temperature variations only increases the required trim range. This trim is usually done in an automated form as part of an on-chip calibration routine. You had mentioned in another thread that you were looking at overriding this self calibration to allow the filters on these chips to be trimmed manually. Well good luck pal, because what matters is what the manufacturer will WARRANT the chip will do, NOT what any individual device will do at a given temperature. If a part has to have a guaranteed minimum bandwidth of 6 MHz, then many individual devices will be capable of being trimmed to a lot lower bandwidth to account for process and temperature variations. Problem is, buddy, that not all chips will be capable of hitting the same numbers! Looking at your 'plot' you can even see the individual trim steps. But what is that HUGE step in the middle???? Now I don't pretend to know all that much about the tuner in the Airspy, but are you really stating that the manufacturer warrants that the filter can be set to 1.5 MHz under all conditions? Well, are you? Or are you really just stating that some of them can? Besides, 1.5 MHz isn't what I would exactly call SELECTIVITY when dealing with a mass of signals in the 40 Meter band!

That's exactly the problem I'm pointing out: cheap (low number of FIR taps) filtering, like in the RTL2832 and the MSi2500. Only a limited number of ASICs can do that DSP work properly, and they are very expensive, like the AD936X series. The other way is to do it in an expensive FPGA. The problem is, once aliased inside the 2500, the signal cannot be fixed inside the PC.

Aliasing occurs within the ADC, buddy, and not in any signal processing that follows it. I don't pretend to know all of the details of what is in the RTL2832 or the MSi2500. I have no real clue what is inside them or how many taps the FIR filters have. I am just going on the publicly available data. But perhaps you know better. Hey! Perhaps you even designed the freakin chips! Maybe you have real data that you can share with the rest of this community as opposed to numbers that you seem to just pluck out of thin air ;-)

I don't own any of these SDR platforms yet. I am still considering all of the options and I may yet even buy an Airspy (if you don't completely put me off with your lousy attitude). I think I have figured out its strengths and weaknesses and I think it is a pretty good product. It has some advantages over competing products and some disadvantages as well. I just don't think it is the 'super-decimating, paradigm shifting, miracle worker' that you seem to be claiming it is.

P.S. Please don’t forget to tell us what that huge step on your ‘sweep’ is ;-)
 

prog

Member
Premium Subscriber
Joined
Nov 18, 2014
Messages
73
Regarding DC offset, that was exactly the point I was making, this is more of a memory issue. I think we are on the same page here.

Could you elaborate a bit more on your last point? I/Q correction can always be performed in a ZIF architecture because you still have all of the complex quadrature components. In an LIF architecture, after the signals have been combined you no longer have the necessary information to be able to make a correction. In other words, it is impossible to distinguish between what is due to the image and what is in fact the wanted signal. I have been an RF IC designer for over 30 years and I cannot see how this can be made to work unless you use a 'calibration signal' at the image frequency with no other signal present. Finite image rejection comes from a number of sources in this kind of architecture, but principally it results from quadrature phase errors in the LO path and component mismatches in the two I/Q signal paths (mixers/polyphase filters). It is simply not possible to correct these errors by sensing them directly, since (for example) using a delay-locked loop to trim LO phase errors will only null the error out to a level defined by the equivalent input phase error of the phase detector. So in fact, you can actually make the situation worse if your initial phase error was smaller than that of your phase detector. The situation is compounded by the fact that LO quadrature error can come from several sources, of which device mismatch is just one. For example, divide-by-2s are extremely sensitive to errors that result from VCO (or VCO harmonics) coupling to their DC supply, as are divide-by-4s. If you inject a 'calibration image reference' in the absence of any other signals and then trim phase to null the level of this reference signal at the output of the IF, it will always work irrespective of whether the error has resulted from static mismatches or RF coupling effects. You can always null out the effect of component mismatches in the polyphase filter by trimming the phase of the LO, but this will only work at a spot frequency and not across the full bandwidth of the filter.
The situation is not totally dissimilar to that of RF tracking filters. These too are calibrated accurately by the injection of a test signal, with the resultant parameters stored either in software or firmware and then reloaded to avoid the need for constant re-calibration, because the problem with on-the-fly calibration using test signals (even where they are self-generated by the IC itself) is that the receiver is inoperative during the calibration period, and that can be for quite a long time. So whilst in principle you could calibrate the image response each time the synthesizer is re-locked for a given IF frequency, it would tend to make the receiver dynamically 'sluggish'.

From a user perspective I can only "guess" how Rafael Micro implemented the automatic calibration. Here's my analysis from usage: The I and Q paths after the mixers and LPFs are sampled with low resolution ADCs (1 bit maybe?) and the output is used to generate a calibration signal that is integrated and used to inject a fraction of one path into the other *in analog*. Then the signals are phased with a polyphase filter and then summed (or subtracted) to get the LIF signal. Another low resolution ADC feeds the polyphase filter calibration loop to maximize the output signal. This loop is more subtle: if there's any imbalance, the image will attenuate the legit signal as they are folded onto the IF spectrum AND have opposite phases. The "objective function" is to maximize the output. There are many settings to control the integration time, the clocks, etc. of the different stages I described. But if you sign an NDA with Rafael you will certainly get more insight than from a random forum post ;-)
I noticed there are two ways of implementing IQ calibration with LIF architectures: Rafael and NXP do this job with multiple calibration loops in analog. Others like Silicon Labs and AD prefer sampling the signal right after the LPFs and processing the entire calibration with a DSP. In the case of SiLabs, they even reconstitute the analog LIF with a DAC. On the other hand, I have no idea how the MSi001/2 do their calibration in LIF mode, if any. I know that in ZIF mode one has to calibrate the I/Q signals on the host. Maybe you have an idea?
 

Yagi23

Member
Joined
Dec 11, 2014
Messages
27
From a user perspective I can only "guess" how Rafael Micro implemented the automatic calibration. Here's my analysis from usage: The I and Q paths after the mixers and LPFs are sampled with low resolution ADCs (1 bit maybe?) and the output is used to generate a calibration signal that is integrated and used to inject a fraction of one path into the other *in analog*. Then the signals are phased with a polyphase filter and then summed (or subtracted) to get the LIF signal. Another low resolution ADC feeds the polyphase filter calibration loop to maximize the output signal. This loop is more subtle: if there's any imbalance, the image will attenuate the legit signal as they are folded onto the IF spectrum AND have opposite phases. The "objective function" is to maximize the output. There are many settings to control the integration time, the clocks, etc. of the different stages I described. But if you sign an NDA with Rafael you will certainly get more insight than from a random forum post ;-)
I noticed there are two ways of implementing IQ calibration with LIF architectures: Rafael and NXP do this job with multiple calibration loops in analog. Others like Silicon Labs and AD prefer sampling the signal right after the LPFs and processing the entire calibration with a DSP. In the case of SiLabs, they even reconstitute the analog LIF with a DAC. On the other hand, I have no idea how the MSi001/2 do their calibration in LIF mode, if any. I know that in ZIF mode one has to calibrate the I/Q signals on the host. Maybe you have an idea?

I suspect that 65 dB of image rejection is realistically the best of what can be achieved with any form of analogue calibration. Certainly you would not achieve 65 dB of rejection without calibration. 65 dB is actually very good, but it still represents a spurious response, albeit a relatively minor one, and you can never get completely away from some level of image blocking susceptibility with a heterodyne architecture. One neat approach is to be able to flip the image response by inverting the phase of the LO. That way, if you are (for example) using high-side injection with the LO but find a very strong blocker at the image frequency, you switch to low-side injection and put the image on the other side, where hopefully there may not be such a strong blocker. It's a bit clunky, but it gives you an extra knob to 'twiddle' if you find yourself in trouble. Personally, I favour the SiLabs approach, although it does cost you in terms of silicon complexity.
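To put numbers on the "flip the image to the other side" trick (my own arithmetic with an illustrative wanted frequency; only the 4.57 MHz IF figure is taken from the discussion above):

```python
def image_frequency_hz(f_rf: float, f_if: float, high_side: bool) -> float:
    """Image frequency of a wanted signal at f_rf: f_rf +/- 2*f_if depending on LO side."""
    return f_rf + 2 * f_if if high_side else f_rf - 2 * f_if

f_rf, f_if = 100.0e6, 4.57e6
print(image_frequency_hz(f_rf, f_if, high_side=True) / 1e6)    # 109.14 MHz
print(image_frequency_hz(f_rf, f_if, high_side=False) / 1e6)   # 90.86 MHz
```

Swapping the injection side moves the image a full 2*IF to the opposite side of the wanted signal, so a blocker sitting on one image frequency can often be dodged.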
 