rrokie said: Is there any way to use UniTrunker to generate a channel loading report or Erlang report for the system you are scanning?

If it's a Type II Motorola system, TRUNK88 can be used to generate the data. Reports show individual talkgroup loads as well as total loading; I assume you need the latter. A sample:
Hr Calls Min Calls Min
-- ----- --- ----- ---
Talkgroup 800019 - Works Operations/Utilities
--- Friday 01-Feb-2008 ---
00 36 5.4 ** ++++++
01 6 2.4 * +++
02 5 1.1 * ++
03 2 0.1 * +
04 7 0.9 * +
05 20 3.2 * ++++
06 23 2.7 * +++
07 30 2.3 ** +++
08 44 3.3 ** ++++
09 48 4.3 ** +++++
10 61 5.1 *** ++++++
11 128 11.4 ****** ++++++++++++
12 52 4.4 *** +++++
13 26 2.4 ** +++
14 56 3.5 *** ++++
15 9 0.6 * +
16 8 0.8 * +
17 12 0.9 * +
18 0 0.0
19 1 0.1 * +
20 1 0.1 * +
21 12 1.2 * ++
22 7 0.7 * +
23 0 0.0
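A report like the one above can be built from any log of per-call start times and durations. Here's a minimal sketch of the tallying step; the record format and the hourly_load name are illustrative assumptions, not TRUNK88's actual log format, and calls are credited entirely to the hour they start in:

```python
from collections import defaultdict

# Hypothetical input: call records as (start_epoch_seconds, duration_seconds),
# e.g. parsed from a decoder's call log. Spill across hour boundaries is
# ignored for simplicity.
def hourly_load(calls):
    per_hour = defaultdict(lambda: [0, 0.0])  # hour -> [call count, busy seconds]
    for start, dur in calls:
        hour = (start // 3600) % 24
        per_hour[hour][0] += 1
        per_hour[hour][1] += dur
    return per_hour

calls = [(12, 9.5), (400, 14.0), (11 * 3600 + 5, 30.0)]
report = hourly_load(calls)
for hr in sorted(report):
    n, busy = report[hr]
    # Offered load in Erlangs = busy time / elapsed time over the hour.
    print(f"{hr:02d} {n:5d} {busy / 60.0:5.1f} min {busy / 3600.0:6.3f} E")
```

Dividing each hour's busy seconds by 3600 gives the hour's load directly in Erlangs, which is the figure a channel-capacity calculation would start from.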
Unitrunker said: The end of a call is inferred by a lack of activity pertaining to that channel. Basically, the length of every call would be rounded up by half a second or more.

Since you know that your average rounding up per call is a half second or so, you can compensate for it in any calculations.
slicerwizard said: Since you know that your average rounding up per call is a half second or so, you can compensate for it in any calculations.

Do you have a way to measure that, to validate your assumption?
Unitrunker said: Do you have a way to measure that, to validate your assumption?

I made no assumptions. I've already verified my results by comparing them to airtime reports downloaded from a central controller.
Unitrunker said: Suppose you logged 100 calls taking 350 cumulative seconds, so the average call time is 3.5 seconds. If the +/- 0.5 second assumption is true, then you may have a 50 second error in either direction. If the assumption is false, the error might be bigger.

If call durations are calculated properly, those errors tend to cancel out. Why would they accumulate?
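The cancellation claim is easy to sanity-check with a quick simulation. A sketch, assuming each call's timing error is independent and uniform in +/- 0.5 s (the figure quoted above); the trial counts are arbitrary:

```python
import random

random.seed(1)

# 100 calls, each with an independent timing error uniform in [-0.5, +0.5] s.
# Worst case the errors all line up; typically they largely cancel.
N = 100
trials = 2000
worst = 0.5 * N  # 50 s, the worst-case figure from the example above

total_errors = [sum(random.uniform(-0.5, 0.5) for _ in range(N))
                for _ in range(trials)]
avg_abs = sum(abs(e) for e in total_errors) / trials

print(f"worst case: {worst:.0f} s, typical net error: {avg_abs:.2f} s")
# The net error grows like sqrt(N), not N: a few seconds here, nowhere near 50.
```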
Unitrunker said: Brief moments of poor signal quality throw the numbers off further.

If you're working on an Erlang report, you make sure your monitoring setup is clean.
Unitrunker said: I'm hesitant to publish code that someone might try to use to justify adding or removing channel resources to a running system or for internal billing.

For the last decade, system operators have been using CTech loggers driven by over-the-air control channel signals to bill customers for airtime usage. Plenty of prior art in this field.
Unitrunker said: There'd have to be a major disclaimer somewhere. I don't mean this isn't useful - just that the end user needs to understand the accuracy of the numbers presented.

Unless one really messes something up, the accuracy will easily be within a few percent. You write decoder software, so you can see at what rate late entry OSWs are being sent. When they stop being sent, you know during what time interval the call ended. For example, if late entry OSWs are being sent every half second, you know that a given call ended some time during the half second after the last OSW. So assume that the call ended 0.25 sec after the last OSW and use that to calculate call duration:

Call duration = [time of last late entry OSW] - [time of first channel grant OSW] + [late entry repeat period] / 2

For any given call, your error is up to +/- 0.25 sec, but the average error over many calls is much smaller, since the errors tend to cancel each other out.

Unitrunker said: The folks that really care about this stuff have more accurate means of measuring call times.

Not always. A few years back, our city was transitioning from several non-networked municipal (fire, works, dog catcher, etc.) trunked systems to a public safety (police, fire, EMS) SmartZone system; the city had to figure out where all of the displaced radio users (works, etc.) should go, so it hired a consultant to figure it out. Step one for him was figuring out how much airtime the plebes used, but the central controllers didn't provide data broken down by talkgroup. I monitored each of the old systems for the same seven-day period with TRUNK88, and he had his numbers, nicely broken down by user group (fire, works, etc.). Those numbers were easily accurate enough for him to make his recommendations.
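The duration formula discussed above translates to a couple of lines of code. A sketch; the function name is illustrative, and the 0.5 s late-entry repeat period is the assumed value from the example, not a fixed protocol constant:

```python
LATE_ENTRY_PERIOD = 0.5  # assumed repeat rate of late entry OSWs, in seconds

def call_duration(grant_time, last_late_entry_time, period=LATE_ENTRY_PERIOD):
    # duration = [last late entry OSW] - [first channel grant OSW] + period / 2
    # i.e. assume the call ended midway through the gap after the last OSW.
    return last_late_entry_time - grant_time + period / 2.0

print(call_duration(100.0, 103.5))  # -> 3.75
```

Any single call is off by up to half the repeat period, but as the simulation earlier in the thread suggests, the net error over many calls stays small.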