quad_track
Member
- Joined
- Sep 13, 2017
- Messages
- 66
Ladies and gentlemen, as the title says, I hereby bring you a new and exciting game to play on amateur radio bands: Spot the Human! (starting this summer)
4 hours of steady work by many modern-day Prometheuses [1][2] ... err, I mean AI experts, are letting us fulfill our dream: an artificially intelligent, fully autonomous amateur radio station that spends all of its time chasing, completing and logging voice QSOs, sounds completely realistic, and can carry intelligent conversations about exciting subjects with other AI stations or maybe even humans (if there are any left).
How is this possible? We have to acknowledge decades of automation by our great ancestors, starting with the humble automated CW beacon and ending with state-of-the-art cognitive radio technologies (which does give new meaning to the term, doesn't it?). You can learn more about them here: Cognitive Radio Networks
The great leap came when AI researchers developed better large-scale unsupervised language models that can reply to your free-form questions: Better Language Models and Their Implications
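For the curious, the "predict the next word from context" idea behind those models can be illustrated, at an absurdly humbler scale, with a word-level Markov chain. This is only a toy sketch to show the principle, not anything like the linked models; the function names and the sample "corpus" are made up for illustration:

```python
# Toy "language model": a word-level bigram Markov chain.
# Same core idea as the big unsupervised models (predict the next
# token from what came before), minus a few billion parameters.
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which in the training text."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, length=8, rng=None):
    """Walk the chain from a seed word, sampling a successor each step."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "cq cq cq this is a fully autonomous station calling cq and standing by"
model = train_bigrams(corpus)
print(generate(model, "cq"))
```

Scale the table up to billions of learned parameters and a much longer context window, and you get the free-form replies the linked article demonstrates.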
Then, not content to hang back and watch the story unfold, amateur radio merged digital signal processing with deep learning and gave us advanced speech synthesis for all our text-to-speech needs: LPCNet: DSP-Boosted Neural Speech Synthesis
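The classic DSP half of that hybrid is linear prediction: model each speech sample as a weighted sum of the previous few. A minimal sketch of the autocorrelation method (Levinson-Durbin recursion) in NumPy, applied to a made-up two-pole "voiced" signal rather than real speech, looks like this. LPCNet replaces parts of this pipeline with a neural network; only the plain DSP part is shown here:

```python
import numpy as np

def lpc(x, order):
    """LPC via the autocorrelation method (Levinson-Durbin).
    Returns prediction coefficients a (with a[0] == 1) and residual energy."""
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / e  # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]  # update predictor
        e *= 1.0 - k * k  # shrink residual energy
    return a, e

# Toy "voiced" signal: a 2nd-order resonator driven by noise,
# i.e. x[n] = 0.9*x[n-1] - 0.5*x[n-2] + excitation[n].
rng = np.random.default_rng(0)
excitation = rng.standard_normal(5000)
x = np.zeros_like(excitation)
for n in range(2, len(x)):
    x[n] = 0.9 * x[n - 1] - 0.5 * x[n - 2] + excitation[n]

a, _ = lpc(x, order=2)
# The analysis recovers the resonator, a close to [1, -0.9, 0.5]
```

The neural part of LPCNet sits where `excitation` sits here: instead of coding the residual explicitly, a small network learns to generate it, which is what makes the codec so compact.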
And of course, all of that will be available to humble hams like me through new Codec2 and Xiph.org free software releases, a moment that may one day be regarded much like the one when nuclear power was made available to every single person on this planet.
So what is left to do? We'll first have to train the advanced neural networks to sound like us and talk about the topics we are usually interested in, and perhaps slip in a joke or two from time to time, told in a funny foreign accent. The old "foot warmers" will be replaced by a new generation of hybrid computing platforms stacking multiple GPUs, capable of many, many TFLOPS, that will boldly go where no AI has gone before and explore the full amateur radio spectrum. Then we will be sipping cocktails on a beach in Tahiti while our autonomous stations log contact after contact and the, err... FCC serenely and autonomously monitors their traffic from up in the air.
Looking forward to hearing your AIs on the bands! (and to training my neural network to distinguish them from humans)
-------------
[1] https://jmvalin.ca/papers/lpcnet_icassp2019.pdf
[2] LPCNet and Codec 2 Part 2 | Rowetel