A story for the Future of Life Institute's "Superintelligence Imagined" contest
by Karl von Wendt
About this project
We are currently in a global race toward developing artificial general intelligence (AGI), an AI that is capable of performing essentially all cognitive tasks at least at human level. But what if we don't stop there - what if such an AGI becomes much smarter than any human, either by self-improving or with human help? Would this mean global dominance for those who first develop such a superintelligent AI, as some people seem to hope? Or would we permanently lose control over our future to an entity we don't understand, with potentially catastrophic outcomes?
The story "The Great Plan" imagines such a future: When the President announces The Great Plan, created by his superintelligent AI advisor, the crowd is ecstatic. But some people still have doubts ...
While this story is purely fictional, the problem of a potentially uncontrollable AI is very real. We must not develop an AI that we won't be able to control, at least until we know how to make sure it will always act in our best interest, a problem that remains unsolved. If we continue to race ahead blindly, any catastrophic outcome will not be the fault of a superior artificial intelligence, but of our unbounded human stupidity.
"The Great Plan" is a winning contribution to the "Superintelligence Imagined" contest by the Future of Life Institute. It is available below in various media formats. All files
are free and may be distributed under the Creative Commons CC BY-NC-SA 4.0 license.
Credits
Story, editing, music: Karl von Wendt
Graphics: Midjourney
Voices: ElevenLabs, Amazon Polly
Sound effects: Pixabay
YouTube video (click on image to watch)
Audio story (click on image to listen)
Graphic novel (click on image to read online)
Plain text/ebook (click on image to read online)
About me
I am a German writer of children's books and science fiction novels, usually publishing under the pen name "Karl Olsberg". Since writing my Ph.D. thesis on symbolic AI in the 1980s, I have been fascinated by the development of artificial intelligence and have always believed in its huge potential to benefit humanity. In 1998 I developed the first commercial German-language chatbot, and I have founded several AI-related companies, one of which was named "Start-up of the Year 2000" by the German business magazine Wirtschaftswoche. So I am definitely not against AI in general.

However, every technology brings both opportunities and risks, and the more powerful the technology, the greater they are. I believe AI is the most powerful technology ever invented, and thus potentially the most beneficial, but also the most dangerous. To use its potential safely, we need to proceed very cautiously. Most importantly, we need to better understand which kinds of AI are safe and which risky types we must not develop, at least until we understand how to provably keep them safe. This concerns the future of all of humanity, so we can't just rely on a few entrepreneurs in Silicon Valley to make the right decisions. Therefore, I try to inform the general public and the German scientific community about the risks of AI through my fictional work (some of it available in English), my blog on AI risks (in German), and my YouTube channel (mostly in German). I also occasionally write about AI safety in English on the forum LessWrong.
More information on AI existential risks
The Future of Life Institute's AI Safety cause area
AISafety.info (introduction to AI existential risks)
Yoshua Bengio's blog (one of the world's leading AI scientists)