Why I read and blog about Sci-Fi. Life 3.0, Superintelligence, and the Sirens of Titan

This is a fun read. My father never understood my passion for fantasy (The Hobbit, The Lord of the Rings) in middle school or sci-fi in high school (Ender’s Game, entire libraries of Asimov, Heinlein, P.K. Dick, and countless others). I’d try to explain (not nearly as cogently as this journalist) that science fiction was imagining our future, and that so many predictions from sci-fi authors have come true.

I’m currently reading Life 3.0 and Superintelligence for an upcoming book club, and also stumbled across The Sirens of Titan by Kurt Vonnegut, from 1959! Vonnegut is prescient: he predicts future concerns about machine intelligence, indeed artificial general intelligence, the concept (and worry) that, once created, a superintelligent being will be difficult or impossible to control and may find its human creators tiresome and unnecessary.

Hmm. The same theory is proposed 60 years later by the authors of Life 3.0 and Superintelligence, but with more evidence and detail.

CMIO’s take? Where is the sci-fi about the future of Electronic Health Records? Ready to write one?

Author: CT Lin

CMIO, UCHealth (Colorado); Professor, University of Colorado School of Medicine

2 thoughts on “Why I read and blog about Sci-Fi. Life 3.0, Superintelligence, and the Sirens of Titan”

  1. Just finished Superintelligence and Life 3.0. I enjoyed the epistemological framework laid out in Life 3.0 and the considerations both make for humanity’s future.

    With full respect for the authors, I would say that I do not think general AI is coming anytime soon. Most of the advances in AI today are in narrow AI, i.e., what is that image? Is this dude sick? What is the signal in this noise? To actually model AGI we would have to completely understand how the brain works, and from my reading we need far more information to get to that point. Even in the current state, there are too many technological and epistemological limitations for anyone to model and predict the movements of a paramecium or amoeba. At the heart of this is the assumption that all the computation is going on at the cellular or neuronal level, whereas nature is parsimonious. There is a lot of machinery inside the cell doing calculations that isn’t accounted for in machine learning. The best estimates are that it would take 50 years of Moore’s law before we can simulate what is going on within even a single cell. So saying “I’m going to make a model which replicates the behavior of neurons, with on/off switches representing the neurons, and then use that to build a replica or intelligent system that can rival the human brain” is overly simplistic in my view.

    I would also say that in order for AGI to take over, it would need to replicate some systems that, as far as I can tell, are non-negotiable for evolving a complex lifeform. It would need its own energy source and a way to process the consumption of that energy – digestion. It would need a reproductive system. It would need processes in its code that allow for mutation, and a social mechanism for selecting the best “genes”/alterations to its code. I would also posit there is no such thing as general intelligence: every intelligence is contextual within the environment it evolves in. For all these reasons, I think the burden of proof is on the people talking about the rise of AGI. I haven’t seen anything that would lead me to believe that AGI will come to be – instead we are solving deterministic, closed-set, finite problems using large amounts of data, but it’s not sexy to talk about that. Narrow AI, however, is here, and it is going to take millions of jobs.

    I AM frightened by the social implications of narrow AI. I think dumb chatbots or phone-routing agents lead to a decay in the moral fabric of our society. When a virtual assistant can do so much for you and you don’t have to be polite to it, that will reverberate throughout society, and individuals will have a difficult time navigating the line between how to treat humans and how to treat AI.

    Also, it will automate a lot of jobs. I like Andrew Yang’s answer to that question.

    But I love being a part of this. The upside is huge even considering all of these negatives. But I’ve written too much now and am too tired to get to the positives. 🙂

    Thanks CT for all of your posts. 🙂

  2. Great blog, CT! I really enjoy your posts. As an avid sci-fi fan, I love hearing what you are currently reading. For myself, I’m in the middle of the Machinery of Empire series by Yoon Ha Lee. If you are not familiar with it, the first book, Ninefox Gambit, is brilliant.

    Thanks! John
