Hyperdimensional Computing Reimagines AI (wired.com)

Either I’m full of myself and over-optimistic about what I can learn, or I’m beginning to understand hyperdimensional computing as explained in this Wired article. If so, it’s super cool what this new way of computing could hold for the “explainability” of AI models in the future…

From wired.com

https://www.wired.com/story/hyperdimensional-computing-reimagines-artificial-intelligence/

In college I got all the way through Calculus I and II and into differential equations and a little into matrices and vectors. I can honestly say I have used NONE of that knowledge, and it has withered completely away in the intervening decades.

THIS article got me interested. Our contemporary problem: Large Language Models, at their root artificial neural networks, compute in a way that is very power-intensive. We are already seeing this in how OpenAI and others worry about scaling LLMs to more users while moving the sophistication upward from GPT-3 to 3.5 to 4 with more and more layers.

Hyperdimensional, or vector, computing holds the promise of changing that paradigm for tracking findings, storing concepts, and manipulating them: instead of a flat table of data, let’s say, concepts become points in a 10,000-dimensional vector space, where they are easier to combine and compare.
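To make that a little more concrete, here is a tiny toy sketch in Python (my own illustration of the general idea, not code from the article, and the concept names are made up): random 10,000-dimensional vectors stand in for concepts, multiplying “binds” a role to a value, summing “bundles” pairs into one record, and cosine similarity pulls a value back out.

```python
# Toy hyperdimensional computing sketch (illustrative only).
import numpy as np

DIM = 10_000
rng = np.random.default_rng(42)

def random_hypervector():
    """Random bipolar (+1/-1) hypervector; random ones are nearly orthogonal."""
    return rng.choice([-1, 1], size=DIM)

# Hypothetical "codebook" of atomic concepts.
codebook = {name: random_hypervector()
            for name in ["color", "red", "shape", "circle", "size", "small"]}

def bind(a, b):
    """Bind two hypervectors (elementwise multiply); the result resembles neither input."""
    return a * b

def bundle(*vectors):
    """Bundle hypervectors by summing and taking the sign (a majority vote per dimension)."""
    total = np.sum(vectors, axis=0)
    return np.where(total >= 0, 1, -1)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode one "record" as a single hypervector: {color: red, shape: circle, size: small}.
record = bundle(
    bind(codebook["color"], codebook["red"]),
    bind(codebook["shape"], codebook["circle"]),
    bind(codebook["size"], codebook["small"]),
)

# Query: what is the color? Unbind with "color" (multiply again, since x * x = 1),
# then find the nearest codebook vector.
query = bind(record, codebook["color"])
best = max(codebook, key=lambda name: cosine(query, codebook[name]))
print("color is most similar to:", best)  # expected: "red"
```

The appeal for explainability is that every step above is an inspectable vector operation: you can always ask which stored concept a result is closest to, rather than peering into millions of opaque neural-network weights.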

CMIO’s take? Although this sounds like a lot of woo-woo, read the article to get the lowdown. Read slowly. It took me some time to begin to get it. The reward is maybe glimpsing a future where AI models can be made explainable, something not possible at present with LLMs. Could be game changing.

Author: CT Lin

CMIO, UCHealth (Colorado); Professor, University of Colorado School of Medicine

