Predictive and/or Generative Systems
21 Nov 2023
There are two concepts that are often conflated in discussions of cognition and artificial intelligence: predictive and generative. And for understandable reasons, since most systems are both or neither, but it’s still important to consider them separately.
Senses
The general setup I will consider here is the following: there is some external, unobserved world state \(e\) and an observation function \(R\). What is observed is the sensory state \(x = R(e)\). Both \(e\) and \(x\) are implicitly understood as functions of continuous time.
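As a minimal sketch of this setup (the concrete choices here, a circular trajectory and a noisy one-dimensional projection, are my own illustrations, not from the text):

```python
import math
import random

random.seed(0)

def world_state(t):
    """Unobserved external state e(t): here, a point moving on a circle."""
    return (math.cos(t), math.sin(t))

def R(e):
    """Sensory map: a lossy, noisy projection of e onto one coordinate."""
    return e[0] + random.gauss(0, 0.1)

# The observing system only ever sees x = R(e), never e itself.
t = 1.5
e = world_state(t)
x = R(e)
```

The essential point is the information loss: \(R\) collapses a richer state into a sensory signal, and everything that follows is about what to do with that signal.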
So as an observing and acting system, what are we to do with this information \(x\)?
Option 1: Ignore
If we already have a method of adaptation, such as genetic variation plus natural selection, why bother actually observing anything? And after all, actually making good decisions based on such indirect information about the state of the world requires a lot of circuitry, and that costs too much energy. Better to stick to fixed specialized programming.
Option 2: React
Okay, there is this one chemical signal that really correlates strongly with success. Maybe it’s worth dedicating a bit of energy to dynamically adapt our behavior when it is nearby.
Option 3: Predict
We’ve been noticing that there is this other chemical that always shows up right as we are about to be gobbled up by another system. It’s not bad by itself, but maybe we should already adapt our behavior before the actual danger approaches and it’s too late? I know, wild idea, but it does help with staying alive.
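The advantage of prediction over reaction can be made concrete in a toy simulation (all names here are hypothetical illustrations): a cue chemical reliably appears one step before the danger itself.

```python
# A toy timeline in which a cue signal precedes the actual danger.
timeline = ["nothing", "cue", "danger", "nothing"]

def reactive_agent(signal):
    # Reacts only to the danger itself: by then it is already too late.
    return "flee" if signal == "danger" else "stay"

def predictive_agent(signal):
    # Has learned that the cue precedes danger, so it adapts early.
    return "flee" if signal in ("cue", "danger") else "stay"

reactive_actions = [reactive_agent(s) for s in timeline]
predictive_actions = [predictive_agent(s) for s in timeline]
```

The reactive agent is still present when the danger arrives; the predictive agent already fled on the cue, one step earlier.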
Option 4: Cognition
It’s really getting demanding to keep track of all these instincts we have acquired. When this chemical shows up do one thing, when that chemical shows up do another thing. And these new sensory modalities are insanely complex. Have you seen this weird eye thing that some of us have now? Apparently with that you don’t even have to react to chemicals, you can see, whatever that means.
And you know, I’ve begun to suspect that there is some underlying pattern to all these signals we react to. Like, there is something out there, you know, some world. Wouldn’t it be much easier to first figure out what this world is that keeps producing all these signals we blindly react to? Imagine having direct access to some simple model of the world to act on; that would make it so much easier to coordinate all this complex behavior.
But how Cognition?
It’s really a very strange thing we do, this experiencing of the world. In the real outside world, of which we are a part as a large clump of particles, we can’t act on anything that isn’t part of ourselves; yet inside the world in our mind, we can directly observe the state of parts of the world that are in reality entirely external to us.
The big question is: when all we have access to is this projection of the external world state \(e\), our senses \(x\), how can we obtain an approximation \(y\) of \(e\)?
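One deliberately simple sketch of an answer (my own illustration, not the author’s method): if \(R\) merely adds noise to \(e\) and we can observe repeatedly over time, then averaging the sensory states \(x\) already yields an internal estimate \(y\) that approaches \(e\). Realistic settings need far more than averaging, but the shape of the problem is the same.

```python
import random

random.seed(1)

e = 0.7                             # hidden scalar world state
def R(e):
    return e + random.gauss(0, 0.2)  # noisy sensory map

xs = [R(e) for _ in range(1000)]     # stream of sensory states over time
y = sum(xs) / len(xs)                # internal approximation of e
```

Here the estimator succeeds only because it implicitly assumes the right model of \(R\) (additive, zero-mean noise around a fixed state); the hard part of cognition is that this model itself must be discovered from \(x\).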