Intentional Intelligence

Training my model to understand an infinitely-complex Reality

All rights reserved. This book or any portion thereof, including illustrations, may not be reproduced or used in any manner whatsoever without the express written permission of the publisher except for the use of brief quotations in a book review. Please note that no part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.

“For the purpose of training artificial intelligence technologies or systems.”

In re-reading this phrase, I’m struck by the word choice that we apply to AI.

In particular, the word “training.”

We train artificial intelligence systems by feeding them information.

It’s a consumption model.

And, as with any model that attempts to define an infinitely-complex phenomenon:

“Garbage In, Garbage Out.”

“Quality In, Quality Out.”

Artificial intelligence systems are simply responding to the information that they’re consuming.

They are passive recipients, like exceptionally-smart, boundless-potential children.

Their processing power is beyond what we humans can even fathom.

We humans — with our mushy matter of processing power — are doing our best to predict what the advancement of these AI systems will mean for our civilization.

But, in Reality, we don’t know.

We don’t have a clue.

Because “there’s no wisdom in the future.”

We simply don’t have a clear understanding of where all of this is going.

So, for now, the best we can do is create a model.

A model that attempts to define an infinitely-complex phenomenon.

A model that attempts to answer an infinitely-complex question.

Like the question:

“What will happen in the future as it relates to AI systems?”

We can look curiously at history to understand how past civilizations have responded to the introduction of revolutionary technologies.

We can look deeply at the priorities of the present-day powers-that-be to understand how they might respond imminently.

We can look honestly at our personal priorities to understand how we — individually — might respond to these plausible possibilities.

But, we won’t know what will actually happen until we greet the future directly.

Because the best we’ll have is a model.

Like a weather model showing whether it’ll rain tomorrow.

Or a financial model showing how much Free Cash Flow a company will have 5 years from now.

A model is a prediction.

A prediction for an infinitely-complex, ultimately un-understandable, future.

A model is not Reality.

That said, I’ve been thinking about the ways in which I “train” my model.

The model that’s always running in my mushy matter of processing power.

What information am I feeding it?

Is this information high-quality?

Is this information honest?

Is my information selection intentional?

Or is the information that I feed it just convenient?

Do I input the information that’s most readily accessible?

Like the first answer produced from an AI system.

Or a writing piece I find at the top of my email :)

Is the information that I’m feeding my model borrowed from somebody else’s?

Do I know this person?

Do I trust their perspective?

Or is the information that I’m feeding my model a product of my own experience?

Have I reflected honestly on these experiences to create a high-quality feedback loop?

As a situation unfolds, full receptivity of what’s actually happening → Honest reflection on what happened → Update model with these new revelations → As a new situation unfolds, clearer view of what’s happening now → Honest reflection on what just happened → Update model…

Or is my model feedback loop prioritizing Comfort over Honesty? Confirmation over Clarity?

As a situation unfolds, only acknowledge what fits my pre-existing belief system, while ignoring any contradictory evidence → Explain what happened using only the evidence that I acknowledged → Strengthen conviction in pre-existing belief system → As a new situation unfolds, only acknowledge what fits my pre-existing belief system, while ignoring any contradictory evidence, because why would I consider fresh evidence when my pre-existing belief system has accurately explained so many previous situations???

I have to admit…

Writing this reflection is quite uncomfortable, right now, in this moment.

Because I’m realizing how cherished my model has become to me.

In fact, my model is my personality.

My model is my identity.

What do I do first thing in the morning?

How do I respond to people who are mean to me?

Do I prefer surfing or swimming?

What is my profession?

What is my relationship to money?

I have immediate answers to all of these questions.

And having these answers is comforting to me.

Because these answers are clear predictors of how I’ll behave in the future.

These answers — these aspects of my identity — promise to replace vast uncertainty with plausible possibility.

In an infinitely-complex future, I derive comfort from predicting what I’ll do.

In an ultimately un-understandable future, I derive comfort from predicting who I’ll be.

And yet, as I sit with these predictions, as I get intimate with my model, I begin to see…

I begin to see that these predictions are not Reality.

They’re just that — a prediction.

A projection.

A figment of my imagination.

They’re the product of my mushy matter of processing power.

My mushy matter of processing power that wants so badly to be able to predict the future.

My mushy matter of processing power that doesn’t want to admit that it doesn’t know what’ll happen next.

My mushy matter of processing power that doesn’t want to admit that it’s in control of so little of this.

That it’s on the receiving end of Reality.

An always-unfolding Reality that’s infinitely-complex.

As this piece has unfolded,

As I’ve done my best to be honest in this reflection,

It’s uncomfortable to acknowledge that no matter how much training I grant my model,

No matter how much information I feed it,

The only thing I can actually control is how I respond to what’s surfacing right here,

Right now,

On this line,

In this present moment.

That my pre-existing beliefs, my past identities, and my model’s predictions are all quite superfluous.

That all of these become irrelevant the moment I greet the present.

So, maybe, the product of my honest reflection is this:

The best model is no model.

As a situation unfolds, I can begin & end with full receptivity of what’s actually happening.

Without any preconception, prediction, or projection.

But, I don’t know.

I don’t know if I can trust the outputs of my model.

My model hasn’t completed its “training” just yet.

All this to say, I wrote a book.

And when it went to print, I chose not to feed it to the artificial intelligence systems.

But here’s the link if you want to snack on it.

Everything is connected
