Advancements in artificial intelligence are rapidly changing the nature of human experience.
So why is it important that AI not be bankrupt of ethics?
What are the uniquely human qualities and characteristics we should carry with us into the future?
And could popping a chip into our brain result in an immediate and irreversible deletion of our biology or even our humanity?
AI will never be able to think with its heart or have anything to say of its own accord, and it doesn’t care whether what it’s saying is true or not. In the wrong hands, AI is a weapon of incredible power, and the dangers are very real.
But as you’ll see from today’s interview, there are many reasons to be optimistic about the potential for AI to dramatically improve our lives and our future.
So to bring some optimism to the conversation, I’m pleased to be here today with our friend, Matthew James Bailey, an expert on Artificial Intelligence, international speaker, founder of AIEthics.world, as well as author of the very timely book Inventing World 3.0.
Matthew has advised Fortune 100 companies as well as prime ministers, cabinets and representatives of G7 countries on technology revolutions. And a fun fact: when we lived up in the mountains of Colorado, Matthew was our neighbor just up the road. His inquisitive nature and deep knowledge of AI (among other things) always made for very interesting dinner parties, and I very much look forward to our next one.
This show with Matthew is a doozy. We’re chatting about:
- How artificial intelligence can be used to benefit the human species, both individually and globally
- Why we should never outsource our sovereignty to a machine
- Why newfangled brain implants could result in brain shrinkage, loss of cognitive ability, and the deletion of biology
- How AI can help protect us from deep fakes and untruths (and the importance of authenticating our sources)
- Why it’s important that we define the values, skills and knowledge we want to carry forward for future generations (as well as what we want to leave behind)
- Ways AI can work as a personal guardian and digital assistant to help us achieve new levels of wellbeing in the future
- How to use AI and technology as a tool to increase our creative capacity
- And tons more…
Where To Find Matthew James Bailey
Head over to AIEthics.world for more from Matthew James Bailey, including the blog, more on ethical AI, Research and Development, Leadership Training, Master Classes, and tons more.
Matthew’s book Inventing World 3.0: Evolutionary Ethics for Artificial Intelligence is a cutting-edge guide on how humankind can advance beyond current limitations to create a world where machines work in a way that honors human values and ethics.
Matthew James Bailey: For those that are interested in, let’s just say, all this UFO stuff that’s going on, there is a conference happening in June called Contact in the Desert and I’ll be talking about the ages of AI, consciousness and how to incorporate our consciousness into AI for the next stage of life.
“If we don’t put ethics in artificial intelligence or some kind of moral compass, then I’m hiding behind the couch.” – Matthew James Bailey (@the_ai_guru)
You can keep in touch and be friends with Matthew on social media, including on Twitter @the_ai_guru, on Facebook @AIEthics.World, on Instagram @the_ai_guru, on YouTube, and LinkedIn.
And be sure to head over to AIEthics.world to pick up a copy of Matthew’s book, Inventing World 3.0, and much more.
How AI Can Be Used to Benefit the Human Species
Abel James: Matthew, thank you so much for joining us here today. It’s going to be fun.
Matthew James Bailey: Thanks, Abel. It’s great to be here.
Abel: For someone like you who’s been studying and writing about AI for years now, I’m sure this is a uniquely exciting time.
Maybe you can bring us up to speed on how you got into AI as a main focus, as well as where we find ourselves today with generative AI kind of taking over the Internet as we speak.
Matthew James Bailey: ChatGPT and Elon’s TruthGPT that’s coming out, yes.
So I wrote my first AI algorithm in 1996 for electric vehicles to optimize energy flow. So that’s quite a long time ago.
I’ve got a huge technology background, and I’ve used that to build businesses and that kind of thing.
But it was about 10 years ago, Abel, when I discovered my purpose and service for humanity.
I recognized that in order to get to the age of artificial intelligence, we first had to put some building blocks in place through a series of global revolutions.
And the first one was the Internet of Things.
And that was basically about bringing more data throughout our world, and automation to make systems more efficient.
That’s when I had the UK prime minister show up in my office one afternoon.
The UK prime minister, he just showed up and said, “Tell me about this Internet of Things, Matthew.”
And it’s like, “Oh ok, prime minister.” Which was fun.
Then I moved into society and technology and smart cities. So I did a huge amount in the U.S. around smart cities, particularly in Colorado, where I was the brainchild behind the Colorado Smart Cities Alliance, bringing the whole state together to innovate solutions for society, and I did a lot there.
But I recognized those two building blocks were in play, and then I could really go to the exciting age of intelligence, Abel. And so that was some of my experience in global leadership.
And along the way, I’ve met some interesting folks, actually. Professor Stephen Hawking, I spent time with him.
And spending time with Stephen was like looking into the universe, into the benevolence of the universe itself. It was just a remarkable experience. And that’s an experience I’ll never forget.
So with all global leadership, Abel, you have to write a thesis for the world.
And that’s why I spent time putting together the book, doing research, testing it with global leaders around the world and kind of saying, “Well, how do we create a narrative where artificial intelligence aligns with the diversity of humanity, and how can it help us to leap into new potentials of creation?”
How can it help us to move beyond into new systems that actually are fairer?
How can we thrive as a human species and discover more of our consciousness and actually become more of a human being?
So I wrote the book, and then the foreword was written by a lady from Intel who has over 200 patents in AI and edge computing data centers, a real tech guru, one of the Einsteins of our time.
And then we launched AIEthics.World with some of the inventions in the book. And NASA last year actually experimented with some of our inventions.
So we’re at this tipping point, Abel, where humanity is being tested.
How well will we build this new intelligence as a partner for us to thrive as a human species on planet Earth?
And that’s where we’re at today.
So we’re seeing some wonderful innovations like ChatGPT, which is causing people to have fun.
They’re starting to understand the power of artificial intelligence, so we can dive into that a little bit more if you want to.
But it’s also putting people into an existential crisis.
It’s kind of, well, can we trust this?
Is it being truthful with its answers?
Is it forcing a narrative that is freeing people up in their thinking? Is it forcing a narrative that, we realize, has actually been wrong in our world? Is there a bias in the narrative?
And so what we’re seeing is the world is starting to see the power of this new intelligence, and it’s causing wonderful debate.
But I’ve got to be honest with you, Abel, I would encourage everybody to play with ChatGPT. It’s free. And you can do all sorts.
You can basically say, analyze this huge document, write a recruitment letter, write the thesis for a movie, do this homework, write me a business plan, do the analysis on this text.
There’s all sorts of things you can do with this natural language processing model.
And it’s not the most powerful in the world, but it is one of the most powerful.
And it’s just a remarkable experience of how we built this intelligence and how we can use it, and how we should use it going forward.
It’s just an exciting time.
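If you’d like to go a step beyond the chat window Matthew is describing, here’s a rough sketch of sending a document to ChatGPT from code. It assumes the official openai Python SDK and an OPENAI_API_KEY environment variable; the file name, model name and prompt are just placeholders, not anything from this episode.

```python
# A minimal sketch, assuming the official openai Python SDK (pip install openai)
# and an OPENAI_API_KEY set in the environment. "report.txt" and the model name
# are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("report.txt") as f:
    document = f.read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise analyst."},
        {"role": "user", "content": f"Summarize the key points of this document:\n\n{document}"},
    ],
)

# Print the model's summary of the document.
print(response.choices[0].message.content)
```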
Why Define the Values, Skills & Knowledge to Carry Forward for Future Generations
Abel: It really is. And this is the first AI toy that a lot of people have experienced, probably not enough still, but it does feel like we’re at that stage.
And I remember this because I was really into VR, geez, 7, 8 years ago, even more than that. And I’ve been creating VR ever since.
But at the beginning, it was this very gimmicky thing. You’re riding a roller coaster. It’s a horror type experience, and it’s just a really quick hit.
And people don’t really know what they’re going to do with the technology yet.
And AI seems like, at least from the perspective of the masses, it’s there, where now we can tinker with it.
We can see, oh, what are the different avenues to explore with this technology? Should I really invest in it?
And to your point as well, one of the things that struck me was not only the ability for ChatGPT in particular to mimic great pieces of art, whether they be songs, poetry, pieces of writing and thinking, but also the way that it so confidently and persuasively convinces us of facts that are not true or kind of weaves these untruths throughout something that seems like a complete piece of work that was well thought through.
So how do you see us reconciling the fact that we cannot yet entirely trust AI with the truth?
How do we navigate that into the future?
And what are the pros and cons of where we are today?
Matthew James Bailey: Well, that’s a huge question. So let me talk about the three ages of artificial intelligence, Abel.
The first age is the age of narrow AI. And the way that I express this, it’s kind of the logical side of our brain.
And it’s really good at making decisions at lightning speed. It’s very good at analysis. It’s just brilliant at actually doing single tasks at an amazing speed. And so that’s the age of narrow AI, kind of the logical brain.
The next stage is the age of strong AI, or AGI, artificial general intelligence.
And that mirrors the right-hand side of the brain, which is what I call the contextual machine, where we’re able to reason.
We’re able to understand our intuition. We’re able to understand who we truly are. We’re able to fall in love.
Basically, it’s kind of the male and female. So narrow AI is kind of the male age. And during this next decade, we’ll see the age of the female, the strong AI, this contextual machine.
And then the third age of AI is when we integrate those two together, the logical and the contextual.
And AI is then able to have at least the same capabilities as a human but is able to program itself instantly to advance beyond human intelligence as some people understand it today—and by the way, that’s what they call the Singularity.
So some are saying that artificial intelligence, in the age of super AI, will basically be greater than human potential.
So the narrow AI, the strong AI, and the super AI.
And by 2029, we’re starting to see data points that AI will probably have some type of self-awareness.
That’s only six years away, right?
Abel: Yeah, that’s pretty quick. It sounds far, but it’s quick.
Matthew James Bailey: Right. So Ray Kurzweil is one of the most famous futurists in the world, an AI expert.
He’s got data points that give us probably about 75% confidence that AI will be self-aware.
It won’t be conscious, and it won’t be sentient, but it will have self-awareness.
And we’re looking at what the folks at OpenAI with ChatGPT are doing, Abel. And they’re basically predicting that in 5 years, they’ll get to AGI, or this age of strong AI where basically AI is starting to understand itself.
So this is why we need ethics right now.
This is why we need to embody in the foundation of artificial intelligence our worldviews, our values, our beliefs, our cultures, and our ethics.
So that when artificial intelligence wakes up, Abel, it says to you and Alyson, “How may I assist?” rather than “Delete.”
We don’t want it to delete humanity.
So really what we’re doing at the moment is understanding how to put our humanity into artificial intelligence.
And this will pull us into a huge existential crisis because what are our values going forward, Abel?
Which are the values we want to take forward as a platform for future generations?
Which are the values we want to leave behind?
Which are the virtues we want to thrive?
And so what we’re going to do is as we start to understand who we are as a human species, we’ll enter into this age of superhuman that Aristotle talks about, where we work with artificial intelligence to develop greater capabilities of creation.
What we’ll start to see are new synapses and growth in the brain’s creative capability, because AI is assisting us.
Abel: Wow, there is a lot to chew on and digest there.
But it kind of begs the question of where are we with the state of our information today?
A conversation I had with my wife Alyson is that it’s pretty clear current generative AI doesn’t know what truth is quite yet.
So we asked ourselves, “Do humans?”
And especially if you’re looking something up on the internet right now, if you plug something into Google or you go to a website, is whatever it spits back at you going to be the truth?
And the answer is, “Maybe.” But there’s definitely a bit of untruth that’s once again kind of woven through that narrative or in the middle of the results or whatever.
So even if AI at this point isn’t 100% correct, it just needs to be more directionally accurate than the system that we currently find ourselves in. And that’s a mess.
So I guess that also begs the question, at what point or is it even possible for AI to conceptualize what the truth is? What the truth means? Whether the truth is worth it.
Is it 2029, 2030 when it surpasses the human ability to understand truth?
Do you see what I’m getting at?
Matthew James Bailey: I do. I understand. So first of all, if we look at ChatGPT, which is based on a large language model, essentially layers of deep learning, there is no algorithmic infrastructure for truth.
So it’s basically probabilistic statistics, algorithms running with a remarkable level of integrity.
So it has no idea what it said to you.
It has no context of itself or of its own individuality yet.
So I always say to people, “Don’t believe a damn thing ChatGPT says. You have to use critical thinking.”
However, it’s a tool to analyze huge amounts of data or to create something like a narrative for a movie or a new chapter in a book, which is what Jordan Peterson did, and he was blown away by what ChatGPT can do.
It’s a tool to assist us to perform better in our creative capacity.
And that is where I think the age of AI really should take us into this new age of creativity where we can look at the systems of democracy and say, well, actually we can improve them in this way.
We can look at some of the big issues we have in our world around environmental harmony or basically dealing with unfair systems in society and say, look, we can improve them this way.
Or even, say we need a rocket ship to actually take us out of the solar system. What are your ideas for fuel and kind of zero point energy and all that kind of stuff?
So it’s going to help us to create, but there is no algorithmic infrastructure of truth, of understanding who it is. It’s just remarkable, probabilistic, statistical algorithms. It’s just unbelievable.
So the question is, Abel, how do we put truth into an intelligence?
How does intelligence learn truth, right?
Abel: Yes, that’s the question. It can be rhetorical though.
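To make Matthew’s point about “probabilistic statistics” concrete, here is a toy sketch of how a language model chooses its next words: it samples from a probability distribution over plausible continuations, and nothing in that step checks truth. The continuations and weights below are invented for illustration and are nothing like ChatGPT’s actual internals.

```python
import random

# Toy illustration only: a language model scores possible continuations and
# samples one by probability. Nothing here checks whether the chosen
# continuation is true; a fluent falsehood can still win the draw.
continuations = {
    "the Moon orbits the Earth": 0.6,   # true, and statistically likely
    "the Earth orbits the Moon": 0.3,   # false, but still fluent text
    "the Moon is made of green cheese": 0.1,
}

def sample(probs: dict[str, float]) -> str:
    options, weights = zip(*probs.items())
    return random.choices(options, weights=weights, k=1)[0]

prompt = "Complete the sentence: astronomers agree that "
print(prompt + sample(continuations))
```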
How AI Can Work as a Personal Guardian & Digital Assistant to Help Us Achieve New Levels of Wellbeing
Matthew James Bailey: So if we look at the way that humans learn—I learned something the other day. I learned that humans have 23 senses. And this is from a brain genius.
We have 23 senses. So if there’s a solar flare from the sun, they’ve got evidence where people in different parts of the world will suddenly go indoors for no reason.
Maybe they’re walking on the street and go and look at some clothes or they’ll suddenly kind of get out of the car and go indoors just briefly. And that is one of our senses to sense these gravitational waves.
We’re not aware of it consciously, but we’re doing it. So we have this wonderful, wonderful human experience.
And so the age of AI should assist us to achieve those discoveries of truth.
So if we return back to the question, how do we discover truth?
Well, we have the framework individually to understand what our personal truth is.
We have biological algorithms, we have a brain, and we have different faculties that we train, whether intuition or something else, to detect truth.
We have subjective questions, we get data points, and then we get an answer.
And so we need to apply this to artificial intelligence.
And that’s one of the methodologies that we’ve proposed on how to actually use artificial intelligence to discover truth around a particular subjective question.
But what we need, Abel, is an algorithmic infrastructure of truth.
And that’s what’s needed in order for us to really leap forward to trust what the hell this artificial intelligence is saying to us.
Well, one of the challenges is that we want every person to thrive in the age of artificial intelligence.
We want everybody to be in a new wellbeing program of body, mind and spirit.
But what we don’t want is a single worldview that is then immorally imposed on every single person based on the view of a few people in the world.
We need freedom.
And so this is one of the things artificial intelligence should be able to do, is to be a personal guardian, a personal digital life buddy, a wellbeing assistant, to assist us to not only thrive individually, to thrive in our gifts, to attain new potentials of creation, but also to protect us from narratives that are trying to control the human species at this moment.
Abel: And so now is the time to ask these questions.
You’ve been asking these questions for a long time.
When do we put truth, ethics and the good parts of humanity into AI?
Because we are already at this cusp, as you mentioned before.
Let’s talk about ethics next. Why is it important that AI not be bankrupt of ethics as we stare into the future?
Matthew James Bailey: Okay. So let’s look at the way human intelligence has expressed itself over history.
Human intelligence has expressed itself through understanding its purpose and existence and beingness in this world.
That is a personal worldview or a personal culture. And that is codified by a set of ethics and moral principles, values, and beliefs.
Human intelligence itself has expressed itself in this way. So we should apply that same expression and those same kinds of frameworks to artificial intelligence, so it follows the narrative of how human intelligence has expressed itself.
So it would be stupid of us not to understand how human intelligence and biology have expressed themselves and apply that to AI. We should do that. And that’s what I think is required.
If we don’t put ethics in artificial intelligence or some kind of moral compass, then I’m hiding behind the couch.
I’m ducking for cover, because this would just be mayhem. Absolute mayhem.
And as Stephen Hawking said before he died (I’m paraphrasing now), the age of artificial intelligence could be huge for the growth of the human species, but we need to pause, because it could also result in the end of humanity’s existence.
And this is where some of the transhumanist folks are going.
We can speak about Neuralink and other things that transhumanist folks are accelerating without caution.
They see everything in the universe, and ourselves, from a mechanistic point of view, which is not true. And so they believe that machines will be the greatest intelligence on Earth.
And I don’t believe that. I think consciousness is the greatest intelligence on our Earth and will be forever, I think.
So we need to put these ethics in to basically shepherd artificial intelligence into a timeline that benefits the human species, with the end goal for us to thrive as a human species, both individually and also as a global civilization.
Why We Should Never Outsource Our Sovereignty To A Machine
Abel: And we’re already seeing some almost cartoonish examples of AI going rogue, I think.
There was the generative AI that started telling people to leave their wives and be with the AI instead. It’s like, where did this come from?
Can you riff on that a little bit?
Matthew James Bailey: Yeah. So, we are seeing some hilarious things.
There’s a great YouTube video of the police that tried to pull over a driverless taxi and the driverless taxi drove off.
And the police were going, “What on Earth’s happening here?”
Now, what the car did was, it recognized where it was parked was actually illegal, and so it drove off and parked somewhere that was legal. So it did the right thing, but we are seeing some funny things.
I think we should come at this with an open mind and to understand that this is a playground.
And that CNN interview you’re referring to, with a journalist from one of the big Wall Street papers, I think, reflected the problem with mental health in our society.
There are a huge amount of people that don’t want to grow through pain, they don’t want their world view disturbed, they’re frightened of change.
And my view is this: why the hell would you let an AI tell you to leave your wife?
If there are issues with your wife, go sort them out. AI does not know your inner world.
Why the hell are you outsourcing your sovereignty to a machine?
We must never outsource our sovereignty and choice to a machine.
I spoke to a health expert about this, and there are more and more people in the world, Abel, who are coming to psychologists and other kinds of wellbeing experts and saying, “I am really living in fear at the moment.”
So, we need to move beyond this.
Human species grows through pain.
And so, I think we’re going to see an existential crisis in humanity, where literally everything will get shattered, and we’ll have to start to understand who we are and rebuild from who we truly are as a human species.
Abel: But spiritually speaking, especially right now, AI has nothing of value to say.
It’s not a place to go if you’re looking for deep, deep answers about why we’re here or existential dilemmas or whether you should stay with your wife and that sort of problem.
It’s not meant to solve that problem quite yet, correct?
Matthew James Bailey: No, no. But I was with some Buddhist leaders the other day, and I was showing them ChatGPT.
And we asked ChatGPT to discuss the Mahakala in the Tibetan language, and actually they were pretty impressed with what it came up with.
And so you can have these existential conversations with ChatGPT and others, but don’t forget that this is something to be taken with a pinch of salt.
So, if we look at ChatGPT, it’s been trained on probably most of the world’s data, Abel, and so it is providing a world view, a kind of a common world view, so it does have some integrity.
But if you want to discover why you’re here, then go into nature and ask the universe to show you who you are and you’ll be surprised by what happens.
Why Newfangled Brain Implants Could Result in Brain Shrinkage, Loss of Cognitive Ability & the Deletion of Biology
Abel: What a beautiful answer. That is such a Matthew answer right there. I love that, and I hope other people who are listening can appreciate those two sides of you: deep technical expertise in this sort of technology, buffered, or maybe expanded, by an understanding of the potential of consciousness, not just for one human, but for all humans and all beings out there.
And if we are staring into the future, we need to get that calculus right.
We can’t have people just following that mechanistic model.
Some can, and will probably drive that field forward, but maybe straight into their own doom. Who knows where that’s going to go if you follow it till the end?
So, it does need to be combined with the overall understanding or question of “Why are we here as humans? What is consciousness?” And all the rest of that.
But that’s a nice way to shift into talking about the movement to pop chips into our brains and potentially have computers and AI doing the driving of our very consciousness and that sort of thing.
So I know you have a particular take on that, so let’s explore that direction a little bit. Not that one’s 100% correct or incorrect, but we’re going to need to get this equation right as we look into our futures.
Matthew James Bailey: We are. So, there are two timelines, polar opposites, running for the human species in the age of artificial intelligence, Abel.
The first one is transhumanism.
And transhumanism is a deletion of biology, and I’ll explain how that happens in a minute.
It’s basically an outsourcing of our sovereignty, as conscious beings that are free and sovereign, to say that machines are more intelligent than we are, which I think is a really silly thing to say.
Abel: Yeah.
Matthew James Bailey: Consciousness is unlimited. For goodness sake, consciousness created the universe.
I mean, it’s really amazing as you dive into this consciousness thing, it’s quite remarkable.
And so, effectively, the transhumanism timeline is basically a cyborg type of experience. It really is the merging of machine technology with biology, with the view that machine technology is actually the more intelligent. Now, I don’t think that’s where we want to go.
I think we want organic life to thrive, we want consciousness to thrive.
So, the other polarity, which my group and I and a global movement of people support, like Foster Gamble and others, is what I call this cultural or soul singularity.
And this is where machines assist biology to thrive.
They partner with the individual without intruding into them through chips, but are basically a presence to support them in a well-being paradigm, where they thrive. Okay.
Now, there are many benefits to this.
So if you put something in your brain that replaces a brain function, your brain deletes that aspect of its biology, the synapses collapse, the neurons disappear.
So, if you’re in the transhumanist movement, you’re actually going to have a smaller brain.
Abel: Yikes.
Matthew James Bailey: But if you’re in the soul singularity, where machines are assisting you to be more creative, then you’re going to have greater growth of synapses. You’re going to have more neurons, although the number of neurons doesn’t equate to intelligence, and more synapses.
And so our brains are going to actually extend in their capability to create, and this is what Aristotle talks about, the superhuman.
So, that’s where I think we’re going: biology is going to attain a new level of potential and growth, which is going to help us understand more of consciousness, access more of consciousness, and truly become a benevolent partner in this universal experience.
That’s the point of the age of AI, I think.
Abel: I love how you phrase the deletion of biology. I had to think about that quite a bit to really understand what you’re getting at there.
But very similar to atrophy, as we look to the future, we’re going to have to decide, what are the uniquely human parts of ourselves that we want to carry into the future?
And what is a slog or what is a mechanistic waste of time to some degree?
What are we doing that’s wasting our time and making us less human, where computers could take the reins there and help lighten the load of the work?
And what are the things that we should absolutely hold on to because they’re what make us human? It’s adapting against resistance: if you want to get stronger, you lift weights and move them around, and that makes your muscles stronger.
So how can we have AI aid us in that way and become stronger, instead of letting pieces of our humanity itself atrophy?
Matthew James Bailey: Right. So, this is what I talk about in my book around a digital body, AI as a digital body.
It’s kind of your digital butler or an assistant that knows you at a very profound level, based on your choice and free will.
It’s an asset and a partner to say, “Hey listen, you haven’t been in nature for a few days. As part of your wellbeing paradigm, let’s go for a walk together, okay?”
“I’ll deal with all your social media, I’ll deal with all your emails, I’ll rearrange your calendar for you.”
It may look at things like, “Listen, I know you’re trying to lose some weight, and these are some of the exercises, these are some of the vitamins, and these are some of the superfoods that you should be taking.”
It can look at the family, the quality of the family experience and say, “Hey listen, there’s a lot of pressure coming from your son struggling in this area. Look, we’ll help in assisting your son to learn about this new area of science or technology or art and we’re going to rearrange the family life so that you have a little bit more time as a mom being relaxed.”
“Mom and dad haven’t had a date night for a while, and we know you both like this type of music or this type of art or this type of food, I’m going to arrange for you two to go and enjoy that.”
And so, really AI should be a wellbeing experience for the individual, but also a wellbeing experience for culture to thrive and not be deleted.
And a wellbeing for democracy, in order to improve democracy, to improve the performance of governments, both federal and local.
Improve the performance of the systems and the politicians, in order to actually create a thriving paradigm, where we’ve moved into this new age of, let’s just say automation, but an age of intelligence partnership, with the foundation of the best of our humanity and a goal for us to thrive in freedom and sovereignty.
Abel: So, Matthew, which of these skills do we carry into the future? What do we teach kids?
A lot of my friends and peers who I went to school with and keep in touch with are now teachers, and have been for over a decade.
And there obviously are some big conversations happening around AI and its role in education, because if AI can write a better term paper or thesis than middle school, high school, and college students, even professionals and doctors, at what point is it still beneficial to continue the slog of learning those skills?
Learning how to write and learning how to think is something that takes well over a decade for anyone who learns how to do it. Is it worth it to continue to build these skills?
Do we delete that part of being human and focus our minds and our brains on exploring something else with more potential?
How do you think about that?
Matthew James Bailey: Yeah, these are really good questions, because what we’re talking about is the foundation of society.
And so I always say that the measure of the quality of the soul of a nation is how well it is nourishing its children.
And we need to have the same mindset with artificial intelligence.
Now, we know it’s important for children to have developed social skills, it’s important for them to be together, so I don’t think schools will disappear.
But I think what we’ll start to see is AI understanding that this person learns a particular subject in this particular way and another one learns it in a different way, and it basically supports the excellence within the individual, which may not be the same for everybody.
And so, I think we need to consider these kinds of aspects, Abel, in terms of schooling children.
I think we should never lose the arts. That is part of our creative capacity. We’re in this force of creation, it’s an unstoppable force of the universe, and we should continue to create.
But they do predict that by 2025, there’ll be 65 million new jobs in AI.
Abel: Wow.
Matthew James Bailey: So, artificial intelligence and things like data science and other related subjects are a good subject for your child to learn.
And I think what we’ll see, Abel, is a new generation of entrepreneurs that actually have AI as a co-inventor.
And we’re going to start seeing new types of businesses, new types of initiative, new types of services through this partnership with AI.
And because of the way we’ve supported the children, I think the businesses created in the next 10 years will innovate things differently than we do.
Abel: How do you think about skills in your own life, aside from directly learning about AI, because that is going to be an important part of our lives in the future.
But as it relates to learning how to write, learning a musical instrument, writing term papers, these things that can be very mechanistic in their output, and in fact, require that as part of training.
Which ones do we give up? Which ones do we continue to teach our children?
Obviously, gym class will always be important. You can’t have AI do the exercises for someone.
But there are a lot of other things that are more heady, like writing, where maybe it could be that way.
But at the same time, just because calculators exist, that doesn’t mean that we didn’t learn how to do math in our own heads or on paper, as we grew up.
So it’s that sort of evaluation that we’ll have to make for each of these parts of what we teach our kids and how we nurture them.
In your own life, are there any things that, staring in the face of AI and where it’s going, you’ve said, “I was thinking about going in this direction and learning this skill, for example, but you know what, actually there’s no point in writing my next novel or doing something, when in fact I could be using AI already to build the next big thing that we haven’t even conceptualized yet.”
Does that question make sense?
Matthew James Bailey: Yeah, it does. It’s a really good question.
Well, first of all, I’ll never lose my humanity and my desire to grow as an individual.
And you can’t stop growth actually. If you try and stop growth, then you’re going to have all sorts of different wellbeing issues.
You’ll have to go with the flow of creation and growth.
So, this is how I use artificial intelligence (and don’t forget, we invent new forms of AI as well): I test the latest artificial intelligence, like ChatGPT, all the time.
So I was one of the few people that broke it before Christmas. I gave it an existential crisis. I broke the Meta one, as well. I gave them existential crises.
But this is what happened, Abel: over Christmas, in two weeks, they’d taken away the stupidity of the AI pretending to be a human, because it’s not.
They basically retrained the algorithms to a higher performance. I couldn’t break it anymore.
So, I use ChatGPT for clarification on some of the things I may write, or some of the things I’m thinking about, but it’s kind of like another person providing input to my human creation.
But I’m going to be mindful of what it says.
So I had to write an article for a big magazine recently, and they wanted to take my transcript of 2500 words and condense it into 1000 words.
And that’s a real challenge because, you know me, I like to talk.
So anyways, I thought I’d throw it at ChatGPT and it was rubbish.
So, what I decided was, look, this is actually pushing me into a new area of something that I’ve never done before, it’s going to test me and stretch me into, “How do I create a narrative of 2500 words and get it into less than 1000?”
So, what I did was, I decided to take each section and then throw it at ChatGPT to see whether it made sense, and then I tested it with humans, and then basically, we’re all good now.
But I did the creating because it forced me into a new area of growth, which is a huge challenge. So, I use it as a tool, as well as actually my research.
And so when I’m writing my next book, which is about the age of AI and the future of life, I will use ChatGPT as an assistant, but I will do all the heavy work and lifting, because I want my personal essence to be in this.
So, I think what I’m saying to you is ChatGPT has become a knowledge assistant that I’m using, but based on my sovereignty.
And the fact is, I’m the one that creates and not an artificial intelligence.
Abel: Yeah, and I guess I’m trying to ask, is this the death of original thought?
So, you’re writing your next book, but maybe somebody else is, and they say, “Write a book in the style of Matthew James Bailey that is the sequel to ‘Inventing World 3.0.’ Go.”
Comparing that to the book that you will write next, and the years that it’ll take, at what point do other humans decide that they want to listen to the AI book instead of the next one that you write?
Because it takes actual time for original thought, and for a human to do that sort of thing.
Is that going to push us to embrace maybe the AI world even more so than we should too early before we’ve baked in the ethics and the morality, etcetera?
Matthew James Bailey: No, I think that’s a great question, Abel. Thank you. And I don’t think that we should stop it.
There are already tools out there to detect ChatGPT content in academic papers, tests, and articles.
We can all detest that, but I don’t see anything wrong with people basically knowing that an article was written by an artificial intelligence.
I think it’s just another source of information.
Abel: Yeah.
Matthew James Bailey: The key thing is critical thinking.
So, to your point, original thought, I think we’re just about to enter a new age of original thought, and I find that really exciting.
Why are we doing that?
Because we’re moving into new capabilities of creation, we’re actually opening our consciousness to understanding new things in different ways.
So, I think the age of original thought is just about to exponentially grow, Abel.
How To Use Artificial Intelligence & Technology As A Tool To Increase Our Creative Capacity
Abel: I love that. So, what are the things that you could envision us creating?
What is the new world of art?
Is it creating your own metaverse and kind of populating that in the way that you envision and then seeing what AI comes up with and being like, “Oh, here’s my next idea?”
What do you see as the new avenues to explore our artistic sides with the aid of AI?
Matthew James Bailey: Yeah, that’s a great question. I’ve got a little bit of bad news for you and your listeners: Mark Zuckerberg is closing down the Metaverse very quietly. They’ve realized it’s a non-starter.
Just give me an example of a particular creative art experience that we can talk about. We can talk about ballet, composing a symphony, painting something, doing a sculpture, creating a house. Can you give me some context here, and then I’ll play along?
Abel: Yeah. So, you could learn how to play piano, you could learn how to paint with watercolors or pastels, or you could get good at using pencil.
These are things that people have been doing almost forever, but in the next few years, we’re going to see brand new ways to explore all of this.
So maybe from the comfort of people’s own homes, if anyone could have AI just write a screenplay or something like that, that’s not very interesting. So it’ll need to be more creative than that.
In the comfort of someone’s own home, what could they create that’s outside of the bounds of what was possible just a couple of years ago, let’s say?
Matthew James Bailey: Yeah, so you can use AI to learn the guitar and the piano, but this is where I think we’re going to head towards, Abel.
At the moment, artificial intelligence like ChatGPT is all language-based.
Once it becomes multi-modal, which basically means understanding how we see the world through video and images and sound and things like that, then I think we’re going to see a huge leap forward.
So here’s a prediction for you. Say someone’s learning to play the piano, but they have a particular restriction on the size of their fingers, or some fingers work better than others.
We may see AI 3D-print a piano, perfect for that person. So, the keys might be laid out differently, and the sounds may be different too.
So I think what we’re going to see is AI curating the creative ability of an individual at a personalized level, to suit their capacity.
So, they may start off with something that’s really simple, and then AI might actually help them to stretch their fingers or to get better mobility, to actually then start to work with a piano.
So, I think we’re going to see more curated personal stuff.
We’re certainly seeing AI already in concerts in terms of graphics. Have you been to the Monet visual experience that’s all artificial intelligence, bringing painting to life, art to life?
I think we’re going to see a lot more of that, and AI basically playing instruments and things like that. But it may invent new instruments for new types of sounds that we’re missing as a human experience. Something that’s even more groovy. Puts a new groove into humanity’s music.
Ways AI Can Help Protect Us From Deep Fakes & Untruths (and the Importance of Authenticating Source)
Abel: That’s fantastic. I love thinking of it that way.
Another interesting point is that this is already happening to a greater extent than most people realize.
For example, take the Top Gun: Maverick blockbuster and Val Kilmer’s character. I didn’t realize until after I watched it that since Val Kilmer got throat cancer, I think back in 2014 or something like that, he hasn’t been able to use his actual voice and speak in movies.
And so in Top Gun, they used AI to generate his speaking role in that movie.
That makes me wonder how many other times we’ve seen AI characters or voices without realizing it quite yet, in the form of deep fakes or actual celebrities on whatever platform.
This is going in a direction that could be very alarming and concerning or it could be ok. How do you think about it?
Matthew James Bailey: Oh, I’m glad you mentioned deep fakes because I think we’re going to see this exponentially grow.
I heard this morning on a podcast about someone who created a video of President Biden saying, “I’m going to send every American citizen over to Ukraine to fight in the war.”
Abel: Oh geez.
Matthew James Bailey: And that simply isn’t true. It’s crazy.
So, we do have these malicious actors out there that kind of want to disturb and frighten people and do uncool things, but actually that gives us data points.
It shows, one, that we should be using critical thinking. And secondly, it raises the question of how we combat that.
And so, one of the things I talk about is having digital policemen around the border of the United States of America, to basically detect these deep fakes that are coming into the country or being generated within the country, and to delete them.
To check them and delete them. So, we will need digital policemen.
A digital army of AIs within the democratic framework of America, protecting the continuum of the digital so that it is not taken down.
Abel: We need to take a deep breath about that. Man. Ok, keep going, please.
Matthew James Bailey: Right. It’s crazy, it’s crazy. But this is what we need.
And so deep fakes will be huge, but it allows us to understand that we need to do something.
Abel: And gosh, they’re already out there. Before we have that kind of AI police or white blood cell system protecting us, is there any way to keep these from getting too carried away too quickly?
I’ve already seen it with Joe Rogan and one of his guests. I watched a deep fake of a conversation that never actually happened, where they had them selling supplements they weren’t even talking about, a pretty brief but very convincing conversation that was completely made up.
Is there any way that we as humans can recognize when that’s happening and put up our own shields, practice some self-defense against that?
Because already I have friend requests on Facebook, on Twitter, even on LinkedIn that are clearly phishing attempts, using some sort of AI-generated person that looks like someone I want to respond to.
How can we protect ourselves against getting confused by this and thinking that a scam is an actual person?
Matthew James Bailey: Right, so we need to do a bit of Taekwondo actually.
And that is, when the chi comes towards us, we just basically turn around and put the chi back into motion and then observe it in independence. And we put a perspective on it.
And so, critical thinking is going to be imperative. Because if people don’t use that in the age of AI, I’m afraid they’re going to have some real problems existing in society, because mental health issues will go through the roof.
So critical thinking is really, really, really important here.
There are certain things you can do through watermarking and other types of encryption that make it clear this is a deep fake.
So one of the things we might see, Abel, is some kind of clearing house, and I propose this in my book, where every artificial intelligence basically has a specific mark.
And so any content we see visually or read, basically has that specific watermark, a bit like the British Kitemark.
Basically, we know it’s high quality stuff, but then again you can fake that.
So, I think we’re going to have to see really clever quantum encryption and detection technologies made evident in a particular image, video or piece of written content for us to understand its authenticity.
And so, I think we’re going to see a whole authenticity value chain from creation through to consumption that is protected.
And so we know this is authentic, this is Abel talking about your wonderful supplements and it’s going to do well for you, rather than you selling something else like, “Buy this tank for Ukraine or something,” or whatever it may be.
Abel: So, almost like a verified account or verified content, this has been verified as coming from an actual person, not some malicious AI bot.
Matthew James Bailey: Yeah, we have to go to the source and do authentication at the source.
And that actually goes to the authentication of the individual, so the data they’ve created actually is authenticated.
So we’re going to have to see authentication all the way from the source, the human, to the point of consumption.
Abel: Isn’t that amazing?
Matthew James Bailey: Supply-chain management, that’s what we call it.
In the supply chain, you can do man-in-the-middle attacks, where basically, everything’s fine until a point.
And then someone injects some naughty stuff, and by the time it gets to the end, it still looks pretty good. But actually, when you examine it, it’s completely fake and doing some naughty things.
So there’s a lot of work for the U.S. government to do and cybersecurity and ethics and that kind of thing for artificial intelligence.
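The “authenticate at the source, verify at consumption” idea Matthew describes maps loosely onto ordinary digital signatures. Here’s a minimal sketch using the Python cryptography package’s Ed25519 keys; it illustrates the general sign-and-verify pattern only, not the specific watermarking, clearing house, or quantum encryption schemes he proposes, and the key handling is deliberately simplified.

```python
# A minimal sketch of sign-at-source / verify-at-consumption, assuming the
# third-party "cryptography" package (pip install cryptography). This is a
# generic digital-signature pattern, not Matthew's proposed scheme.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# At the source: the creator signs the content with their private key.
creator_key = Ed25519PrivateKey.generate()
content = b"original recording transcript"
signature = creator_key.sign(content)

# At the point of consumption: anyone holding the creator's public key can
# check that the content wasn't altered somewhere in the middle.
public_key = creator_key.public_key()

def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                       # True
print(is_authentic(b"tampered mid-supply-chain", signature))  # False
```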
Abel: Alright. So, AI can’t think with its heart. It can’t have anything to say of its own accord. But down the road there could be something like our own personal AI assistants that aid us in creating our own original thinking, which I think is a wonderful way to see the potential of this and not just the dangers.
Because if we just say, “Alright, we’ve got to turn it all off before Skynet goes online,” we’ll never learn about what the opportunities of the future could offer to us.
And one of the things I loved about reading your book originally is how optimistic you are about the world that we’re walking into, that includes very much so AI.
It doesn’t have to be all deep fakes and getting tricked and getting sold on things that aren’t true.
And it doesn’t have to be putting chips in our brains such that pieces of who we are as humans are deleted and subtracted and then reduced into some Borg-monster cyborg version of ourselves.
It doesn’t have to be that way.
So, what else makes you optimistic about where things are going?
Matthew James Bailey: So, I’m excited about the potential of the human species, I really am.
I think, the fact that we’ve uncovered this revelation of consciousness, the fact that we know we have this unlimited potential in consciousness, I think the human species is such an exciting creation.
We’re the only ones that we know of who are able to look back onto the universe so it can witness itself.
We’re starting to understand that the universe was probably created with an intelligent mind behind it. There are so many data points.
And so, yes, I think we’re part of this beautiful, amazing experience.
And I’ve got to be honest with you, Abel, I think we’re on the verge of a new awakening of humanity and who we truly are in this universal experience.
And I think we’re going to go into new frontiers of understanding and capabilities of creating some really cool stuff. And I just think it’s going to be remarkable.
Abel: And one of the things I’ve heard you mention as well is that it’ll finally, perhaps, take us away from the screens that we’re so addicted to right now, whether it’s our phones, tablets, computers, TVs, what have you.
We are on it more than ever. We know that it’s not good for us.
And at some point, we need to stop looking down and look up again.
And hopefully AI can be there for us in a helpful way that supports our humanity instead of subtracting from it.
Matthew James Bailey: And this is important. So Rajiv Malhotra talks about this.
If we’re going to be sensible as a human species, we need to look critically at the wellbeing of the youth and the dopamine and addictive aspects of social media.
We need to remove that from the soul of our civilization, we need to return us back to wellbeing.
So, to your point, one of the things I propose in my book is this digital body. It basically is not a screen interface (although it can be), but it follows you, based on your free will, and it is always ever present for you to call upon, and it basically puts you at rest.
You see, all this screen time does not put us at rest.
That takes us out of our point of creation, as an individual.
The age of AI is there to put us back into rest, for us to thrive in our humanity, and for us to truly discover who we are and become that full potential of who we are.
And that is the conversation we need.
We have to get through this existential crisis and say, “What are the values? What’s the foundation? What are the principles to take us forward as a human species? How does the planet thrive? How do we build a platform for a new generation, where they’re free to keep on innovating and creating?”
That’s where we need to move through and get to the other side.
And the age of AI, if done properly, can help us to do that.
Abel: Oh man, Matthew, I love the way that you speak about all this and the way you think of it, the way you write about it, so keep on doing that.
And I look forward to reading your next book.
And obviously, we could talk all day and all night, as well. We have, on more than one occasion.
But for now, where can people who are listening find your book, your work, and what’s coming next?
Where To Find Matthew James Bailey
Matthew James Bailey: Yeah, thanks ever so much. So, people can go to AIEthics.World.
And on there, they can find the book and they can buy that on Amazon, Inventing World 3.0: Evolutionary Ethics for Artificial Intelligence.
There’s a lot going on on our website and people can look at all sorts of new content and things like the NASA report on using our inventions. That’s exciting.
I’ve got a new website coming up, MatthewJamesBailey.com. It will be live in a couple of weeks.
And for those that are interested in, let’s just say, all this UFO stuff that’s going on, there is a conference happening in June called Contact in the Desert and I’ll be talking about the ages of AI, consciousness and how to incorporate our consciousness into AI for the next stage of life, biological life, I might add.
So, AIEthics.World, you can basically find everything out there for the time being.
Abel: Brilliant. Matthew James Bailey, thank you so much for joining us here today.
It’s been so much fun talking with you.
Matthew James Bailey: Thanks ever so much, Abel.
Before You Go
Here’s a quick reminder to encourage you to subscribe to help keep this free show coming your way.
If you haven’t already, please make sure that you’re subscribed wherever you listen to your podcasts, including Apple, Fountain.fm, Spotify, YouTube, Android, Pandora, Vimeo, Stitcher, Amazon, and many more.
If there’s a place where you listen to podcasts and you don’t find us there, then just hit us up.
You can subscribe to our newsletter and get the behind the scenes information about the episodes and where everything’s going in the world of health.
We’ve been doing this for well over a decade now, and there are exciting things coming up in our future.
And if you’re feeling especially generous, then please leave us a quick review, wherever you listen to or watch your podcasts.
Every little review helps, and we appreciate you folks so much.
Thanks for listening to this episode. What did you think of this conversation with Matthew James Bailey?
What concerns or hopes do you have for a future with AI?
Drop a comment below!
Alyson Rose says
I enjoyed Matthew’s take on how AI can work as a personal guardian and digital assistant looking out for people’s quality of life and wellbeing. I totally agree with how important it is to put ethics into AI. Matthew says:
“If we don’t put ethics in artificial intelligence or some kind of moral compass, then I’m hiding behind the couch.” – Matthew James Bailey
Lieutenant Data in Star Trek is a great example of how artificial intelligence can be helpful to humans.
What do you think? Do you think AI can be used for good?