Here’s a question that sits more in the philosophical zone of artificial intelligence: how do we think about these technologies impacting our world? How do we understand the emergence of LLMs and their capabilities, and what happens to our societies, not just in business, but beyond that, in the places where we live?
A corollary question is how we apply AI to commerce and development, and how we view the crucibles where these technologies are actually built and refined before they reach consumers.
Listening to Alvin Graylin speak recently, I noted some helpful analogies and frameworks he offered for an AI roadmap, all worth a look when considering the future of our coexistence with powerful digital sentience.
Three Types of AI Approaches – and a Fourth One
First, Graylin enumerated three distinct approaches that people are taking toward this new technology.
The first is to slow down and move at as close to a snail’s pace as possible. This is where you hear the most about heavy regulation of AI development and deployment, where advocates urge universal caution and warn that we can’t allow these systems to spiral out of control.
A second school of thought might be called the “accelerators.” They believe it’s in our best interest to move as fast as possible, improving the world at the most rapid pace we can manage.
Then there are the selective accelerators, a third way of looking at the issue:
“We’re heading to a world of abundance, and we understand that that’s the future,” he said. “We’re going to change the way that we think.”
But Graylin has his own construct, too, and it’s quite different. He asks us to picture all of humanity standing on a bridge, with or without tethers, and to consider how we stay on that bridge rather than fall off into oblivion.
“We have 8 billion people already, everybody’s tied together,” he said. “So (if) we fail, we all fail. I think that’s something that is not encapsulated in any of these three concepts.”
That struck me as important to keep in mind as we figure out how to harness the power of a rapidly evolving artificial intelligence.
The Big Issues
This also impressed me: Graylin noted that one of the biggest impacts will be job displacement, and not just the displacement itself, but the mental health issues that come along with it.
This is especially true in America, where a person’s job is tied to their identity, their routine, even their healthcare. For most of us, our jobs inform our sense of self. Losing them to AI brings considerable angst and discomfort, if not existential despair that can work itself out in all kinds of disturbing ways.
He also mentioned misinformation and AI’s potential to sow confusion and chaos.
Then, too, Graylin noted the geopolitical race around the development of AI technologies. We’ve heard people call the DeepSeek announcement a “Sputnik moment” for the U.S., as American officials tighten export controls in an effort to contain China. There’s no doubt that the Americans and the Chinese are competing in key aspects of AI development.
However, Graylin also spoke about the prospect of AI maturity: certain problems will crop up initially, he suggested, but they will subside as the technology matures. That’s something we can hope for, since the power of AI seems to be growing exponentially.
All of this informs what we do at events and conferences, in the classroom, and in the business world. We have to reckon with both the good and bad sides of AI disruption and figure out how to tackle these problems now, before they get worse. We have to move forward confidently while holding onto our humanity, making sure it is valued in a world where robots can do almost anything.