In today’s column, I examine the emerging attempts to surreptitiously blur the distinctions between contemporary AI and the vaunted but not-yet-attained artificial general intelligence (AGI). As if that’s not enough, there is also a concerted effort to stretch things even further and proclaim the imminent realization of artificial superintelligence (ASI).
The shenanigans are happening right before our eyes, yet seemingly remain out of sight.
Let’s talk about it.
This analysis of an innovative proposition is part of my ongoing Forbes column coverage of the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Thought Experiment On The Value Of Defining Things
Socrates famously said that the beginning of wisdom is the definition of terms.
Definitions make a world of difference. They are vital for how we communicate and what we mean when we communicate with each other. You see, without agreeing to specific definitions, people can speak right past each other, oftentimes not even realizing they are doing so.
One of my favorite examples of this kind of definitional dilemma is the following riddle.
Two people are standing and facing each other. They are both holding something behind their backs. One of them says that they have an apple behind their back. The other remarks that they too have an apple behind their back. Upon a count of three, they both reposition their respective objects and hold them straight out in front of them. Hey, one of them loudly exclaims, you told a lie, angrily declaring that the other person isn’t holding an apple but instead a tangerine.
I ask you this question – which of the two is holding an apple?
Well, the answer is that you can’t say for sure. You might be tempted to contend that the person calling the other one a liar must be the one with the apple. The thing is, all of this depends on exactly what the word “apple” means. It could be that the person holding the proclaimed tangerine believes that an apple is defined as a small citrus fruit having an orange-red color. Thus, they truly believe they are holding an apple.
The two people had not discussed and agreed beforehand on what the definition of an apple consists of. I realize this seems farfetched because we all commonly know what an apple is. But suppose we are faced with something that doesn’t have a hard-and-fast definition. In such an instance, the loosey-goosey facets might allow for a great deal of confusion, finger-pointing, and slippery contrivances.
Yes, indeed, that takes us to the definitional dilemma entailing the meaning of bandied-around terminology consisting of artificial intelligence (AI), artificial general intelligence (AGI), and artificial superintelligence (ASI).
Formally Defining AI, AGI, And ASI
I have previously explored the formalized legal definitions associated with AI, AGI, and ASI, see the link here.
Here’s why the legal definitions are crucial. All kinds of new federal, state, and local laws are being written that ostensibly are intended to oversee and regulate AI of any kind (the same goes for international laws encompassing AI). The principled basis for these laws is that we need to try to rein in AI before we suffer adverse consequences, and especially before AI becomes a said-to-be existential risk. For more on the latest AI-related laws, see my coverage at the link here.
There is a twist that might seem surprising and spur a tad bit of dismay.
By and large, these burgeoning and rather wanton sets of AI-related laws tend to define AI in their own idiosyncratic ways. Each definition of AI that is contained in a respective law is often devised from scratch. Sometimes an already widely circulated definition is used, but even there the AI definition is typically tweaked quite extensively. It is a mess.
Why care?
Because the hodge-podge of AI definitions means that this law or that law might be referring to apples when it really intends to cover tangerines or inadvertently covers tangerines when it was meant to oversee apples.
That is a seriously troubling situation.
I’ve predicted that we are ultimately going to find ourselves in a legal quagmire on this.
Any organization or person brought up on charges associated with a law that has an AI definition will legally fight tooth and nail in the following manner. They will first contend that the AI as defined in the law is not the AI that they made use of, and ergo the law doesn’t apply to them. That might provide sufficient wiggle room to escape the AI-focused law. Second, they will undoubtedly claim that the AI definition is “wrong”, and that AI is properly defined in some other way – likely proffering a definition that will swing to their advantage.
Boom, drop the mic.
The gist is that rather than those laws making clear what they encompass, the lack of an altogether commonly agreed-upon definition across the board for AI, AGI, and ASI is planting a legal time bomb of sorts. Whereas lawmakers might think they are doing good and are going to curtail those baddies that go too far with AI, the chances are that the courts will be overburdened with trying to nail down thorny legal issues of what counts in the law as a reasonable and acceptable definition of AI, AGI, and ASI.
Whoa, you might be thinking, if that legal morass is pending, why hasn’t it already happened?
The answer is that we are still so new to these AI-focused laws that there aren’t yet sufficient cases for this to become a noticeable issue. It will take a smidgen of time for AI systems to get companies into real trouble. Once that occurs, you can bet that the legal mechanisms will start to grind and savvy lawyers will take any shot they can to undercut the law that their client allegedly broke.
Time will tell.
Societal Definitions Of AI, AGI, ASI
Let’s shift away from formalized legal definitions and discuss informal societal definitions.
You indubitably see references made to AI, AGI, and ASI in the media. Artificial intelligence is hot, and brazen headlines make this or that proclamation about breathtaking breakthroughs in AI. I dare say you would be hard-pressed to find postings and articles that don’t refer to AI in one fashion or another. It is a ubiquitous topic these days.
The rub is this.
Societal definitions for AI, AGI, and ASI are just as messed up as, and arguably even worse than, the mess associated with their presumptive legal definitions. Anybody can readily get away with using the terminology of AI, AGI, and ASI in just about whatever wild or sneaky way they desire. No one seems to be policing this. Stretching the terminology to fit a desired purpose has become the norm.
Allow me to give you a strawman to see what this portends.
A brisk definition of AI that you might find in an everyday dictionary is this:
- “AI is a system that exhibits intelligent behavior.”
I’d ask you to take a reflective moment and mull over that definition of AI.
Does the definition seem airtight and ironclad?
Regrettably, no. It is full of ill-defined terms and an imperfect conglomeration of loopholes. For example, what does “intelligent behavior” mean? How are we to judge whether a system purportedly meeting this definition is exhibiting intelligent behavior? There is a wide variety of heatedly debated theoretical tests floating around, such as the famed Turing Test, see my analysis at the link here.
Currently, intelligent behavior is in the eye of the beholder.
How We Got From AI To AGI Definitions
Another big problem with the foregoing definition of AI is that it doesn’t stipulate the level or degree of intelligence when it comes to exhibiting intelligent behavior.
Here’s what I mean. Would you be willing to say that a dog or a cat exhibits intelligent behavior? I think we can generally and collegially agree that yes, it is fair to say that a dog or a cat does things sometimes that we would declare as being intelligent. Not all the time, but certainly some of the time.
Returning to the strawman definition of AI, many AI researchers became concerned that just about any system could be construed as exhibiting intelligent behavior. The reason this can happen is that the level of intelligence is not specified in the definition. Just like my bringing up dogs and cats, I think we could agree that on the whole humans are more intelligent than those beloved animals (a smarmy person might disagree, but I trust that you get the overarching drift that humans are by and large considered more intelligent than dogs and cats).
AI researchers pushing the limits of so-called conventional AI were rightfully upset that their advanced AI was being lumped in with the less impressive AI. Again, this is the case because the level of intelligence exhibited is up for grabs. A presumed AI system that controls a toaster seems to be on par with an AI system that runs an entire factory. They are both said to be AI.
Voila, a new piece of terminology evolved and gained traction, namely the moniker of artificial general intelligence and the associated acronym of AGI. The beauty was that AI researchers could say that their more advanced AI was closer to AGI and not mired in the less-stellar conventional AI.
Let’s use this as a strawman definition for AGI:
- “AGI or artificial general intelligence is an AI system that exhibits intelligent behavior in both a narrow and a general manner on par with that of humans, in all respects.”
You can see that the level of intelligence is now defined as being associated with that of humans.
Human-level intelligent behavior is the AGI demarcation. Furthermore, this includes narrow kinds of intelligence, such as the specialties of repairing a car engine, knowing how to operate a crane, being able to fly a plane, etc. It also includes general facets of intelligence that we might think of as common-sense aspects and the overall day-to-day intelligence of a functioning human.
The Next Move Was AGI To ASI Definitions
The tale isn’t over.
AI researchers are aiming to exceed human-level intelligence in AI. The problem with the definition of AGI is that though it specifies a lower bound, namely a minimum of achieving human-level intelligence, it lacks any specificity above that level. Imagine the consternation. If you went through the hard work of devising AI that could beat every human at chess, you would say it attained AGI in the niche of chess, but this somewhat underplays the accomplishment. You would be eager to say that your AGI chess playing not only met the required level but notably exceeded human capacities as well. It was said to be superhuman with respect to playing chess.
How are we to describe AI that goes beyond human-level intelligence?
Another strawman is needed:
- “ASI or artificial superintelligence is an AI or AGI system that exhibits intelligent behavior in both a narrow and a general manner that exceeds that of humans, in all respects.”
Unfortunately, this opens another can of worms. You might keenly note that this definition again has no leveling above that of exceeding human intelligence. In other words, suppose I make an ASI that is 2x the capability of humans, while someone else devises an ASI that is 5x. Both of those systems are going to be labeled as ASI. We probably will get into debates about needing a term that denotes a substantive leap over ASI; maybe we’ll call it super-duper AI (ASDI or ASUI). I’m not sure that’s catchy enough, so maybe it needs more work.
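To make the slipperiness a bit more concrete, here is a minimal toy sketch in Python of what it would take to operationalize those three strawman definitions. To be clear, the classify function, the HUMAN_LEVEL baseline, and the capability scores are purely hypothetical inventions for illustration; nobody has agreed on how such measurements would actually be made.

```python
# A toy sketch (not anyone's official definition) of the three strawman definitions.
# The scores and the human baseline are hypothetical placeholders -- the whole point
# of this discussion is that nobody agrees on how to measure or threshold them.

HUMAN_LEVEL = 1.0  # hypothetical baseline: 1.0 means "on par with humans"

def classify(narrow_score: float, general_score: float) -> str:
    """Label a system as AI, AGI, or ASI per the strawman definitions."""
    if narrow_score > HUMAN_LEVEL and general_score > HUMAN_LEVEL:
        return "ASI"  # exceeds humans in both narrow and general respects
    if narrow_score >= HUMAN_LEVEL and general_score >= HUMAN_LEVEL:
        return "AGI"  # on par with humans in both narrow and general respects
    return "AI"       # merely "exhibits intelligent behavior" -- almost anything qualifies

# Example: a chess engine that crushes every human yet has little general smarts.
print(classify(narrow_score=5.0, general_score=0.1))  # prints "AI" under this sketch
```

Even in this toy form, every dispute described here resurfaces as the question of who gets to set the baseline and who gets to assign the scores, and notice that a 2x ASI and a 5x ASI collapse into the very same label.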
The Moment Of Truth
I dragged you through those three definitions of AI, AGI, and ASI to arrive at a notable point.
Here it goes.
Because those slippery definitions are just strawmen for purposes of this discussion, and because we don’t have any ironclad societal definitions for AI, AGI, or ASI per se, the societal claims about AI, AGI, and ASI are all over the map. This raises a potpourri of very weighty AI ethics concerns, see my coverage at the link here.
You can declare that something is AI or AGI or ASI, and nearly get away with doing so scot-free.
Go along on a bumpy ride with me on this conundrum.
A person or company trying to grab attention announces they have devised AI that is AGI. Are they giving the straight scoop? Well, it could be that they are describing an apple while the rest of us are thinking of a tangerine. In that sense, let’s say that they genuinely believe they have achieved AGI, though, upon closer inspection, others discern it is not AGI, at least as it pertains to a definition of AGI such as the strawman listed herein.
That is slippery slope #1: Conflating AI with AGI.
There is a slippery slope #2: Conflating AI with ASI, or conflating AGI with ASI.
Someone announces that they have devised AI that is ASI. Wow! Amazing! Are they giving the straight scoop? Again, they might believe they have achieved ASI. Upon closer inspection, others discern it is not ASI, at least as it pertains to a definition of ASI such as the strawman listed herein.
Be wary and highly skeptical of the claims spewing forth about AI, AGI, and ASI. It is murky. It might be underhanded.
Those Plentiful Predictions Of AGI And ASI Arrival
Some final comments before I conclude this discussion.
A popular trend these days consists of pundits and soothsayers predicting when we will reach AGI and when we will reach ASI. A game of one-upmanship is occurring. Someone declares that AGI will be reached by the year 2035. This then becomes old news. To get headlines, the next declaration ups the ante and proclaims that AGI will be achieved by the year 2030.
And on it goes.
The same applies to ASI predictions. In fact, it gets extremely convoluted because someone might be thinking of a prediction for AGI, but it gets portrayed as a prediction for ASI. Heck, you might as well lump them all into one bucket and be done with it, some of the media seems to suggest. No need to have a dividing line between AGI and ASI. Don’t make things complicated.
Let’s just mush together AGI and ASI if that’s what sells.
There is a whole lot of sneaky shiftiness going on concerning AI, AGI, and ASI.
I’ve tried to bring you up to speed on the trickeries and mischiefs underway. Whenever you see any claims or pronouncements regarding AI, AGI, or ASI, I sincerely hope that your spidey sense will start tingling and you’ll give suitable scrutiny to the matters at hand.
Keep in mind the words of Abraham Lincoln that dutifully defined for us the nature of human behavior: “You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time.”