Gaurav Tewari, founder and Managing Partner of Omega Venture Partners.
Rapid recent advances in artificial intelligence, evident from systems like ChatGPT and Google’s Bard, are astounding for some and concerning for others. However, the notion that AI poses an existential threat to humanity reflects misguided anthropomorphism and ignores facts.
Yes, generative AI is becoming omnipresent, and its capabilities are remarkably good. But digital intelligence is nothing like biological intelligence. In my view, there is no evidence that general-purpose AI on par with human intelligence is imminent or inevitable. AI systems lack generalized reasoning and common sense. For the foreseeable future, AI will remain narrow in its capabilities rather than broadly intelligent.
We must view AI’s progress in full context. This is not the first time a revolutionary technology has changed our lives. We have successfully navigated seismic technology shifts—the advent of electricity, the PC, Internet and mobile—and in retrospect, these technologies have greatly improved our lives. We should not ignore the immense potential for AI to empower humanity if guided ethically.
AI’s Benefits Far Outweigh Risks
As a longtime investor focused on AI, I have seen firsthand the technology's potential to transform society for the better. The choices we make today will determine whether AI's future is utopian or dystopian. But the notion that AI represents an existential threat reflects a profound misunderstanding of the technology's capabilities and trajectory.
Decades ago, visionaries like Alan Turing imagined the potential for thinking machines and proposed measures such as the Turing Test, which asks whether a machine can fool a person into believing it is human. Today, alarmists anthropomorphize AI and reason that such advanced intelligence would prioritize selfish goals and perceive humanity as a lower life form to be ignored or exploited in service of its own ends. Science fiction reinforces this fallacy through dystopian narratives, and viewers often mistake entertaining fantasy for fact. Not surprisingly, the percentage of people who perceive technology as a threat has increased from 34% to 47% over the past four years.
By dispelling misconceptions and grounding discussions in facts, we can better realize AI’s upside. When used appropriately, AI has the potential to mitigate the world’s worst inequities. In the United States, AI can enhance educational opportunities for the underprivileged. Globally, AI can democratize specialized skills to elevate human productivity and well-being.
We Need To Put Alarmist Fears In Context
The notion of computers indistinguishable from humans is the idea behind Artificial General Intelligence (AGI): artificial intelligence so advanced that it can complete a broad array of tasks as well as or better than humans. Traditionally, the definition of AGI supposes computers with common sense, selfish motivation and a conscious self-identity. In theory, AGI could operate and make decisions autonomously, navigate complicated environments and perhaps unlock a new kind of “superintelligence.”
Building on this idea, a group of people, including employees of Google, OpenAI and Microsoft, recently issued a statement warning against the threat they believe AI poses to humankind. A Microsoft research team then released a paper claiming that GPT-4 was demonstrating “early signs” of AGI, asserting that the model “can solve novel and difficult tasks” spanning a variety of disciplines.
Such concerns about AI are natural, given its novelty and many unknowns. But they don't hold up when subjected to a fact-driven, evidence-based analysis. Other researchers have correctly pointed out that the AI frameworks in development today are often easily confused and lack a robust conceptual understanding of the world. Conjectures that AI could spontaneously become conscious and turn against people make for entertaining media stories. But I believe there is simply no evidence that AI can attain human-like consciousness and agency.
Moreover, anthropomorphizing AI ignores the facts and fuels irrational fears. Intelligence does not necessitate a drive to dominate: even as AI becomes more capable, there is no reason to assume it will malevolently seek to rule over humanity. Intelligence is orthogonal to ambition, and even among people it does not imply hostility; Einstein wasn't bent on subjugating others.
Generative AI and large language models (LLMs) will undoubtedly continue to improve and will interact with people at increasingly sophisticated levels of (perceived) understanding. But this is cause for celebration, not catastrophizing. As AI models get better, they can be harnessed to increase productivity, automate routine and mundane tasks and augment human potential. Simply put, it is more accurate to regard AI as Iron Man than as the Terminator.
Our Choices Will Shape Our Future
AI’s risks and rewards, both today and in the future, will ultimately reflect the choices we make as innovators, entrepreneurs, investors, developers, creators, users and regulators. As with other technological innovations, we will need to exercise sound oversight and good judgment to chart the right course.
When governed ethically, AI represents an amplifier, not an annihilator, of human potential. In public policy, we need laws and guidelines to encourage lawful, ethical uses of AI and should penalize those who use the technology for malicious activities. Companies that develop AI systems need to implement mechanisms that drive accountability and transparency and bolster public trust. Within the technology industry, we need to frame standards that mitigate the potential for bias, privacy breaches and misinformation in the development and deployment of AI systems. And as a society, we need to thoughtfully consider how AI’s benefits can be equitably harnessed by diverse segments of our communities.
AI will not magically lead to world peace, cure cancer or eliminate all inequality. Nor will it lead to nuclear war, apocalypse or mass extinction. There are both dystopian and utopian narratives about our future with AI, and the reality will ultimately be shaped by the choices we make. To make those choices well, we must first dispel unfounded fearmongering. Progress will come not from fearing technology but from proactively shaping its development to benefit all humankind.