Startup Dreamers
Innovation

What Is AGI (Artificial General Intelligence) And Why You Should Care

By admin · November 21, 2023 · 4 Mins Read

AGI (or Artificial General Intelligence) is something everyone should know and think about. This was true even before the recent OpenAI drama brought the issue into the limelight, with rumors speculating that the turmoil was caused by disagreements over safety concerns regarding an AGI breakthrough. Whether or not that is true, and we may never know, AGI remains a serious matter. In this article, we discuss what AGI is, or could be, what it means for all of us, and what, if anything, the average person can do about it.

What is Artificial General Intelligence?

As expected for such a complex and impactful topic – definitions vary:

  • Wikipedia defines AGI as a machine agent that can accomplish any task that a human can perform. This includes reasoning, planning, executing, communicating, etc.
  • ChatGPT defines AGI as “highly autonomous systems that have the ability to outperform humans at nearly any economically valuable work. AGI is often contrasted with narrow or specialized AI, which is designed to perform specific tasks or solve particular problems but lacks the broad cognitive abilities associated with human intelligence. The key characteristic of AGI is its capacity for generalization and adaptation across a wide range of tasks and domains. (Contd..) “

Given the recent OpenAI news, it is particularly timely that OpenAI's Chief Scientist, Ilya Sutskever, presented his own perspective on AGI just a few weeks ago at TED AI. You can find his full presentation here, but some takeaways:

  • He described a key tenet of AGI as being potentially smarter than humans at anything and everything, with all of human knowledge to back it up.
  • He also described AGI as having the ability to teach itself, thereby creating new, potentially even smarter AGIs.

We can already see distinctions even within these definitions. The first and third (Wikipedia's and Sutskever's) are far broader, covering any human endeavor, while the second (ChatGPT's) is more economically targeted. Both come with benefits and risks: the risks of the broader group are existential, while the risks of the second lean more toward massive workplace displacement and other economic impacts.

Will AGI happen in our lifetimes?

Hard to say. Experts differ on whether AGI will ever happen or whether it is merely a few years away. Much of this discrepancy also stems from the lack of a broadly agreed-upon, precise definition, as the examples above show.

Should we be worried?

Yes, I believe so. If nothing else, the current drama at OpenAI shows how little we know about a technology development that is so fundamental to humanity's future, and how unstructured our global conversation on the topic is. Fundamental questions remain: Who will decide whether AGI has been reached? Would the rest of us even know that it has happened or is imminent? What measures will be in place to manage it? How will countries around the world collaborate, or fight, over it?

Is this Skynet?

I don't think this is the biggest cause for worry. Certain parts of the AGI definition (particularly the idea of AGIs creating future AGIs) do head in this direction, and movies like Terminator paint one view of the future. But history has shown that harm from technology usually comes from intentional or accidental human misuse of it. AGI may eventually reach some form of consciousness independent of humans, but it seems far more likely that human-directed AI-powered weapons, misinformation, job displacement, environmental disruption, and the like will threaten our well-being before that.

What can I do?

I believe the only thing each of us can do is to stay informed and AI-literate, and to exercise our rights, opinions, and best judgement. The technology is transformative. What is not clear is who will decide how it transforms us.

It is also worth noting that AGI is unlikely to be a binary event, absent one day and present the next. ChatGPT appeared to many people as if it came from nowhere, but it did not: it was preceded over several years by GPT-2 and GPT-3, both very powerful but harder to use and far less well known. While ChatGPT (GPT-3.5 and beyond) represented a major advance, the trend was already in place. Similarly, we will see AGI coming; we already do. The question is: what will we do about it before it arrives? That decision should be made by everyone. No matter what happens at OpenAI, the AGI debate and its issues are here to stay, and we will need to deal with them, ideally sooner rather than later.

Read the full article here

© 2026 Startup Dreamers. All Rights Reserved.