Why Companies Are Vastly Underprepared For The Risks Posed By AI

By admin | June 15, 2023

In the last year, artificial intelligence has arrived with a bang. Due to the emergence of generative tools like ChatGPT, businesses across every industry are realizing its immense potential and starting to put it to use.

We know that there are challenges – a threat to human jobs, the potential implications for cyber security and data theft, or perhaps even an existential threat to humanity as a whole. But we certainly don’t yet have a full understanding of all of the implications. In fact, a World Economic Forum report recently stated that organizations “may currently underappreciate AI-related risks,” with just four percent of leaders considering the risk level to be “significant.”

In May, Samsung became one of the latest companies to ban the use of ChatGPT after it discovered staff had been feeding it data from sensitive documents. ChatGPT’s operator, the AI research organization OpenAI, openly states that there is no guarantee of privacy when this happens, as it involves uploading data to the cloud, where it can be accessed by OpenAI’s employees and potentially others.

This is just one of what are likely to be many examples we will see in the coming months of businesses shutting the stable door after the horse has bolted. The speed with which this technology has arrived on the scene, combined with the huge excitement around its transformative potential and the well-documented power of the fear-of-missing-out (FOMO) effect, has left many organizations unprepared for what is to come.

What Are The Risks?

The first step towards managing the risks posed by generative AI is understanding what they are. For businesses, they can largely be segmented into four categories:

Accuracy

A common problem with generative AI at this stage is that we can’t always rely on its results to be accurate. Anyone who has used ChatGPT or similar tools for research or to answer complex questions will know that it can sometimes give incorrect information. This isn’t helped by the fact that AI is often opaque about its sources, making it difficult to check facts. Making mistakes or taking action based on inaccurate information could easily lead to operational or reputational damage to businesses.

Security threats

This can come in the form of both internal and external threats. Internally, unaware or improperly trained users could expose sensitive corporate information, or protected customer information, by feeding it into cloud-based generative platforms such as ChatGPT. Externally, generative AI enables cyber-criminals to engage in new and sophisticated forms of social engineering and phishing attacks. This has included the use of generative AI to create fake voice messages from business leaders, asking employees to share or expose sensitive information.
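
One practical mitigation for the internal side of this risk is a pre-submission filter that scrubs obviously sensitive patterns from text before it is sent to any external AI service. The following sketch is a minimal, hypothetical Python example; the specific patterns, the redact helper, and the idea of placing it in front of a generative-AI call are illustrative assumptions rather than a description of any particular product (a real deployment would rely on a proper data-loss-prevention tool with far broader coverage).

import re

# Hypothetical patterns for data that should never reach an external AI service.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_doc_id": re.compile(r"\bDOC-\d{6}\b"),  # assumed in-house ID format
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarise DOC-123456 and send the result to jane.doe@example.com"
    print(redact(prompt))
    # Summarise [REDACTED INTERNAL_DOC_ID] and send the result to [REDACTED EMAIL]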

Bias

AI systems are only as good as the data they are trained on, and there is a great deal of concern about the implications this has for creating biased outcomes. If data is collected in a biased way (for example, over- or under-representing particular segments of the population), the results can be skewed in ways that distort decision-making. An example is a tool designed to automatically scan job applicants’ resumes and filter out those who are unsuitable: if the tool doesn’t have enough data on applicants from a particular segment, it may be unable to assess applications from that segment accurately. Bias can also lead to unfavorable outcomes and reputational damage when AI is used to respond to customer inquiries and handle after-sales support.
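
One way a company could begin to quantify this kind of skew is to compare selection rates across applicant groups, in the spirit of the four-fifths rule used in employment-fairness auditing. The sketch below is a minimal, hypothetical Python example; the group labels, the toy outcomes, and the 0.8 threshold are illustrative assumptions, not a prescription for auditing any specific screening tool.

from collections import Counter

# Hypothetical screening outcomes: (applicant group, passed the automated screen?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
selected = Counter(group for group, passed in outcomes if passed)

# Selection rate per group, compared with the best-performing group.
rates = {group: selected[group] / applicants[group] for group in applicants}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "review for possible bias" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio to best {ratio:.2f} -> {flag}")

A check like this does not prove or disprove bias on its own, but a group falling well below the best-performing group’s selection rate is a signal that the underlying training data and the tool’s decisions deserve closer scrutiny.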

Culture and trust

The introduction of new tools, technologies, and processes can often cause anxiety among workers. With AI and all the discussion about replacing humans, this is understandably more intense than usual. Employees may fear that AI systems have been brought into their jobs to make them redundant. This can breed apprehension, mistrust, and disgruntlement, leaving workers feeling that their own human skills are less valuable, creating toxicity in the workplace, and increasing employee turnover. There could also be concerns that certain AI systems, such as those used for workplace monitoring, have been brought in to surveil workers or track their activity in an intrusive way.

How Prepared Are Organizations?

A survey carried out by the law firm Baker McKenzie concluded that many C-level leaders are over-confident in their assessments of organizational preparedness in relation to AI. In particular, it exposed concerns about the potential implications of biased data when used to make HR decisions.

It also proposed that it would be sensible for companies to consider appointing a Chief AI Officer (CAIO) with overall responsibility for assessing the impact and opportunities on the horizon. So far, companies have been slow to do this, with just 41% reporting that they have AI expertise at the board level. In my experience, it’s uncommon to find companies with specific policies in place around the use of generative AI tools. They often lack a framework to ensure that information generated by AI is accurate and trustworthy, that AI decision-making is not affected by bias, and that AI systems are transparent. Another significant gap in AI preparedness at many organizations is that the impact of disruption on culture, job satisfaction, and trust is often underestimated.

Improving Corporate Preparedness

There’s no quick fix for a societal shift as seismic and disruptive as AI, but any strategy should include developing a framework aimed at identifying and addressing the threats covered here. It should also keep an eye on the horizon for new threats that will emerge as the technology matures.

Certainly, a good start is to ensure AI expertise is present at the board level, for example through the appointment of a CAIO or similar. As well as mitigating threats, this person can ensure opportunities are identified and exploited. Their job should then include ensuring that awareness permeates the organization at all levels. Every employee should be aware of the risks regarding accuracy, bias, and security. On top of that, they should also understand how AI is likely to impact their own role and how it can augment their skills to make them more efficient and effective. Companies should make efforts to ensure there is an open, ongoing dialogue, including reassurance over AI’s impact on human jobs and education on the new opportunities opening up for AI-skilled humans.

If AI is used for information-gathering or decision-making, policies should be in place to assess the accuracy of its outputs and to identify areas of operation that could be affected by AI bias. Particularly for those using AI at scale, this could mean investing in rigorous testing and quality assurance systems.
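
In practice, such testing could start with a lightweight evaluation harness that scores the system’s answers against a small, human-reviewed reference set before a workflow goes live. The sketch below is a minimal, hypothetical Python example; the ask_model stub, the sample questions, and the 90% approval threshold are all assumptions for illustration, not a specific product or benchmark.

# Hypothetical gold set: questions whose answers a human reviewer has already verified.
GOLD_SET = [
    {"question": "What year was the company founded?", "expected": "1998"},
    {"question": "Which region generated the most revenue last quarter?", "expected": "EMEA"},
]

ACCURACY_THRESHOLD = 0.9  # assumed bar before the workflow is approved for use

def ask_model(question: str) -> str:
    """Stand-in for a call to whichever generative AI system is being evaluated."""
    canned = {
        "What year was the company founded?": "The company was founded in 1998.",
        "Which region generated the most revenue last quarter?": "EMEA led revenue last quarter.",
    }
    return canned.get(question, "I don't know.")

def evaluate(ask) -> float:
    """Return the fraction of gold-set questions answered correctly."""
    correct = sum(
        1 for item in GOLD_SET if item["expected"].lower() in ask(item["question"]).lower()
    )
    return correct / len(GOLD_SET)

if __name__ == "__main__":
    score = evaluate(ask_model)
    verdict = "approved" if score >= ACCURACY_THRESHOLD else "needs review"
    print(f"Accuracy on gold set: {score:.0%} -> {verdict}")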

Identifying and mitigating AI cyber threats will also increasingly become a part of organizational cyber-security strategies. This can be as simple as ensuring employees are aware of the threats of AI-enhanced phishing and social engineering attacks, right up to deploying AI-based cyber defense systems to protect against AI-augmented hacking attempts.

Last but by no means least, companies should make efforts to engage with regulators and government bodies in discussions around AI regulation and legislation. As the technology matures, industry bodies and organizations such as trade unions will be involved in drafting and implementing codes of practice, regulations, and standards. It’s essential that the organizations that are at the forefront of using this technology provide their input and expertise.

By failing to understand and react to these threats, any individual or organization runs the risk of falling foul of one of the greatest of all threats posed by AI – failing to exploit its opportunities and by doing so, being left behind by more forward-thinking competitors.

To stay on top of the latest on new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books Future Skills: The 20 Skills and Competencies Everyone Needs to Succeed in a Digital World and The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society.


