Unsanctioned generative AI use is only growing inside organizations. Here’s what IT can do to support its workforce while keeping businesses safe.
You’re no doubt familiar with the term Shadow IT (systems or solutions built outside of official IT remits), but you may be less familiar with the concept of Shadow AI. Yet with the rapid emergence of generative AI, now available no further away than any employee’s browser tab, it’s a very real trend, and one that’s only growing.
Shadow AI is a term describing unsanctioned or ad-hoc generative AI use within an organization, outside of IT governance. According to Salesforce research, about 49% of people have used generative AI, and over one-third use it daily.1 Inside the workplace, this can mean employees accessing generative AI tools like ChatGPT to perform tasks such as drafting copy, creating images, or even writing code. For IT, this can create a governance nightmare: deciding which AI usage to permit or restrict in order to support the workforce while keeping the business safe.
If that weren’t enough for IT, generative AI use is actually accelerating. In that same Salesforce survey, 52% of respondents reported that their use of generative AI is increasing compared to when they first started. The threat of shadow AI is already here for IT, and it’s growing.
What IT can do about shadow AI
What can IT leaders do to get their arms around the situation? Here are three steps IT can take today to start addressing the challenges of shadow AI use inside organizations.
1. Decide now when and how employees can access generative AI tools
Setting policies now for how employees can access public generative AI tools is a critical first step in your organization’s generative AI strategy, because many of your employees are already using these tools. How you manage access will depend on many factors, but approaches can range from written guidelines to setting up firewalls or using VPNs to control access. Your response will be unique to your organization’s needs, but it must ultimately weigh the benefits of these tools for employee productivity and innovation against the potential risks of security breaches and data leakage.
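To make this concrete, here is a minimal sketch of how an allowlist-style access policy for generative AI tools might be expressed, assuming a simple mapping from approved tool domains to the data classifications permitted for each. The domain names, data categories, and the is_request_allowed helper are illustrative assumptions, not a recommended or complete policy; real enforcement would live in your proxy, firewall, or secure web gateway configuration.

```python
# Sketch of an allowlist-style access policy for generative AI tools.
# All domains and data categories below are hypothetical examples.

from urllib.parse import urlparse

# Tools the organization has sanctioned, mapped to permitted data categories.
APPROVED_AI_TOOLS = {
    "chat.openai.com": {"public", "internal"},   # hypothetical policy entry
    "copilot.example-vendor.com": {"public"},    # hypothetical policy entry
}

def is_request_allowed(url: str, data_classification: str) -> bool:
    """Return True if the destination is an approved AI tool and the
    data classification is permitted for that tool."""
    host = urlparse(url).hostname or ""
    allowed_classes = APPROVED_AI_TOOLS.get(host)
    if allowed_classes is None:
        return False  # unsanctioned tool: block or flag for review
    return data_classification in allowed_classes

if __name__ == "__main__":
    print(is_request_allowed("https://chat.openai.com/", "internal"))       # True
    print(is_request_allowed("https://chat.openai.com/", "restricted"))     # False
    print(is_request_allowed("https://unknown-ai.example.com/", "public"))  # False
```

However you implement it, the key design choice is the same: an explicit list of sanctioned tools paired with rules about what data may flow to each, rather than an open-ended ban or a free-for-all.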
2. Clearly communicate generative AI policies to employees
Once your internal policies are established, communicate widely to internal users what generative AI tools they are free to use, for what purposes, and with what kinds of data. You should be as specific as possible about what is safe use of generative AI and what is not. This means thinking through generative AI use cases for roles across your organization and providing guidance for how your policies impact those use cases.
Furthermore, be prepared to communicate early and often. For example, while 31% of executives say their company has established policies or guidance around how to use AI, only 18% of individual contributors say the same, according to research from Asana.2 This suggests two challenges: only about a third of organizations have established and communicated policies at all, and even when policies are communicated, employees don’t always get the message. Be prepared to communicate your policies more than you might think necessary to ensure employees understand and follow them.
3. Lean on education and training to reinforce responsible use
It’s one thing to tell people what the policies and guidelines for generative AI tools are, and another thing entirely to show them how to use those tools responsibly. Hands-on training and learning resources can go a long way toward helping users understand how to get the most from generative AI without putting company data at undue risk. These can include anything from workshops to webinars to self-paced e-learning modules. The idea is to make resources plentiful and easily accessible so users understand what safe and responsible use looks like.
Unmasking shadow AI: How IT can get started
Determining how to manage access, communicating and enforcing policies, and leaning on education and training are steps that organizations can take right now to address shadow AI. But as generative AI use becomes increasingly pervasive, organizations will also need to consider a broader strategy, which may include building or customizing tools to better meet business needs. And the data shows the effort is worth it: 76% of IT leaders believe GenAI will be significant if not transformative for their organizations, and 65% believe they will see meaningful results within the next year, according to a recent Dell survey.3
No matter where you are in your GenAI journey, the steps above can help. And if you need more guidance, enlisting the support of partners can get you there faster. At Dell, we work with organizations every day to help them identify use cases, put solutions in place, increase adoption, and even train internal users to accelerate innovation. To learn more, visit dell.com/ai.