Ben Pippenger, co-founder and Chief Strategy Officer at Zylo.
Artificial intelligence (AI) was once a topic of conversation only among the most tech-savvy. But today, it’s gone mainstream. There’s buzz about AI everywhere, from boardrooms to backyard BBQs.
The major generative AI players, Google Bard and ChatGPT, have become household names. People use these services in both their professional and personal lives to do everything from writing emails and blog posts to generating images and even planning events and vacations.
Given the hype around generative AI, it’s not surprising that just about every software vendor is investing in an AI strategy and identifying ways AI can make its products easier to use and more powerful. The potential of AI is exciting. There’s no doubt it will bring big productivity wins and help drive value creation, among other benefits.
It’s certainly tempting to go all in on AI. But organizations must proceed with caution. While AI promises to deliver big benefits, it must be governed wisely.
Balancing The Benefits And Risks
When I stop to think about it, the hype around AI and the constant tug-of-war between benefits and risks reminds me a lot of the debate on Shadow IT.
Software was once primarily acquired by IT teams. But thanks to the rise of SaaS, it’s easy for anyone within an organization to buy the tools they need. As a result, many software tools are brought into the organization without IT’s knowledge or oversight—a phenomenon known as Shadow IT.
Shadow IT isn’t necessarily a bad thing. It can introduce innovation and new, improved ways for organizations to do things. In fact, companies like Atlassian have long embraced Shadow IT and this sense of “tool autonomy” that allows their employees to use their preferred tools to get work done.
But there’s also a dark side to Shadow IT. When technology enters an organization’s ecosystem without the knowledge of IT (and especially security teams), it introduces significant security risks and potential financial liabilities.
Most often, Shadow IT enters the organization via a credit card purchase, bypassing IT and security checks. That means IT teams are unaware of what data is going into the application.
In addition, when an employee purchases an application, they may be unaware that the same application (or a comparable one) is already in use elsewhere in the organization. That leads to redundancy: multiple tools performing the same function. It also introduces inefficiency, as organizations can typically negotiate better terms and pricing when they group licenses into a single contract.
The pros and cons of AI are similar to those of Shadow IT. On one hand, AI presents a huge opportunity to increase efficiency and accelerate value creation. But on the flip side, AI can introduce significant risk to the organization if the wrong information is fed to these models.
Striking The Right Balance: Freedom Within A Framework
In the world of IT, governance so often feels like a four-letter word. But it’s key to achieving the elusive balance of freedom and responsibility for both Shadow IT and generative AI tools.
Governance is rooted in visibility. You can’t fully capitalize on the opportunities of AI and mitigate its risks if you don’t know what’s going on in your organization. You need to know which AI tools are being used (and which existing tools have AI capabilities), who is using them, why they’re being used, what data is being shared and what sort of financial or business risk is being introduced as a result. Then, you can use this knowledge to either unlock innovation and potential productivity gains for your team or remediate risk. The process is remarkably similar to how companies manage Shadow IT today.
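To make that visibility concrete, here’s a minimal sketch of the kind of inventory record that could capture it. The field names, classifications and review rule are hypothetical assumptions for illustration only, not a reference to any particular platform or policy.

```python
# Illustrative sketch only. Field names, classifications and the review rule
# are hypothetical assumptions, not a real product's data model or policy.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                 # which tool is in use
    owner_team: str           # who is using it
    purpose: str              # why it's being used
    data_classification: str  # what data is shared: "public", "internal" or "confidential"
    has_ai_features: bool     # also covers existing SaaS apps that added AI capabilities

def needs_review(record: AIToolRecord) -> bool:
    """Flag any AI-capable tool handling non-public data for a governance review."""
    return record.has_ai_features and record.data_classification != "public"

inventory = [
    AIToolRecord("ChatGPT", "Marketing", "Drafting blog posts", "internal", True),
    AIToolRecord("Image generator", "Design", "Concept mockups", "public", True),
]

for record in inventory:
    if needs_review(record):
        print(f"Review: {record.name} ({record.owner_team}) handles {record.data_classification} data")
```

In practice, records like these would come from SaaS management or discovery tooling rather than being maintained by hand, but the questions they answer are the same ones listed above.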
Striking the right balance and establishing that “freedom within a framework” for AI use in the organization requires a few key components.
• Set controls and policies around the use of AI products. Think of these as “guardrails” for your employees (a simple illustration follows this list).
• Deliver guidance and training that enables your employees to be good digital corporate citizens. They need to know what the policies are and why you’re enacting them.
• Monitor your portfolio on an ongoing basis. Determine which existing applications use AI and how, and provide your employees with training and guidance to mitigate risk. Remember: This isn’t a one-and-done activity. It needs to be done continuously.
• When AI is in use, be sure to evaluate and understand which services are involved, where the data is stored and how it gets there.
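As a simple illustration of the guardrails mentioned above, here’s a toy policy check. The service names, data classifications and rules are assumptions made up for this sketch, not a recommended configuration.

```python
# Illustrative sketch only. The policy values below are hypothetical
# assumptions, not a recommended or real-world configuration.
APPROVED_SERVICES = {
    "public": {"ChatGPT", "Google Bard", "internal-llm"},
    "internal": {"internal-llm"},  # internal data stays on approved internal tooling
    "confidential": set(),         # no generative AI services approved for this data
}

def is_allowed(service: str, data_classification: str) -> bool:
    """Return True if policy permits sending this class of data to the service."""
    return service in APPROVED_SERVICES.get(data_classification, set())

# Example: pasting internal meeting notes into ChatGPT would be blocked here.
print(is_allowed("ChatGPT", "internal"))  # False under this sample policy
```

A real policy would live in your governance tooling and be revisited as new AI services appear, but the core idea is the same: map each class of data to the services approved to receive it.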
The Potential Of AI Is Huge
Is AI the new Shadow IT? I’d argue yes and no.
Full visibility is foundational to managing both AI and Shadow IT. It’s imperative to understand what SaaS applications are in use at your organization and which leverage AI so you can shed light on any risks that may be lurking in the shadows.
What’s more, the rise of both Shadow IT and AI requires organizations to establish policies (or guardrails) and deliver training that empowers employees to be good corporate citizens.
But where AI differs from Shadow IT is in the incredible potential this technology holds. The gains employees are already experiencing simply wouldn’t be possible without AI tools.
Some organizations attempt to reduce the flow of Shadow IT by establishing strict policies. I don’t believe AI can (or should) be stopped in the same fashion.
Soon, AI will be everywhere—not just in new apps that pop up across your business. Now is the time to build a comprehensive strategy that includes an assessment of apps you are buying and renewing, as well as ongoing discovery that helps ensure your company is safe and sound.