Manny Rivelo is the CEO of Forcepoint.
In the vast landscape of artificial intelligence (AI), a new player has emerged—seamlessly collaborating and propelling your enterprise forward with remarkable finesse. Generative AI tools like ChatGPT and Bard have captured the imagination of the world, effortlessly creating everything from vegan ice cream recipes to symphonies in an instant. However, cybersecurity pros and business leaders need to keep in mind that every collaboration with ChatGPT comes with its own set of considerations.
One of the bright engineers at my company managed to create a zero-day malware attack without writing a single line of code simply by asking the right questions to ChatGPT. It took him a mere two hours. The risks these generative AI tools pose to your organization are real, but so are the opportunities for positive outcomes such as new revenue generation and productivity gains.
The ability to harness the power of AI to create new medicines, blaze trails to revenue and growth, and revolutionize customer service means that sitting out the AI plunge puts your business's future competitive advantage at risk. Add to the decision calculus that denying employees access to tools they deem essential to their work will only lead them down uncharted—possibly dangerous—paths.
Generative AI is the ultimate shadow IT. How do you move forward to embrace this emerging technology and let employees collaborate with ChatGPT while ensuring your intellectual property remains protected?
The Risk Of Intellectual Property Exposure
ChatGPT, Bard and other generative AI chatbots feed on data, growing stronger and more capable with each byte. Their ravenous appetite for knowledge fuels their ability to create fresh content rapidly. They use algorithms, generative models and deep learning techniques to spin gold from the straw of their training data. This can lead to immense creativity and productivity as new things are produced in a fraction of the time it takes a human.
The danger, however, is not that these AI tools will steal our jobs but that they will expose our most precious information assets to the world. Generative AI tools lurk in the shadows of your enterprise, as potentially disruptive to corporate security as Dropbox and the iPhone were before IT teams finally made them standard corporate issue. Today, generative AI tools are accessible to your employees through websites and apps, and it's in this mingling of data and the internet that the risk lies.
Your organization’s data is already scattered to the winds, residing in personal and corporate devices, SaaS apps, and the cloud. With a single careless or malicious move, your confidential information could be spilled into the chat—leaving your trade secrets vulnerable to hackers.
The best defense is to bring these AI tools into the light where you can control access and harness their power through a zero-trust approach.
Embracing “Data-First” Zero-Trust Security
To mitigate the potential landmines, companies must adopt a zero-trust approach to protect their data and enable access to AI applications. Data protection becomes paramount as the potential for information breaches continues to grow. You can defend your organization against risks related to generative AI by implementing the following strategies.
1. Limit and control access. Establish a corporate security policy for AI tool usage, ensuring that all employees follow the same guidelines consistently. Enforce this policy with solutions like secure web gateways (SWGs) and cloud access security brokers (CASBs), which regulate access to websites and SaaS versions of AI tools. When necessary, redirect users to approved, sanctioned applications instead of unsanctioned AI websites.
2. Consolidate and simplify management. Unify security policies and streamline management across multiple channels and devices to prevent data breaches. This way, your organization can manage different generative AI tools with a single policy, whether employees access them on company laptops or personal phones. Enable your hybrid workforce to safely use AI while easily monitoring and controlling permissions.
3. Stop data and IP loss. Implement data security measures that enable safe AI use while protecting users and applications, preventing unauthorized copying or pasting of confidential information (PII, PHI, IP, etc.) from sources like business email and documents. Integrate data loss prevention policies across the web and public clouds to further fortify your security against the growing list of newly available AI chatbots.
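To make the three controls above concrete, here is a minimal, illustrative Python sketch of the decision an SWG/CASB with DLP policies makes before letting a prompt reach a generative AI service. Everything in it is hypothetical: the sanctioned domain name is a placeholder, and the simple regexes stand in for a real DLP engine's far more sophisticated detectors. Real products enforce these policies centrally at the network edge, not in application code.

```python
import re

# Hypothetical allowlist standing in for an SWG/CASB sanctioned-apps policy.
SANCTIONED_AI_DOMAINS = {"chat.company-approved-ai.example"}

# Toy patterns standing in for a DLP engine's PII/IP detectors.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_outbound_prompt(domain: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a prompt headed to an AI chatbot."""
    # Control 1: only sanctioned AI tools are reachable.
    if domain not in SANCTIONED_AI_DOMAINS:
        return False, f"blocked: {domain} is not a sanctioned AI tool"
    # Control 3: scan the prompt for confidential data before it leaves.
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            return False, f"blocked: prompt appears to contain {label}"
    return True, "allowed"
```

Because the check is a single function applied to every outbound prompt regardless of device or channel, it also illustrates control 2: one policy, consistently enforced, rather than per-tool rules.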
Generative AI is a double-edged sword; it’s a partnership that can lead to big prizes or pitfalls for your business. Restricting access to ChatGPT only increases shadow IT risk and stifles productivity. Conversely, leaving access unchecked opens Pandora’s box to a world of hidden dangers. The wiser path is to control AI usage and empower your workforce securely.
By pragmatically applying zero-trust principles, your company can harness the power of generative AI to drive efficiency and productivity. Envision a world where ChatGPT takes its place on your list of sanctioned technologies, just like the devices, websites or SaaS apps you use today. With a thoughtful and measured approach to generative AI, organizations don't have to awkwardly sit on the sidelines. They can confidently embrace ChatGPT to reap the rewards while minimizing the risks.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.