COMMENTARY: AI security can’t be an afterthought
As companies rush to adopt artificial intelligence, protecting systems from devasting jailbreak attacks must be a top priority, writes Alan Pelligini, CEO of Thales North America.
Artificial intelligence is revolutionizing the way we live and work. The launch of ChatGPT democratized the technology by taking advanced systems primarily used by scientists and engineers and creating a set of tools easily accessed by everyone from students to companies revamping their operations.
AI is also becoming an essential competitive advantage for companies looking to differentiate themselves from the crowd, streamline product development and up-level their customer service operations.
Before jumping into the AI pool, there is one question companies should be asking themselves: Are we prepared to fully protect our customers from the jailbreak attempts and other cyberattacks that will be aimed at our AI system?
Companies need to think beyond securing their own systems, which is equally important. By embedding AI in nearly every aspect of our connected lives, we’re creating sprawling, interconnected systems that change how companies do business.
As AI takes more control over tasks, the fallout from a cyberattack is more devastating in scale. Compounding the problem is the fact that some of the largest most powerful AI systems are particularly vulnerable. AI jailbreak attempts are the attacks of the future.
In plain terms, an AI jailbreak is a type of hacking where attackers trick an AI system into bypassing its rules and safeguards to perform unintended actions and activities. These attacks have the potential for fatal consequences depending on what the system controls. In an AI jailbreak, there are two primary purposes for an attack.
The first is the traditional attack that involves stealing data. In some cases, an AI system, like humans, can be tricked into sharing sensitive details such as medical information or business plans.
The difference with an AI system attack, as mentioned before, is the scale of the damage. With human error-based data breaches, hackers need to find the right person to gain access to the information they want or hope to steal. If a network is set up securely, there should be protocols in place to limit employee access and to help contain any breaches. However, the interconnectedness of an AI system could more easily act as an unintended launch pad for hackers into deeper parts of the network.
The second more nefarious jailbreak attack is one that targets the models themselves, causing systems to bypass safety protocols. For example, hacking an AI-powered car to misinterpret a “Stop” sign as a “70 mph” sign could be fatal to both passengers and others around the hacked vehicle. Tricking a medical device could lead to the system overdosing a patient on pain medication. The list is endless and will only grow as AI is used more in daily life.
Regardless of the risks, abandoning AI is no longer an option—AI is here to stay. As companies integrate AI into their operations, there are three factors to consider:
Right-size your AI needs
Companies should assess what they want and need their AI systems to do. Bigger is not always better. Large language models are very susceptible to attacks due to their use of vast datasets and statistical methods. Symbolic or hybrid models, however, are less data-intensive and operate on explicit rules, making jailbreaks more difficult.
Secure your AI
Just like with any other system, AI must be protected using a defense in depth approach. Digital watermarking, cryptography, and other security tools can fortify AI models while cybersecurity teams should stress-test AI models to find and fix vulnerabilities before hackers can exploit them.
Double-down on cybersecurity training
Security of AI systems should be incorporated into a company’s security posture. Cybersecurity measures must be strengthened and tailored for the AI era. Employee training is foundational for any effective cybersecurity program and its importance as a first line of defense cannot be understated, especially if an AI system is in use.
Ultimately, companies who are not securing their AI-driven products run the risk of a catastrophic reputational and financial backlash when hackers launch a successful AI jailbreak assault on their system.
Unlike a hack on today’s current systems, which can be quite damaging, the far-reaching consequences of a hack on an AI system will be of a magnitude greater than we have ever seen due to the scale at which society will be impacted. Companies need to start integrating security into their AI systems now while they are being developed to ensure the
Alan Pellegrini is CEO of Thales North America.