Color photo of young Black man sitting in front of a computer monitor at an office
Digital Security & Governance

6 Ways Organizations Can Guard Against AI Threats

February 1, 2024 1 minute read
Take advantage of artificial intelligence. Just don’t let it take advantage of you
Color photo of young Black man sitting in front of a computer monitor at an office

The potential that artificial intelligence (AI) holds for many sectors is profound and exciting. We should maintain that enthusiasm but remember that cybercrimes are one field that AI can supercharge. In fact, Britain’s National Cyber Security Centre warned last month that the rapid proliferation of AI tools is likely to lead to a rise in cyberattacks (particularly ransomware threats) as well as lower barriers for less sophisticated hackers to cause digital havoc.

We take that sober warning to heart at Acquia, where cybersecurity is paramount. To that end, we convened a panel of in-house experts — Robert Former, Chief Information Security Officer; Justin Cherry, Senior Principal Compliance Auditor, Director GRC; and Deanna Ballew, Senior Vice President, DXP Products — to discuss how organizations can best guard against AI threats and how to use AI responsibly to maximize its potential for good.

1. Form an internal AI committee

No one person in any organization can track the fast-moving advances in AI and the laws (or lack thereof) regulating the technology. That’s why our panelists all recommend the formation of an internal council to monitor the use of AI in products and services. Committee members can be from HR, legal, security, product, and other teams and provide timely governance, oversight, and guidance to the company at large.

At Acquia, for instance, our internal AI committee has a very active Slack channel with back-and-forth conversations about new regulations and lawsuits. “It’s going to take experts within your organization to come together and figure out how to move AI forward in a safe manner,” said Ballew.

2. Communicate in a transparent manner both internally and externally

“On top of the non-trivial technical challenges that we face with AI, we also have significant, almost exponential challenges with its social aspects,” said Former. “There’s a lot of fear, uncertainty, and doubt that’s heaped on the use of AI. Everybody’s waiting for Terminator to start.” To combat these concerns, he recommends being transparent with internal and external users about the steps your organization is taking to ensure that your use of AI is compliant and ethical.

Ballew underscored Former’s point: “Understand what you’re doing and how you’re using your data. How is [your AI] model being trained? Document it and be transparent about it. The more you know, the less risk you have.”

3. Adopt a secure development lifecycle

For Former, a secure development lifecycle for AI is all about testing, testing, testing. You want security in design, development, and deployment, which requires ongoing testing, both positive (does your AI model do what you expect?) and negative (in what way has it evolved beyond your expectations?).

At Acquia, for example, we test each business unit annually with our global resiliency manager and product teams. “We coordinate with these groups frequently to ensure that they and we, as a company, are prepared for anything that comes our way,” Cherry explained.

Be patient if that level of security takes some time to seed in your organization. “Don’t expect instant results,” said Former, “but always be vigilant about what the results will be.”

4. Create an AI registry

Develop a registry for AI models that are being used by your organization. Know what data is going in and what’s coming out. “During a security incident,” explained Cherry, “knowing how your systems interconnect with AI technologies will help isolate downstream impacts when system isolation is required.” Be aware of where AI integrations occur in your systems and fold them into your incident response plan.

The registry ties back into Ballew’s point about transparency. “Let everybody within your organization know how you’re leveraging AI, so it’s not scary,” she said. “It’s not a black box. This is how it’s being used and where.”

5. Help staff and coworkers become AI-literate

Organizations have responded to AI’s entry in the workplace in different ways. Some have thrown caution to the wind and are letting their employees have at it — using ChatGPT in the workplace and doing whatever they want with it, for instance — while others forego AI altogether and forbid access to certain sites on their web browsers.

Acquia has taken the middle road. “We see a lot of potential in AI, and you can only realize that potential if you turn your folks loose on it in a safe manner,” said Former.

At Acquia, that means drafting an acceptable use policy for AI and creating a private sandbox for employees whose work involves AI or who are curious about the technology. The sandbox allows staff to use AI without introducing further risk to either our internal intellectual property or our clients’ data, because it doesn’t stretch there.

“There’s a lot of curiosity [around AI],” Former said, “and [staff] are gonna play with it whether you want them to or not, so give them a safe place to play.”

6. Define parameters for AI purchases in your procurement process

Before incorporating AI tools into your workflows or tool sets, evaluate them against company standards you’ve established for AI purchases. If a third party is involved, for example, vet them, or if a tool is under consideration, test it. You may sacrifice a competitive advantage if you bypass the technology, Former said, but you don’t want to invite risk either. “Approach it logically and with as much transparency as you can muster both internally and externally,” he said.

And assess whether the tool or service will actually be useful. Because AI is still maturing, some of the uses to which it’s applied have yet to prove themselves in the day-to-day, said Ballew, and it may distract more than it helps.

Same goes for businesses that weave AI into their products. “You don’t want to jeopardize your brand reputation by implementing an AI feature too soon,” she said. Make sure that it’ll be useful and won’t lead to a security breach or cause your audiences to distrust you. We certainly followed those principles when incorporating AI into Acquia DAM, where its content-generation capabilities accelerate time to market, or machine learning (a subset of AI) into Acquia CDP.

Where to go from here

The promise of AI extends across industries, with consumers showing a mixture of curiosity, fear, and excitement about the technology. Organizations can realize its possibilities while simultaneously reassuring consumers and promoting their enthusiasm — but only if they act responsibly and take the steps needed to guard against the harmful outcomes that AI can produce. Watch the webinar to find out what more our panelists recommend for AI security and compliance!

Keep Reading

View More Resources