
Where AI and Marketing Intersect in 2024

January 3, 2024 | 1 minute read
AI is here to stay. How are we handling it?

Artificial intelligence (AI) made quite the splash across tech in 2023, and we only expect its ripples to widen. Because marketers are still getting a handle on it, from planning future uses and implementations to separating genuine risk from science fiction doomsday hyperbole, plenty of questions continue to surround the technology.

One of the more challenging aspects of AI is regulation. Since we're exploring relatively uncharted territory in AI technology, keeping tabs on proper use and accountability is at the forefront for leaders across the political and technical sectors.

We asked industry leaders in our partner network to offer insights on consumer trust and the regulations that will inevitably arise as AI plays a larger role in our lives. 

What can organizations do to enact policies and controls around AI in the absence of federal, state, or local regulations?

“Join communities that advocate for equality in AI. Create principles on how to use AI and make them clear, coherent, and easy to understand.” — Danny Bluestone, Founder and CEO | Cyber-Duck

“Companies should establish an AI Center of Excellence (CoE) specifically charged with creating policies, education, and standards for using AI. Then, create AI ethics documentation that outlines what your company will and will not do using AI. Finally, educate employees on proper AI usage.” — Fred Faulkner, VP of Strategic Marketing | Bounteous

“Create an internal committee and determine what they feel is acceptable for their organization.” — Joshua Hover, Director | Perficient

“Organizations can enact AI policies by setting data quality standards, ensuring ethical AI practices, maintaining compliance with relevant laws, and overseeing AI systems to prevent misuse and bias. Collaboration between technical experts, legal advisors, and leaders is crucial for aligning AI with organizational values and client expectations.” — Jill Baker, VP Practice Director | Genuine

“In the absence of specific regulations, organizations can enact their own policies by prioritizing data governance to ensure accuracy, compliance, and user privacy. This involves mitigating bias in AI, establishing transparent user consent mechanisms for the collection of personal data, and implementing robust data governance practices that align with data protection laws like HIPAA, GDPR, or CCPA. Maintaining data quality and integrity is key for responsible AI implementation.” — Frank Febbraro, CTO | Phase2

As AI advances, 68% of consumers say that companies need to be more trustworthy, according to ZDNet. How can organizations facilitate consumer trust?

“To build trust in an increasingly AI-enabled world, organizations must emphasize transparency and ethical AI use. Addressing the paradox of ‘artificial’ in AI involves demonstrating real, tangible benefits and ensuring AI decisions are explainable and fair.” — Daniel Knauf, Chief Technology Officer | Material+

“When it comes to customer trust and AI, there are three things that companies should keep in mind. When communicating with customers, make it clear when this communication is aided or facilitated by AI. It’s important to have a human continually validate AI output. Be transparent and state your policy in this regard. Tread lightly when it comes to AI bias. Be thoughtful in the ways you implement this technology.” — Andy Kucharski, President | Promet Source

“Organizations can build consumer trust by being transparent about how they use AI, prioritizing data privacy, actively seeking user consent, and engaging in clear, consistent communication about AI's role and limitations.” — Aileen Wong, SVP Experience Design | Genuine

“To foster consumer trust amid advancing AI, organizations should prioritize transparency in data usage and demonstrate clear intentions. They should aim to provide genuine assistance rather than fostering purely transactional relationships. Avoid haphazardly applying AI solutions without considering privacy concerns. Additionally, incorporating human validation in AI outputs can mitigate the risks of impersonal or potentially invasive interactions.” — Frank Febbraro, CTO | Phase2

“Honesty and transparency. Margot Bloomstein’s book Trustworthy is an excellent read on this topic.” — Greg Dunlap, Director of Strategy | Lullabot

To learn more about each of the partners highlighted here, please visit the Acquia partner portal.
