Generative AI offers exciting possibilities for advertisers seeking to enhance existing creative ideas or build new ones, promising to lighten the heavy lifting of campaign creation and open a pathway to personalized content at scale.
Already, some advertisers have started to experiment. In Forrester’s Pulse survey of U.S. B2C marketers, conducted in Q1 2023, 41% said they were experimenting with ChatGPT and 19% said they had already used it in their marketing efforts.
Kraft Heinz, for instance, used DALL-E-produced images of ketchup to demonstrate the pervasiveness of its brand for its “It has to be Heinz” campaign.
Mint Mobile used OpenAI’s ChatGPT to write the copy of a recent campaign fronted by Ryan Reynolds, while Coca-Cola produced its “Masterpiece” commercial using AI imaging.
But while there is much potential for AI in creativity, advertisers must be aware of complications and unintended consequences when using generative AI, Forrester warns in a recent report.
The report, released May 30, outlines four tips for adopting responsible AI practices that enable advertisers to optimize creative executions while mitigating harm.
1. Rein in over-exuberant creatives experimenting with generative AI
Since generative AI is still new, brands need to be aware of third-party risks such as threats to brand safety, regulatory action and erosion of customer trust.
As a result, Forrester suggests advertisers “set clear boundaries for external use in execution versus internal use for campaign development.” To ensure there is a proper track record of how AI is used, the research and advisory company recommends advertisers log prompts and queries input into any chatbots.
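The logging practice Forrester describes can be as simple as appending an audit record for every prompt an employee submits. The sketch below is a hypothetical illustration, not a tool the report names; the function and file names are assumptions.

```python
import json
from datetime import datetime, timezone

def log_prompt(log_path, user, tool, prompt):
    """Append one audit record per prompt so AI usage can be reviewed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
    }
    # JSON Lines format: one record per line, easy to append and to audit.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a copywriting prompt before it is sent to a chatbot.
entry = log_prompt("ai_prompt_log.jsonl", "j.doe", "ChatGPT",
                   "Draft three taglines for a summer beverage campaign")
```

Appending to a shared log file in this way gives teams the "proper track record" the report calls for without changing how creatives actually work with the tools.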
Forrester also suggests companies identify common use cases across content development and execution to share with teams and clients. In other words, develop a standard of usage for employees to maintain consistency. The report specifically cited R/GA’s development of a generative AI ethics guideline for clients and employees as an example.
2. Consider liability, copyright and IP infringement concerns
Forrester warns that advertisers should consider the legal ramifications of “both the outputs and inputs of AI-enhanced marketing,” prior to investing in the technology.
Citing a U.S. Copyright Office ruling that affirms that copyright protection applies only to human-authored works, the company notes that marketing work cannot be granted copyright protection without some level of (still undefined) human involvement. That means that “absent a court ruling or congressional action, brands and creators must assume liability for AI-generated marketing,” according to the report, unless the brand develops proprietary solutions or relies on commercially ready tools like Adobe Firefly.
3. Avoid open web AI models to protect proprietary data
While issues surround the ownership of data and IP created by AI, other challenges relate to the usage of data, especially as it relates to training the AI. That is why Forrester recommends companies protect proprietary data by avoiding open web AI models.
“Build policies and best practices to protect proprietary and sensitive data. Do not enter sensitive data or company IP into public versions of generative AI,” the company advises in the report, citing BBDO’s warning against using generative AI in campaigns at all until the agency implements the tech in a way “that avoids unresolved issues.”
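One lightweight way to enforce a "do not enter sensitive data" policy is a gate that screens prompts before they reach a public tool. The sketch below is a hypothetical, deliberately naive illustration of the idea, not anything Forrester or BBDO describes; real deployments would rely on proper data-loss-prevention tooling.

```python
import re

# Naive patterns for data that should never leave the company: a sketch,
# not a substitute for a real data-loss-prevention product.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. SSN-style numbers
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # emails
    re.compile(r"(?i)\bconfidential\b"),  # internal classification labels
]

def contains_sensitive_data(text):
    """Return True if the text matches any known sensitive pattern."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

def safe_to_submit(prompt):
    """Gate prompts before they reach a public generative AI tool."""
    return not contains_sensitive_data(prompt)
```

A prompt like "Write a tagline for our new soda" passes the gate, while one containing an email address or a "confidential" label is blocked for review.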
4. Tap human creativity to prevent inaccuracy
As Forrester asserts in its report, marketing executions created by generative AI are susceptible to inaccurate responses, as made clear in public demos of AI tools like Google’s Bard.
That is why Forrester recommends that humans always review and revise assets produced by large language models.
“Human creativity replenishes or refreshes AI-powered creativity that would otherwise inaccurately regurgitate facts, figures and ideas,” the report says.
If used correctly, however, AI can be a helpful tool for inspiring and elevating creative output, Forrester maintains.