Welcome to The Week in Generative AI, a weekly roundup for marketers from Quad Insights that sums up the latest news surrounding this rapidly evolving technology.

AI or DIY? Google Gemini ad controversy raises important questions for content creation 

Google’s recent “Dear Sydney” Olympics ad for its Gemini AI chatbot provides a stark reminder of the challenges marketers face in touting the emerging capabilities of artificial intelligence. The ad features a father asking Gemini to help his daughter write a fan letter to send to Olympic hurdler Sydney McLaughlin-Levrone. After days of criticism, Google removed the ad from its TV broadcast rotation last Friday, though it remains online. (McLaughlin-Levrone set a world record in winning the 400-meter hurdles at the Paris Games on Thursday.)

“What are the ethical implications of using AI to simulate human emotions and relationships?” Shelly Palmer, Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, wrote in a blog post published before Google pulled the spot. “Suffice it to say … I don’t want to live in Generica, where every human experience is continually devolving into cookie cutter templates that ‘work.’”

The conversation surrounding the controversy has continued into this week, with industry observers raising questions about how to strike the right balance between AI and human input in content creation. It’s important to recognize that AI tools are here to stay, notes Omar H. Fares, a lecturer in the Ted Rogers School of Retail Management at Toronto Metropolitan University. “Our collective line of inquiry needs to shift towards exploring a state of interdependence, where society can maximize the benefits of these tools while maintaining human autonomy and creativity,” he wrote in The Conversation. “Achieving this balance is challenging and begins with education that emphasizes foundational human capabilities such as writing, reading and critical thinking.”

See also: “The antithesis of the Olympics: Using AI to write a fan letter” (NPR)

S&P Global initiates AI training for all employees  

S&P Global just announced that it plans to train all 35,000 of its employees in the ways of generative AI, with the goal of fueling innovative customer solutions. The company will work with consulting giant Accenture to provide a customized learning program rolling out in August. In addition to training, the S&P Global and Accenture partnership will involve joint efforts to build standards for AI development and benchmarking across the financial services industry. “AI is for everyone,” Bhavesh Dayalji, S&P Global’s Chief AI Officer, said in a statement.

ICYMI: “How AI will reshape the future of video for employee training” (Vimeo)

Quote of the week 

“I think generative AI is going to transform the way we innovate in CPG…. Where do the advantages lie when you’re a company that operates like us across tens of categories? Well, it’s in consumer understanding and consumer information. Generative AI is amazing at helping make sense of unstructured data. We want to use it to surface fresh insights and move from insights to winning concepts much faster.”

—Fabrice Beaulieu, Chief Marketing, Sustainability and Corporate Affairs Officer at Reckitt, speaking with Ad Age’s Jack Neff on the Aug. 7 edition of the Marketer’s Brief Podcast. Reckitt owns Calgon, Lysol, Mucinex, Woolite and other leading consumer brands.

54,000 

That’s the number of generative AI–related patent applications filed globally between 2013 and 2023, according to the World Intellectual Property Organization, “with more than 25% of them emerging in the last year alone.”

When AI goes mad: Models suffer without fresh data, study finds  

It turns out that generative AI may have a use for humans after all. A new study from researchers at Rice and Stanford universities [PDF] suggests that the quality of responses from AI models goes into a tailspin unless they are trained on enough “fresh data” from the real world. The study looked at the outputs of three AI image-generation tools when trained on different mixes of human content and AI-generated data. The researchers conclude that “synthetic” data from other AI tools causes AI models to go, well, “MAD,” afflicted by what they call Model Autophagy Disorder, which they describe as roughly analogous to mad cow disease. “Some ramifications are clear: Without enough fresh real data, future generative models are doomed to MADness,” Rice computer engineer Richard Baraniuk told Science Alert.

See also: “10 reasons why AI may be overrated” (NPR)

Deloitte report: Gen AI poised to transform retail investing 

The traditional ways of getting investment advice — from friends and family, social media, financial planners, websites or the media — are about to get a massive AI-driven makeover, according to the Deloitte Center for Financial Services. Deloitte analysts predict that AI will become the main engine for guiding individuals on where to invest their money, with gen AI applications playing a role in 78% of retail investing decisions by 2027, up from less than 10% this year. (Individuals typically consult more than one source.)

See also: “Trending: Billion-Dollar Investments Drive AI Surge in Healthcare” (PYMNTS.com)

Further reading