Welcome to The Week in Generative AI, a weekly column for marketers from Quad Insights that quickly sums up emerging developments surrounding tools such as ChatGPT and Bard, while also offering the latest details on how generative AI tools are being incorporated into advertising products and workflows.
Generative AI and the creator economy
We’re starting to see more takes on how ChatGPT will impact creators and the creative industry. Writing in Mashable, Christianna Silva profiles a group of creators who collaborated using ChatGPT to jumpstart their creative process, while also exploring the possibilities of an AI-driven creator economy. “The simple answer to how generative AI affects the creator economy, or any job, really, is that it saves you time,” Silva says. “None of these tools are perfect, yet. The creators who refuse to use these tools could risk being overtaken by creators who do use AI tools.”
And Bernard Marr in Forbes looks at the future of generative AI beyond ChatGPT, exploring how task automation could extend creative work past text prompts and image generation. “Tools are emerging that will allow designers to simply enter the details of the materials that will be used and the properties that the finished product must have,” says Marr. “The algorithms will create step-by-step instructions for engineering the finished item.” He cites the example of Airbus engineers using this type of tool to redesign interior panels on a passenger jet.
Additional takes:
· “Generative AI comes for advertising” (Axios)
· “A new AI trend is ‘expanding’ classic art and the internet is not happy” (Mashable)
· “Ralph Lauren Joins Ranks Testing Generative AI” (Consumer Goods Technology)
· “Generative AI—How to create diverse, inclusive and legal content” (Ad Age)
OpenAI fights AI hallucinations
OpenAI is working on a new way to stop AI from making things up. Over at CNBC, Hayden Field reports on the company’s research into “process supervision,” a training approach that rewards a model for each correct step in its chain of reasoning rather than only for a correct final answer. Because the model is incentivized to reason soundly at every step, OpenAI hopes the technique will reduce the number of times AI systems output confident fabrications, the so-called “hallucinations.” Crucially, according to Field, “the research comes at a time when misinformation stemming from AI systems is more hotly debated than ever, amid the generative AI boom and lead-up to the 2024 U.S. presidential election.”
Speaking of heated debates, a group of AI researchers and executives, including OpenAI co-founder Sam Altman, signed an open letter warning that artificial intelligence could pose an existential threat to humanity. The Center for AI Safety posted the letter, titled “Statement on AI Risk,” on Tuesday. The statement itself is a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Kevin Roose in The New York Times writes that “the statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.”
The letter was met with mixed reactions. Some experts have praised the authors for raising awareness of the potential risks of AI, while others have criticized the letter for being alarmist.
Additional takes:
· “How the media is covering ChatGPT” (Columbia Journalism Review)
· “OpenAI execs warn of ‘risk of extinction’ from artificial intelligence in new open letter” (Ars Technica)
· “Generative AI tools lead to rising deepfake fraud” (Fox Business)
· “8 big questions about AI” (New York Times Opinion)
AI Snaps come to Snapchat+ and TikTok tests a chatbot
Snapchat has delivered AI Snaps to Snapchat+ users. The new generative AI feature allows Snapchat+ subscribers to send Snaps of what they’re up to and then receive a generated Snap back from the in-app chatbot, My AI. According to Sarah Perez at TechCrunch, the new feature “doesn’t seem to have much value beyond entertainment purposes,” and she adds that it’s “unclear to what extent Snap has implemented strong guardrails around the My AI generative photo feature.”
TikTok is currently testing Tako, its AI chatbot, in the Philippines, and the tool has the potential to “revolutionize user experience on the platform,” according to Daniel Buchuk, an analyst at Watchful Technologies. By offering personalized recommendations, facilitating direct dialogue and guiding users through the vast TikTok landscape, Tako is meant to transform the way users navigate and discover content. Buchuk says Tako could be a “powerful tool for reaching new audiences and engaging with existing users.” Tako can also answer user questions and provide support, which could help “build trust and loyalty with customers.”
Additional takes:
· “TikTok Is Testing an AI Chatbot Named Tako” (Bloomberg)
· “Introducing My AI Snaps” (Snapchat blog)
· “Do consumers support generative AI in marketing?” (Chain Store Age)
Further reading
· “Generative AI’s $7 Trillion Ecosystem: Invest In Nvidia, Microsoft, Adobe And More” (Forbes)
· “China isn’t waiting to set down rules on generative AI” (MIT Technology Review)
· “Opinion: Nvidia created an AI bubble, and software stocks are already paying the price” (MarketWatch)
· “Microsoft has launched ‘Jugalbandi’—a new generative AI app for India” (Forbes)
· “Alibaba begins rollout of its ChatGPT-style tech as China AI race heats up” (CNBC)
Thanks for following along. We’ll see you next Friday.
Previously: “The Week in Generative AI: May 26, 2023 edition”