Welcome to The Week in Generative AI, a weekly column for marketers from Quad Insights that quickly sums up need-to-know developments surrounding this rapidly evolving technology.
2024 AI predictions
AI took a star turn in 2023, and 2024 trendspotting has started. Experts at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) have highlighted seven areas of interest for the new year, including white-collar work shifts, deepfake proliferation and governmental regulation. Among the contributors to the list of predictions: Peter Norvig, an HAI fellow and Google researcher, who writes that he’s focused on the “rise of agents and being able to connect to other services to actually do things.”
Meanwhile, industry titan Bill Gates weighs in on his blog with visions of a future in which AI has reshaped work and communication. “If we make smart investments now, AI can make the world a more equitable place,” he writes. “It can reduce or even eliminate the lag time between when the rich world gets an innovation and when the poor world does.”
In a less optimistic look ahead, TechCrunch’s Devin Coldewey foresees the 2024 U.S. presidential election as an AI stress test with “bot accounts and fake blogs [that] spout generated nonsense 24/7” and adds that voters should “expect both false positives and false negatives in a concerted effort to confuse the narrative and make people distrust everything they see and read.”
Related coverage:
• “A song of hype and fire: The 10 biggest AI stories of 2023” (Ars Technica)
• “Agencies hope AI helps with content transparency, anticipation and commerce in 2024” (Digiday)
• “Alphabet to limit election queries Bard and AI-based search can answer” (Reuters)
• “An Anticipated Wave of AI Specialist Jobs Has Yet to Arrive” (The Wall Street Journal)
• “Four trends that changed AI in 2023” (MIT Technology Review)
OpenAI and safety
On Monday, OpenAI, creator of ChatGPT, announced the formation of a “preparedness team” to build out its risk management capabilities.
The Washington Post’s Gerrit De Vynck reports that MIT AI professor Aleksander Madry will lead this new group as it monitors “how and when OpenAI’s tech can instruct people to hack computers or build dangerous chemical, biological and nuclear weapons, beyond what people can find online through regular research.”
This group will join two other similar teams at OpenAI, Ina Fried explains at Axios. “There’s a safety team that focuses on mitigating and addressing the risks posed by the current crop of tools,” Fried writes, “while a superalignment team looks at issues posed by future systems whose capabilities outstrip those of humans.”
Related coverage:
• “How Microsoft’s multibillion-dollar alliance with OpenAI really works” (Financial Times)
• “Microsoft’s near-term fate is in OpenAI’s hands — for better or worse” (Yahoo Finance)
• “OpenAI Says Board Can Overrule CEO on Safety of New AI Releases” (Bloomberg)
U.S. government expands efforts to foster “trustworthy” AI development
On Tuesday, the National Institute of Standards and Technology (NIST) issued a Request for Information (RFI) to “assist in the implementation of its responsibilities under the recent Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” In the RFI, NIST asks for “information related to AI red-teaming, generative AI risk management, reducing the risk of synthetic content, and advancing responsible global technical standards for AI development.”
As Reuters’ David Shepardson notes, “external red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to U.S. Cold War simulations where the enemy was termed the red team.”
According to the NIST website, responses will be accepted until Feb. 2, 2024.
Related coverage:
• “AI Needs Safety Standards and NIST Is Writing Them” (PYMNTS)
Further reading
• “Agencies are tackling generative AI, but with a little help” (Marketing Brew)
• “Amazon’s AI Product Reviews Seen Exaggerating Negative Feedback” (Bloomberg)
• “Apple isn’t standing still on generative AI, and making human models dance is proof” (AppleInsider)
• “Expedia wants to use AI to cut Google out of its trip-planning business” (The Verge)
• “Microsoft Copilot users can now turn any idea into a song using AI” (VentureBeat)
• “Midjourney’s move to a dedicated website is a big deal for AI art” (Creative Bloq)
• “TomTom creates AI-based conversational assistant for vehicles with Microsoft” (Reuters)
Thank you for reading along with us this year. We will see you in 2024 for more AI news.
If you’d like to catch up on prior installments of this column, start by heading to our last recap: “The Week in Generative AI: December 15, 2023 edition”