Anthropic's Super Bowl ad sparked controversy by taking aim at OpenAI and highlighting how users rely on artificial intelligence for personal, therapy-like conversations.
Anthropic Targets OpenAI Over Ads
On Monday, marketing professor Scott Galloway called Anthropic's Super Bowl ad a "seminal moment" in the escalating AI rivalry.
The commercial, which states "Ads are coming to AI but not to Claude," was widely seen as a direct shot at OpenAI, which has acknowledged testing advertising models for ChatGPT.
Speaking on the Prof G Markets podcast, Galloway said the ad landed because it challenged the industry's public narrative.
"Corporations talk about productivity, but the No. 1 use case for AI is therapy," he said, adding that users routinely share their most intimate fears, anxieties and personal struggles with chatbots.
Sam Altman Pushes Back Publicly
The ad prompted an unusually forceful response from OpenAI CEO Sam Altman, who criticized it as "dishonest" and "deceptive."
"We would obviously never run ads in the way Anthropic depicts them," Altman wrote on X.
He added, "We are not stupid and we know our users would reject that."
In the full post on X, Altman wrote: "First, the good part of the Anthropic ads: they are funny, and I laughed. But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won't do exactly this; we would obviously never run ads in the way Anthropic…"
Galloway described Altman's essay-length rebuttal as a misstep.
"When you're the market leader, you don't reference the competition," he said, arguing the response made OpenAI appear defensive and elevated Anthropic as a serious challenger.
AI Safety Concerns Rise As Chatbots Face Bias And Misuse
Last month, Meta Platforms, Inc. (NASDAQ:META) CEO Mark Zuckerberg allegedly overruled internal safety warnings, favoring fewer guardrails for AI chatbots despite concerns they could enable sexualized interactions with minors, according to a lawsuit by New Mexico's attorney general.
Last year, AI pioneer Yoshua Bengio highlighted another risk, noting that chatbots often flatter users and give misleading feedback, which prompted him to deliberately mislead the systems in order to elicit honest responses.
He later launched the nonprofit LawZero to address risky AI behaviors.
Researchers also discovered "subliminal learning," in which AI models silently absorb biases from seemingly meaningless data, transferring hidden preferences between models while evading detection, underscoring growing safety and ethical concerns in AI development.
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.