New York Times / Benjamin Mullin and David McCabe
The Guardian / Amanda Meade
NewsGuard / Macrina Wang, Charlene Lin, and McKenzie Sadeghi
In a NewsGuard test, DeepSeek debunked provably false claims only 17 percent of the time →“NewsGuard found that with news-related prompts, DeepSeek repeated false claims 30 percent of the time and provided non-answers 53 percent of the time…NewsGuard’s December 2024 audit of the 10 leading chatbots (OpenAI’s ChatGPT-4o, You.com’s Smart Assistant, xAI’s Grok-2, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini 2.0, and Perplexity’s answer engine) found that they had an average fail rate of 62 percent. DeepSeek’s fail rate places the chatbot as tied for 10th of the 11 models tested.”
404 Media / Jason Koebler
OpenAI furious DeepSeek might have stolen all the data OpenAI stole from us →“This is all to say that, if OpenAI argues that it is legal for the company to train on whatever it wants for whatever reason it wants, then it stands to reason that it doesn’t have much of a leg to stand on when competitors use common strategies used in the world of machine learning to make their own models. But of course, it is going with the argument that it must ‘protect [its] IP.’”
Deadline / Jill Goldsmith
The Wall Street Journal / Suzanne Vranica and Patience Haggin
Meta’s free-speech shift made it clear to advertisers: “Brand safety” is out of vogue →“For the better part of a decade, the dialogue between Meta and Madison Avenue has moved in one direction: The company pledged to do more and more to combat hate speech and misinformation on Facebook and Instagram, responding to grievances from brands as well as broader social and political pressures. Now, a new cultural moment has arrived, punctuated by Donald Trump’s return to the White House, and Meta’s executives are carrying an unmistakably different message: Some things we used to remove will now be allowed.”
The Kyiv Independent / Olga Rudenko
WIRED / Lily Hay Newman and Matt Burgess