And advice about swallowing a cat
AI-Powered Coca-Cola Ad Celebrating Authors Gets Basic Facts Wrong
Judge admits nearly being persuaded by AI hallucinations in court filing
In its official trending news section, X (formerly Twitter) promoted a story with the headline “Iran Strikes Tel-Aviv with Heavy Missiles,” but it was fake — and it was created by X’s own AI chatbot, Grok. That was a year ago. And we thought that was bad.
— philmandelbaum.com (@philmandelbaum.bsky.social) 2025-05-15T23:55:59.559Z
I spoke to the person who AI-generated the Chicago Sun-Times reading list. Says he's very embarrassed. This was part of a generic package inserted into newspapers and other publications, so likely to run elsewhere. He didn't know it'd be in Chicago Sun-Times www.404media.co/chicago-sun-…
— Jason Koebler (@jasonkoebler.bsky.social) 2025-05-20T14:47:27.261Z
OMG — it's a viral/syndicated hallucination. Paging @ryancordell.org and @dasmiq.bsky.social and Viral Texts.
— Ted Underwood (@tedunderwood.me) 2025-05-20T14:27:02.514Z
AI itself is neither evil nor good. It’s a tool to use—and for some, to use against others. The peril comes when we accept information to feed our emotional reaction rather than our considered response. And that peril exists across the political spectrum. (Thread) www.axios.com/2025/05/14/m…
— Jeff Roush (@jeffroushwriting.bsky.social) 2025-05-19T00:51:15.703Z
thanks google AI
— Bill Corbett (@billcorbett.bsky.social) 2025-05-18T22:00:55.778Z
Oh, sweet merciful fire, two notes: (1) Even Latham & Watkins, y'all. (2) The lawyer defending an AI company for using stolen IP to train their AI/LLM used the company's AI/LLM to cite check her brief, which hallucinated results. fingfx.thomsonreuters.com/gfx/legaldoc…
— Gabriel Malor (@gabrielmalor.bsky.social) 2025-05-15T19:49:05.202Z
This may be my new favourite hallucination. Thanks to @arynn.bsky.social for finding
— Dr Joe McIntyre (@drjoemcintyre.bsky.social) 2025-05-15T23:21:59.937Z
An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates. Experts say the problem is bigger than that
— New Scientist (@newscientist.com) 2025-05-16T09:59:13.016Z
Anthropic's lawyers take blame for AI 'hallucination' in music publishers' lawsuit reut.rs/4dhKCt0
— Reuters (@reuters.com) 2025-05-15T19:45:14.946Z
incredible: a lawyer acting for one of the world's biggest AI companies, Anthropic, had to apologise to a US court because they had cited an AI hallucination in their submission. techcrunch.com/2025/…
— CAMERON WILSON (@cameronwilson.bsky.social) 2025-05-16T00:13:47.000Z
AI will take your job as soon as it figures out what year it is
— Drew Harwell (@drewharwell.com) 2025-05-28T18:45:39.117Z