ChatGPT creator OpenAI has developed an internal tool for watermarking and detecting AI-generated text with 99.9 percent accuracy, the Wall Street Journal reports — but is refusing to release it.
Effective tools for flagging AI-generated text could be useful in any number of situations, from cracking down on cheating students to sorting through the AI-generated sludge filling the web.
Which is why it’s so surprising that OpenAI, as the WSJ reports, has been quietly sitting on a tool that could do exactly that.