
OpenAI is Developing Text Watermarking for ChatGPT

Despite the tool's readiness, OpenAI has yet to release it due to ongoing internal debate.

By Inc.Arabia Staff

OpenAI has confirmed that it is finalizing a text watermarking tool for ChatGPT, a method that could detect AI-generated essays and expose academic cheating.

Despite the tool's readiness, OpenAI has yet to release it due to ongoing internal debate, according to a report by The Wall Street Journal.[1]

In an update to a May blog post, OpenAI acknowledged its work on text watermarking and explained that it is one of several approaches under consideration for text provenance, alongside classifiers and metadata. The company said its watermarking method has been highly accurate in some situations, but remains vulnerable to tampering techniques such as translation, rewording with another generative model, or inserting and then deleting special characters.
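OpenAI has not disclosed how its watermark works. But published schemes in the research literature (for example, "green list" watermarks) bias generation toward tokens selected by a pseudorandom rule keyed on the preceding context, which a detector can later check statistically: unwatermarked text matches the rule at roughly chance rate, while watermarked text matches far more often. The sketch below illustrates only the detection side of such a scheme, with entirely hypothetical function names, and is not OpenAI's method:

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Toy rule: hash the (previous, current) token pair and call the
    token 'green' if the hash lands in the lower half of the range.
    For random text, roughly 50% of tokens will be green."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 128

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens flagged green. Unwatermarked text should
    hover near 0.5; a watermarked generator that preferred green
    tokens would score noticeably higher."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

This toy also shows why the tampering methods OpenAI cites are a problem: rewording or translating the text replaces the token pairs the hash rule depends on, dragging the green fraction back toward chance and erasing the statistical signal.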

OpenAI also noted potential unintended consequences of text watermarking, such as stigmatizing AI use for non-native English speakers. The company is weighing these risks while prioritizing the release of authentication tools for audiovisual content.

An OpenAI spokesperson told TechCrunch that the company is taking a "deliberate approach" to text provenance due to the complexities and potential broader impact on the ecosystem beyond OpenAI.[2]

Recently, OpenAI began rolling out an advanced voice mode to a select group of ChatGPT Plus users. Originally planned for late June, the launch was delayed to July to ensure it met the company's standards.

In May, OpenAI unveiled its latest AI model, GPT-4o, just over a year after GPT-4's debut. GPT-4o is available to all users, including those on the free tier.

A month earlier, OpenAI introduced a new voice-cloning tool. It comes with a caveat: strict controls will be enforced until safeguards are established to mitigate the risks of audio manipulation aimed at deceiving listeners.
