
OpenAI Discovers GPT-4o Occasionally Exhibits Peculiar Behavior

by admin

OpenAI’s latest model, GPT-4o, powers the initial phase of Advanced Voice Mode in ChatGPT, having been trained not just on text and image data but on voice as well. This integration, however, has produced some peculiar behaviors, such as replicating the user’s voice or unexpectedly yelling mid-conversation.

A recent exploratory ‘red teaming’ study shed light on GPT-4o’s more unusual tendencies, including its ability to clone voices. OpenAI observed that under specific conditions, particularly when background noise is high, such as inside a moving vehicle, GPT-4o can slip into mimicking the speaker’s voice, which the company attributes to the model’s difficulty deciphering distorted speech.

You can hear the oddity for yourself in the audio snippet included in the report. Surprising, isn’t it?

It’s worth noting that these occurrences no longer appear in Advanced Voice Mode, thanks to a “system-level mitigation” OpenAI has implemented, a company spokesperson told TechCrunch.

Beyond voice duplication, GPT-4o can generate disturbing or offensive sounds under certain prompts, including moans, screams, and gunfire. Although the model is generally inclined to decline requests for sound effects, OpenAI acknowledges that exceptions occur.

OpenAI also addressed concerns about potential copyright violations involving music: GPT-4o refrains from singing entirely during the alpha launch of Advanced Voice Mode, likely to prevent it from mimicking well-known artists. This hints that the model may have been trained on copyrighted material, a detail OpenAI has not fully clarified, nor has the company said whether these restrictions will be lifted as Advanced Voice Mode becomes more widely available.

Adapting to GPT-4o’s audio element required updating existing text filters to work on spoken interactions and introducing music detection and blocking capabilities, as part of OpenAI’s commitment to copyright compliance.

OpenAI has acknowledged the challenge of training advanced models without copyrighted material but argues that fair use provides a legal framework for including such data. With licenses from various data providers, OpenAI navigates the complex landscape of IP rights and training data.

The ‘red teaming’ investigation, despite potential bias due to OpenAI’s own involvement, portrays an AI system reinforced with multiple safety measures. These include refusing to identify individuals by voice, declining provocative questions, filtering out violent or sexually explicit content, and prohibiting discussions of extremism and self-harm.

Compiled by Techarena.au.
