
This Founder Took Measures to Teach His AI to Avoid Rickrolling Users

by admin

While closely inspecting responses generated by his company’s Lindy AI assistants, Flo Crivello ran into a perplexing case: in reply to a new user’s question, her Lindy assistant sent a video tutorial to help her get acquainted with the service. Crivello quickly spotted the problem: the platform has no such tutorial.

“We caught this and wondered, ‘What video did it send?’ and then realized, ‘This is definitely an issue,'” Crivello told TechCrunch.

The video the AI had sent the customer turned out to be Rick Astley’s 1987 pop anthem, “Never Gonna Give You Up” — in other words, the customer had been Rickrolled by an AI.

Rickrolling is a digital prank dating back to 2007, when a 4chan user redirected eager “Grand Theft Auto IV” trailer viewers to Astley’s hit instead. The joke has endured for some seventeen years, and the song has amassed over 1.5 billion views on YouTube.

This widespread internet joke has been absorbed into the training data of large language models like the ones behind ChatGPT, which also power Lindy.

“These models aim to predict the most probable subsequent text,” explained Crivello. “It begins with, ‘Oh, I’m about to send a video!’ and naturally progresses from there.”

Crivello told TechCrunch that across millions of interactions, Lindy inadvertently Rickrolled users only twice. Even so, correcting the mistake was crucial.

“Addressing this in the new era of AI was straightforward, merely necessitating the addition of a directive in our system prompt for every Lindy interaction, explicitly stating ‘do not Rickroll users,’” he noted.
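The fix Crivello describes amounts to prepending a standing directive to the system prompt on every interaction. A minimal sketch of that pattern is below; the function name, directive wording, and base prompt are illustrative assumptions, not Lindy’s actual implementation.

```python
# Hypothetical guardrail directive prepended to every system prompt.
GUARDRAIL = "Do not Rickroll users. Only send links you have verified exist."

def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble a chat-completion message list with the guardrail
    directive prepended to the assistant's base system prompt."""
    return [
        {"role": "system", "content": f"{GUARDRAIL}\n\n{system_prompt}"},
        {"role": "user", "content": user_message},
    ]

messages = build_messages(
    "You are Lindy, a helpful assistant.",  # hypothetical base prompt
    "Can you send me a video tutorial?",
)
```

The message list would then be passed to whatever chat-completion API the product uses; the point is simply that the guardrail rides along with every request rather than being patched in per conversation.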

Lindy’s oversight raises important considerations regarding the extent to which internet culture influences AI development. Lindy’s accidental Rickroll illustrates how naturally AI can replicate specific human interactions. However, the internet’s humor can also infiltrate AI in undesirable ways, as Google discovered when its AI, trained using Reddit data, made unreliable suggestions.

“In Google’s scenario, the information wasn’t fabricated; it was just derived from unreliable sources,” Crivello pointed out.

As Large Language Models (LLMs) continue to advance, Crivello anticipates fewer such blunders. Moreover, he highlights the evolving ease with which these issues can be addressed. Initially, Lindy’s AI would claim to be processing unfulfillable user requests indefinitely, mimicking a very human kind of procrastination.

“It was challenging to amend that behavior initially,” he admitted. “However, the introduction of GPT-4 enabled us to imbue Lindy with the ability to straightforwardly decline unachievable tasks, effectively resolving the issue.”

Fortunately, there’s a chance the customer who was Rickrolled remains unaware of the mishap.

“It’s uncertain if the customer even noticed,” he mused. “We quickly provided the correct link and didn’t receive any feedback on the initial one.”

Compiled by Techarena.au.
