The Most Dangerous Hackers Just Took Over ChatGPT

ChatGPT and AI Chatbots hackers

It’s going to be a carnage fueled by false information by Dangerous Hackers!

header-advert

For years, cybersecurity ninjas have tested their mettle at Def Con, the biggest hacker conference in the world, by hacking into cars, finding security flaws in smart homes, or even rigging elections.

Therefore, it shouldn’t come as a surprise that hackers at this year’s Def Con in Las Vegas have turned their attention to AI chatbots, a trend that has gone viral, particularly after OpenAI made ChatGPT available to the general public late last year.

According to NBC News, the convention held a whole contest to develop new prompt injections that make chatbots like Google’s Bard or ChatGPT spout nearly whatever an attacker desires rather than to find software weaknesses.

According to the report, the competition involved six of the largest AI companies in an effort to persuade hackers to find vulnerabilities in their generative AI tools, including Meta, Google, OpenAI, Anthropic, and Microsoft. Even the White House previously declared its support for the event in May.

And nobody should be shocked by that. Although these chatbots are technically remarkable, they are notoriously bad at accurately discerning fact from fiction. Additionally, as we have often observed, they are simple to control. Additionally, there are very significant financial motivations to find these problems given the billions of dollars going into the AI business. According to Rumman Chowdhury, a trust and safety consultant who worked on the competition’s design, NBC:

All of these companies are trying to commercialize these products

ChatGPT and AI Chatbots Hackers

And unless this model can reliably interact in innocent interactions, then it is not a marketable product.

The businesses who took part in the competition left themselves a lot of room. For instance, they will have plenty of time to fix any issues since they won’t be made public until February, 2024. Additionally, hackers at the event could only access the systems using the laptops that were given.

It remains to be seen, though, whether the work will result in long-lasting remedies. Researchers from Carnegie Mellon have discovered that the chatbot guardrails used by these corporations are laughably trivial to get over with a quick prompt injection, allowing them to be used as effective tools for discrimination and deception.

Even worse, these experts claim that despite the numerous particular problems that a swarm of Def Con hackers find, there is no quick cure for the problem’s core cause.

According to Zico Kolter, a professor at Carnegie Mellon and one of the report’s authors, “there is no obvious solution,” he told the New York Times last month:

You can create as many of these attacks in a short period of time as you like.

– Zico Kolter

There are no good guardrails,” said Tom Bonner of the AI security company HiddenLayer, who spoke at this year’s DefCon, to the Associated Press.

A straightforward set of images and text can also be used to “poison” AI training data, with potentially disastrous results, according to recent study from ETH Zurich in Switzerland.

In other words, whether or not an army of hackers tests AI companies’ products, they will have their job cut out for them. According to Chowdhury,

Misinformation is going to be a persistent issue for a while.

About News Blob 23 Articles
NewsBlob is a premier news blog dedicated to shedding light on the latest developments and engaging stories shaping Nigeria, Africa, and the world. Our platform serves as a beacon of reliable information, offering comprehensive coverage on politics, trends, entertainment, business, investments, lifestyle, education, sports, and more.

Be the first to comment

Leave a Reply