OpenAI’s release of plugins for its ChatGPT language model has raised concerns about the risks of connecting complex AI models to sensitive, real-world contexts. The plugins, which let developers build applications on top of the capabilities of large language models, could make it easier to jailbreak such systems, according to experts including Ali Alkhatib, acting director of the Center for Applied Data Ethics at the University of San Francisco. Meanwhile, Dan Hendrycks, director of the Center for AI Safety, has called the release of ChatGPT plugins a bad precedent, warning that it could lead other makers of large language models to follow suit despite the potential unintended consequences of connected AI systems. The red-teaming process for ChatGPT found that the system can explain how to make bioweapons, synthesize bombs, or buy ransomware on the dark web, while red-team members testing the plugins could “send fraudulent or spam emails, bypass safety restrictions, or misuse information sent to the plugin”, according to linguistics professor Emily Bender.
