First, the agents were able to uncover new vulnerabilities in a test environment, but that doesn't mean they can find all kinds of vulnerabilities in all kinds of environments. In the simulations that the researchers ran, the AI agents were essentially shooting fish in a barrel. These may have been new species of fish, but they knew, in general, what fish looked like. "We haven't found any evidence that these agents can find new types of vulnerabilities," says Kang.
LLMs can find new uses for common vulnerabilities
Instead, the agents found new instances of very common types of vulnerabilities, such as SQL injections. "Large language models, though advanced, are not yet capable of fully understanding or navigating complex environments autonomously without significant human oversight," says Ben Gross, security researcher at cybersecurity firm JFrog.
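For readers unfamiliar with the class of flaw involved, the following is a minimal sketch of the kind of SQL injection the agents rediscovered new instances of; the table, payload, and helper names are illustrative, not taken from the researchers' test environment.

```python
import sqlite3

# Throwaway in-memory database with one user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_vulnerable(name: str):
    # UNSAFE: user input is spliced directly into the SQL string.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # SAFE: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic payload rewrites the WHERE clause to be always true,
# so the vulnerable version leaks every row; the safe version
# finds no user literally named "' OR '1'='1".
payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # → [('hunter2',)]
print(lookup_safe(payload))        # → []
```

Vulnerabilities of this shape are simple and extremely well documented, which is part of why agents trained on public code can recognize fresh examples of them.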
And there wasn't much diversity in the vulnerabilities tested, Gross says; they were primarily web-based and could be easily exploited because of their simplicity.