“AI can produce secure-looking code, but it certainly lacks contextual awareness of the organization’s threat model, compliance needs, and adversarial risk environment,” Moolchandani says.
Tuskira’s CISO lists two main concerns: first, that AI-generated security code may not be hardened against evolving attack techniques; and second, that it may fail to reflect the organization’s specific security landscape and needs. Moreover, AI-generated code can give a false sense of security, as developers, particularly inexperienced ones, often assume it is secure by default.
There are also risks associated with compliance and violations of licensing terms or regulatory standards, which could lead to legal issues down the line. “Many AI tools, especially those generating code based on open-source codebases, can inadvertently introduce unvetted, improperly licensed, or even malicious code into your system,” O’Brien says.
Open-source licenses, for instance, usually have particular necessities concerning attribution, redistribution, and modifications, and counting on AI-generated code may imply unintentionally violating these licenses. “That is notably harmful within the context of software program improvement for cybersecurity instruments, the place compliance with open-source licensing is not only a authorized obligation but in addition impacts safety posture,” O’Brien provides. “The danger of inadvertently violating mental property legal guidelines or triggering authorized liabilities is important.”