“To reap the benefits of AI, users should have confidence that the AI will behave as designed, and outcomes are safe and secure,” the CSA said in the guidance. “However, in addition to safety risks, AI systems can be vulnerable to adversarial attacks, where malicious actors deliberately manipulate or deceive the AI system.”
The guidelines do not address AI safety or broader issues commonly associated with AI, such as fairness, transparency, or inclusion, nor do they tackle cybersecurity risks introduced by AI systems.
While some recommended actions may overlap with these areas, the guidelines also do not specifically address the misuse of AI in cyberattacks, such as AI-enabled malware, or threats like misinformation, disinformation, and deepfakes, the CSA said.