Intel has disclosed a maximum-severity vulnerability in some versions of its Intel Neural Compressor software for AI model compression.
The bug, designated as CVE-2024-22476, provides an unauthenticated attacker with a way to execute arbitrary code on Intel systems running affected versions of the software. The vulnerability is the most serious among dozens of flaws the company disclosed in a set of 41 security advisories this week.
Improper Input Validation
Intel identified CVE-2024-22476 as stemming from improper input validation, or a failure to properly sanitize user input. The chip maker has given the vulnerability a maximum score of 10 on the CVSS scale because the flaw is remotely exploitable with low complexity and has a high impact on data confidentiality, integrity, and availability. An attacker doesn't require any special privileges, nor is user interaction required for an exploit to work.
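Taken together, the characteristics Intel describes map to a CVSS v3.1 vector along the following lines (reconstructed from that description rather than quoted from the advisory; a base score of 10.0 with those metrics also implies a changed scope):

```
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H   (base score 10.0)
```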
The vulnerability affects Intel Neural Compressor versions before 2.5.0. Intel has recommended that organizations using the software upgrade to version 2.5.0 or later. Intel's advisory indicated that the company learned of the vulnerability from an external security researcher or entity whom the company did not identify.
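For teams unsure which release they are running, checking the installed package version is straightforward; this sketch assumes the library is installed under its usual PyPI distribution name, neural-compressor:

```python
# Quick check of the installed Intel Neural Compressor release.
# Versions below 2.5.0 are affected by CVE-2024-22476.
from importlib.metadata import version

print(version("neural-compressor"))  # PyPI distribution name
# If this prints something older than 2.5.0, upgrade with, e.g.:
#   pip install --upgrade "neural-compressor>=2.5.0"
```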
Intel Neural Compressor is an open source Python library that helps compress and optimize deep learning models for tasks such as computer vision, natural language processing, recommendation systems, and a variety of other use cases. Techniques for compression include neural network pruning, which removes the least important parameters; reducing memory requirements through a process called quantization; and distilling a larger model into a smaller one with similar performance. The goal with AI model compression technology is to help enable the deployment of AI applications on a diverse range of hardware devices, including those with limited or constrained computational power, such as mobile devices.
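As an illustration of the kind of workflow the library supports, the following is a minimal post-training quantization sketch against the 2.x API; the toy model and calibration data are placeholders, and the exact API surface may differ between releases:

```python
import torch
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# Toy FP32 model and calibration data, stand-ins for a real workload.
float_model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
calib_loader = torch.utils.data.DataLoader(
    [(torch.randn(8), 0) for _ in range(32)], batch_size=8
)

config = PostTrainingQuantConfig()   # default INT8 post-training settings
q_model = fit(model=float_model, conf=config, calib_dataloader=calib_loader)
q_model.save("./quantized_model")    # persist the compressed artifact
```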
One Among Many
CVE-2024-22476 is actually one of two vulnerabilities in Intel's Neural Compressor software that the company disclosed, and released a fix for, this week. The other is CVE-2024-21792, a time-of-check time-of-use (TOCTOU) flaw that could result in information disclosure. Intel assessed the flaw as presenting only a moderate risk because, among other things, it requires an attacker to already have local, authenticated access to a vulnerable system to exploit it.
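TOCTOU bugs in general arise when software checks a resource and then acts on it in two separate steps, leaving a window in which the resource can be swapped out from under it. The generic Python pattern below illustrates the class of bug; it is not taken from Intel's code:

```python
import os

def read_config(path):
    # TOCTOU anti-pattern: the check and the use are two separate steps.
    if os.access(path, os.R_OK):        # time of check
        # Window of opportunity: an attacker who can manipulate `path`
        # (e.g., swap the file for a symlink) may redirect the open()
        # below to a file the program was never meant to read.
        with open(path) as f:           # time of use
            return f.read()
    raise PermissionError(path)

# Safer: skip the pre-check and handle failure from open() directly,
# so there is no gap between checking and using the file.
def read_config_safer(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        raise PermissionError(path) from exc
```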
In addition to the Neural Compressor flaws, Intel also disclosed five high-severity privilege escalation vulnerabilities in its UEFI firmware for server products. Intel's advisory listed all of the vulnerabilities (CVE-2024-22382; CVE-2024-23487; CVE-2024-24981; CVE-2024-23980; and CVE-2024-22095) as input validation flaws, with severity scores ranging from 7.2 to 7.5 on the CVSS scale.
Emerging AI Vulnerabilities
The Neural Compressor vulnerabilities are examples of what security analysts have recently described as the expanding, but often overlooked, attack surface that AI software and tools are creating at enterprise organizations. Many of the security concerns around AI software so far have centered on the risks in using large language models and LLM-enabled chatbots like ChatGPT. Over the past year, researchers have released numerous reports on the susceptibility of these tools to model manipulation, jailbreaking, and several other threats.
What has been somewhat less of a focus so far is the risk to organizations from vulnerabilities in some of the core software components and infrastructure used in building and supporting AI products and platforms. Researchers from Wiz, for instance, recently found weaknesses in the widely used HuggingFace platform that gave attackers a way to tamper with models in the registry or to relatively easily upload weaponized ones to it. A recent study commissioned by the UK's Department for Science, Innovation and Technology identified numerous potential cyber-risks to AI technology at every life cycle stage, from the software design phase through development, deployment, and maintenance. The risks range from a failure to do adequate threat modeling and not accounting for secure authentication and authorization in the design phase, to code vulnerabilities, insecure data handling, inadequate input validation, and a long list of other issues.
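The input-validation and data-handling risks on that list are concrete in the model supply chain context: many model checkpoints are serialized with Python's pickle format, which can execute code on load. The sketch below is a generic illustration of that hazard and one common mitigation, not a reconstruction of any specific incident; the file names are hypothetical, and the mitigation relies on the third-party safetensors package:

```python
import pickle

# Risky: pickle.load() can execute arbitrary code embedded in the file,
# so loading an untrusted model checkpoint this way is effectively
# running the uploader's code on your machine.
with open("downloaded_model.pkl", "rb") as f:
    model = pickle.load(f)   # arbitrary code execution hazard

# One common mitigation is a weights-only format such as safetensors,
# which stores raw tensor data and cannot embed executable code.
from safetensors.torch import load_file

weights = load_file("downloaded_model.safetensors")  # plain tensors only
```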