Microsoft sues service for creating illicit content with its AI platform
Microsoft is accusing three individuals of running a "hacking-as-a-service" scheme designed to allow the creation of harmful and illicit content using the company's platform for AI-generated content.
The foreign-based defendants developed tools specifically designed to bypass the safety guardrails Microsoft has erected to prevent the creation of harmful content through its generative AI services, said Steven Masada, the assistant general counsel for Microsoft's Digital Crimes Unit. The defendants then compromised the legitimate accounts of paying customers and combined those two elements to create a fee-based platform people could use.
A sophisticated scheme
Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn't know their identities.