Microsoft Engineer Says Company’s AI Ignores Copyrights, Creates ‘Sickening’ Images

Microsoft Copilot

Shane Jones, a Microsoft engineer who has worked for the company for six years, has written a letter to the FTC warning that the company’s Copilot AI ignores copyrights and is capable of creating violent, sexual images that “sickened” him.

As reported by CNBC, Jones says that Microsoft Copilot Designer — which is powered by OpenAI technology — is capable of generating images that violate the company’s responsible AI principles. Through a process known as red-teaming, Jones has been actively probing Copilot Designer for vulnerabilities and found that it would depict a variety of disturbing scenes.

“It was an eye-opening moment,” Jones tells CNBC in an interview. “It’s when I first realized, wow this is really not a safe model.”

Jones, who doesn’t work on Copilot in a professional capacity, says he has been testing Copilot in his free time, along with other Microsoft employees, to identify problems that might arise when using the platform. According to the CNBC report, he was so disturbed by what he saw that he reported his findings in December. Microsoft apparently acknowledged his concerns but refused to take the product off the market as Jones suggested.

He alleges Microsoft blocked his attempts to alert the public to the issue, so he is now taking his complaint directly to the Federal Trade Commission (FTC).

“I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones writes in his letter, which he also published on LinkedIn.

“I have taken extraordinary efforts to try and raise this issue internally including reporting it to the Office of Responsible AI, publishing a detailed internal post that received over 70,000 views on the Senior Leadership Connection community channel, and meeting directly with senior management responsible for Copilot Designer,” he continues.

“Despite these efforts, the company has not removed Copilot Designer from public use or added appropriate disclosures on the product.”

Jones argues that the company should not wait until “a major incident” takes place before putting more powerful safeguards into the platform and should immediately build the infrastructure needed to keep people safe.

“I believe in the core tenets of Microsoft’s comprehensive approach to combating abusive AI-generated content,” he says on LinkedIn.

“Specifically, we need robust collaboration across industry and with governments and civil society and to build public awareness and education on the risks of AI as well as the benefits. I stand committed to pursuing responsible AI with a growth mindset and being more transparent about AI risks so consumers can make their own informed decisions about AI use.”

Jones tells CNBC that “if this product starts spreading harmful, disturbing images globally, there’s no place to report it, no phone number to call and no way to escalate this to get it taken care of immediately.”

By sending his concerns to the FTC, Jones hopes that action will be taken to protect users from what the AI image generator is capable of producing.

Earlier this year, Microsoft claimed that it had “developed robust image classifiers that steer the model away from generating harmful images,” Engadget reports. However, Jones contends that his concerns have not been properly addressed.


Image credits: Header photo licensed via Depositphotos.
