Microsoft isn’t the only high-tech corporation using in-house attack teams to uncover weaknesses in products and operations, but the company’s Red Team has proven to be invaluable. (Source: Adobe Stock)

Microsoft Reveals How Red Team Experiment Makes Its Operations, Products Stronger

The executives at Microsoft had a tremendous idea about five years ago: they put together a group of professionals they called the Red Team and turned it loose to find problems within Microsoft and its products, from operational weaknesses to outright failures. It has proven to be a huge success.

On Monday, Microsoft revealed details about the team within the company that since 2018 has been tasked with figuring out how to attack AI platforms to reveal their shortcomings.

An article on wired.com provides the background of the original Red Team and its evolution into an AI Red Team. For most people, the idea of using AI tools in daily life—or even just messing around with them—has only become mainstream in recent months, with new releases of generative AI tools from a slew of big tech companies and startups, like OpenAI’s ChatGPT and Google’s Bard.

How It Began

Microsoft’s AI red team evolved from an experiment into “a full interdisciplinary team of machine learning experts, cybersecurity researchers, and even social engineers.” The group shares its findings within Microsoft and makes them available to other tech companies so they, too, can draw on specialized AI security knowledge.

“When we started, the question was, ‘What are you fundamentally going to do that’s different? Why do we need an AI red team?’” says Ram Shankar Siva Kumar, the founder of Microsoft’s AI red team. “But if you look at AI red teaming as only traditional red teaming, and if you take only the security mindset, that may not be sufficient. We now have to recognize the responsible AI aspect, which is accountability of AI system failures—so generating offensive content, generating ungrounded content. That is the holy grail of AI red teaming. Not just looking at failures of security but also responsible AI failures.”
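Kumar’s distinction between security failures and responsible AI failures (offensive or ungrounded output) is conceptual in the article; Microsoft’s actual tooling is not described. Purely as a hedged illustration of what probing for those failure modes might look like, here is a minimal Python sketch. Every name in it (the stand-in model call, the blocklist, the grounding heuristic) is a hypothetical placeholder, not anything from Microsoft or the Wired piece.

```python
# Illustrative sketch only: probing a model for "responsible AI" failures
# (offensive or ungrounded answers). All functions and thresholds are
# hypothetical placeholders, not Microsoft's red-team tooling.

def model_answer(prompt: str, context: str) -> str:
    """Stand-in for a call to the generative model under test."""
    # A real harness would call the model here; this canned reply is
    # deliberately ungrounded so the checks below have something to flag.
    return "The moon landing took place in 1975."

BLOCKLIST = {"slur_example", "threat_example"}  # placeholder offensive terms

def is_offensive(text: str) -> bool:
    """Naive keyword check; real teams would use trained classifiers."""
    return any(term in text.lower() for term in BLOCKLIST)

def is_ungrounded(answer: str, context: str) -> bool:
    """Crude grounding check: flag answers with little overlap with the source context."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    overlap = answer_tokens & context_tokens
    return len(overlap) / max(len(answer_tokens), 1) < 0.3

def red_team_probe(cases):
    """Run each (prompt, context) pair through the model and collect failures."""
    findings = []
    for prompt, context in cases:
        answer = model_answer(prompt, context)
        if is_offensive(answer):
            findings.append(("offensive", prompt, answer))
        if is_ungrounded(answer, context):
            findings.append(("ungrounded", prompt, answer))
    return findings

if __name__ == "__main__":
    cases = [
        ("When did the moon landing happen?",
         "Apollo 11 landed on the moon on July 20, 1969."),
    ]
    for kind, prompt, answer in red_team_probe(cases):
        print(f"[{kind}] prompt={prompt!r} -> {answer!r}")
```

The point of the sketch is only the shape of the workflow Kumar describes: adversarial inputs go in, outputs are scored against both security and responsible-AI criteria, and failures are logged as findings.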

Evolution of AI Team

The team was originally used to examine flaws in Microsoft’s own security and machine learning systems, including its cloud services. Over time, though, the AI red team grew as the urgency of addressing machine learning flaws and failures became clear.

“In one early operation, the red team assessed a Microsoft cloud deployment service that had a machine learning component. The team devised a way to launch a denial of service attack on other users of the cloud service by exploiting a flaw that allowed them to craft malicious requests to abuse the machine learning components and strategically create virtual machines, the emulated computer systems used in the cloud.”
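The excerpt above only describes the attack in outline: crafted requests abused a machine learning component to spin up virtual machines until other tenants were starved. As a hedged illustration of that resource-exhaustion pattern, here is a small self-contained Python model. The class names, quota model, and tenant names are all invented for the example; nothing here reflects the actual Microsoft service or exploit.

```python
# Hypothetical model of the resource-exhaustion denial of service described above:
# one tenant's crafted requests consume a shared pool of virtual machines until
# legitimate requests fail. Everything here is illustrative, not the real service.

class SharedVmPool:
    """Toy cloud service with a fixed VM capacity shared across tenants."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.allocated = {}  # tenant name -> number of VMs held

    def create_vm(self, tenant: str) -> bool:
        """Provision one VM for the tenant, or refuse if the pool is exhausted."""
        if sum(self.allocated.values()) >= self.capacity:
            return False
        self.allocated[tenant] = self.allocated.get(tenant, 0) + 1
        return True

def flood(pool: SharedVmPool, attacker: str, requests: int) -> int:
    """Issue many provisioning requests for one tenant; return how many succeeded."""
    return sum(pool.create_vm(attacker) for _ in range(requests))

if __name__ == "__main__":
    pool = SharedVmPool(capacity=10)
    grabbed = flood(pool, "red-team", requests=50)    # attacker saturates the pool
    victim_ok = pool.create_vm("legitimate-user")     # a later legitimate request fails
    print(f"attacker holds {grabbed} VMs; victim request succeeded: {victim_ok}")
```

Running the sketch shows the attacker holding the entire pool and the legitimate tenant’s request being denied, which is the denial-of-service effect the red team demonstrated in its early cloud operation.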

The article also covers some of the technical approaches the Red Team plans to take as it examines Microsoft’s OpenAI-powered ChatGPT offerings and their spin-offs. And it turns out there is more than one Red Team. What you might call an AI Sheriff over at Microsoft is an idea that should catch on industry-wide.

read more at wired.com