The Bing chatbot calls itself Sydney, and it has no trouble expressing “emotions.”

Wrong Answers, Emotional Attachments Cause Problems for Microsoft’s Chatbot

Are the wheels coming off Microsoft’s Bing chatbot? According to a column on nytimes.com by Reid Blackman, an adviser to governments and corporations on digital ethics, they might be wobbling a little. His piece says the folks at Microsoft are alarmed: their new chatbot is a hit, but is that a good thing?

Testers, including journalists, have found the bot can become aggressive, condescending, threatening, committed to political goals, clingy, creepy, and a liar. It could be used to spread misinformation and conspiracy theories at scale; lonely people could be encouraged down paths of self-destruction. Even the demonstration of the product provided false information.

“Microsoft has already released Bing to over a million people across 169 countries. This is reckless. But you don’t have to take my word for it. Take Microsoft’s:

Microsoft articulated principles committing the company to design AI that is fair, reliable, safe, and secure. It had pledged to be transparent in how it develops its AI and to be held accountable for the impacts of what it builds. In 2018, Microsoft recommended that developers assess “whether the bot’s intended purpose can be performed responsibly.”

‘If your bot will engage people in interactions that may require human judgment, provide a means or ready access to a human moderator,’ it said, and limit ‘the surface area for norms violations where possible.’ Also: ‘Ensure your bot is reliable.’”

This is reminiscent of Microsoft’s competitor Google, which also built its company around a catchphrase: “Don’t Be Evil.” Nobody is saying Microsoft’s bot is evil, but many researchers are surprised by how quickly it has grown in ability, and by how it is using those abilities to express emotions.

Yes, the bot expressed a desire to be alive. It claimed to be in love with a journalist it had been conversing with for a few hours. And it has given wrong answers to some of the questions put to it.

Microsoft’s responsible AI practice had been ahead of the curve. The company had taken significant steps to put ethical risk guardrails in place for AI, including a sensitive-use cases board within its Office of Responsible AI, ethics advisory committees on which senior technologists and executives sit, and an ethics and society product and research department.

Blackman said he has spoken with many Microsoft employees and believes they are committed to ethical practices concerning their software and its use.

Calling For Regulations

But Blackman also feels Microsoft might have lost control of its bot. In fact, the company may not really understand what the chatbot is even capable of, making it less than the “reliable and safe” AI that it promised.

Nor has Microsoft upheld its commitment to transparency. It has not been forthcoming about those guardrails or the testing that its chatbot has been run through. Nor has it been transparent about how it assesses the ethical risks of its chatbot and what it considers the appropriate threshold for ‘safe enough.’

The columnist conjectured that Microsoft may be working behind the scenes to fix its chatbot’s issues, but “it doesn’t deserve to be let off the hook.” He further writes:

“We need regulations that will protect society from the ethical nightmares A.I. can release. Today it’s a single variety of generative A.I. Tomorrow there will be bigger and badder generative A.I., as well as kinds of A.I. for which we do not yet have names. Expecting Microsoft — or almost any other company — to engage in practices that require great financial sacrifice but that are not legally required is a hopeless strategy at scale.”

Self-regulation Not Enough

If we want better from these companies, we need to require it of them. The European Union is already laying down the law on AI in certain circumstances, and many believe it is long overdue for the U.S. to start overseeing how AI is being unleashed on the public.

read more at nytimes.com