This image of “royal” raccoons was created by an AI system called Imagen, built by Google Research.

AI-Generated ‘Realistic’ Images Pose Ethical Questions, Along with Bias Concerns

There is a quote from Kurt Vonnegut Jr.: “You asked the impossible of a machine and it complied.”

Today we find the story of a couple of machines that complied so well that they can’t be released to the public just yet. We are talking about DALL-E 2 and Imagen. It seems you can give either of these programs a specific task and it will comply. The risk, however, lies in just how well they comply. From cnn.com we found this:

“A million bears walking on the streets of Hong Kong. A strawberry frog. A cat made out of spaghetti and meatballs.
These are just a few of the text descriptions that people have fed to cutting-edge artificial intelligence systems in recent weeks, which these systems — notably OpenAI’s DALL-E 2 and Google Research’s Imagen — can use to produce incredibly detailed, realistic-looking images.

“The resulting pictures can be silly, strange, or even reminiscent of classic art. They’re being shared widely (and sometimes breathlessly) on social media, including by influential figures in the tech community. DALL-E 2 (which is a newer version of a similar, less capable AI system OpenAI rolled out last year) can also edit existing images by adding or taking out objects.”

Take a moment to examine some of the highlighted links in the above paragraph. The images will astound you.

Now, if a program can produce such silly and innocuous images, can it produce convincingly realistic ones as well?

The problem is that these programs are so good they can produce images of famous people doing rather naughty things. The images may not be real, but to most people they will appear authentic.

The contrast between the images these systems create and the thorny ethical issues they raise is stark for Julie Carpenter, a research scientist and fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.

“One of the things we have to do is we have to understand AI is very cool and it can do some things very well. And we should work with it as a partner,” Carpenter said. “But it’s an imperfect thing. It has its limitations. We have to adjust our expectations. It’s not what we see in the movies.”

Another Problem

Bias in how these programs are trained has made headlines early and often in AI news. And in this article, you’ll see how simply asking for a picture of “royal” raccoons proves once again that some programmers, and their programming, need a much harder look.

Because Imagen and DALL-E 2 take in words and spit out images, they had to be trained with both types of data: pairs of images and related text captions. Google Research and OpenAI filtered harmful images such as pornography from their datasets before training their AI models, but given the large size of those datasets, such efforts are unlikely to catch all harmful content, nor do they render the AI systems unable to produce harmful results. In the Imagen paper, Google’s researchers pointed out that, despite filtering some data, they also used a massive dataset that is known to include porn, racist slurs, and “harmful social stereotypes.”
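To make that training setup concrete, here is a minimal sketch, assuming a toy dataset, of what screening image-caption pairs before training could look like. The Pair record, the keyword blocklist, and the is_unsafe_caption / is_unsafe_image checks are illustrative placeholders, not Google Research’s or OpenAI’s actual pipeline, which operates on far larger datasets with trained safety classifiers.

```python
# Minimal, hypothetical sketch of pre-training data filtering for a
# text-to-image model. Every name and check here is a placeholder.
from dataclasses import dataclass
from typing import List

@dataclass
class Pair:
    image_path: str  # path to the image file
    caption: str     # text describing the image

# Hypothetical terms to exclude; real systems use trained classifiers,
# not simple keyword lists.
BLOCKLIST = {"slur_example", "explicit_example"}

def is_unsafe_caption(caption: str) -> bool:
    # Crude keyword check against the blocklist.
    return bool(set(caption.lower().split()) & BLOCKLIST)

def is_unsafe_image(image_path: str) -> bool:
    # Placeholder: a real pipeline would run an image safety model here.
    return False

def filter_pairs(pairs: List[Pair]) -> List[Pair]:
    # Keep only pairs where neither the caption nor the image is flagged.
    return [p for p in pairs
            if not is_unsafe_caption(p.caption)
            and not is_unsafe_image(p.image_path)]

if __name__ == "__main__":
    raw = [
        Pair("img/raccoons.jpg", "royal raccoons posing for a portrait"),
        Pair("img/flagged.jpg", "an explicit_example scene"),
    ]
    print(len(filter_pairs(raw)))  # 1 -- the flagged pair is dropped
```

As the article notes, even a scheme like this, scaled up and backed by better classifiers, cannot guarantee a clean dataset; at hundreds of millions of pairs, some harmful content slips through.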

Filtering can also lead to other issues: Women tend to be represented more than men in sexual content, for instance, so filtering out sexual content also reduces the number of women in the dataset, said Ahmad. Filtering these datasets for bad content is impossible, Carpenter said, since people are involved in decisions about how to label and delete content — and different people have different cultural beliefs.

“AI doesn’t understand that,” she said.
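The representation effect Ahmad describes can be shown with a toy calculation. The numbers below are invented purely for illustration and describe no real dataset; they only demonstrate the mechanism: if flagged sexual content skews toward images of women, removing it lowers the share of women in what remains.

```python
# Toy illustration with made-up numbers: filtering flagged content can
# shift demographic representation in the remaining training data.
def share_of_women(pairs):
    return sum(1 for p in pairs if p["depicts_woman"]) / len(pairs)

# Hypothetical records: whether the image depicts a woman and whether
# it was flagged as sexual content.
dataset = (
    [{"depicts_woman": True,  "flagged": False}] * 40 +
    [{"depicts_woman": True,  "flagged": True}]  * 10 +
    [{"depicts_woman": False, "flagged": False}] * 48 +
    [{"depicts_woman": False, "flagged": True}]  * 2
)

kept = [p for p in dataset if not p["flagged"]]

print(f"before filtering: {share_of_women(dataset):.0%} women")  # 50%
print(f"after filtering:  {share_of_women(kept):.0%} women")     # 45%
```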

For now, OpenAI and Google Research are trying to keep the focus on cute pictures and away from images that may be disturbing or pornographic.

So whether it’s people with bad intentions using AI to distort reality or bias that was written into the programs themselves, lots of questions remain: Is the power of AI too much for us to handle? Will AI create a world that is less fair or less believable?

AI simply complies.

read more at cnn.com