The father of a toddler who sent photos of a medical condition on his son’s genitals to be uploaded to a doctor’s website lost access to his Google files when the images were flagged as pornography. (Source: Adobe Stock)

Google’s Algorithm Fails in a Horrible Way, as Do Those of Other Tech Companies

If you have ever tried to contact a human in charge of a social platform, you know it’s nearly impossible. Getting banned on Facebook for 30 days over something ridiculous is one example. Even when you follow the instructions to appeal the ban, you never reach a person. You reach an algorithm.

That’s what happened to a man described in a story from nytimes.com. Perhaps it should be called a horror story. The story begins:

“Mark noticed something amiss with his toddler. His son’s penis looked swollen and was hurting him. Mark, a stay-at-home dad in San Francisco, grabbed his Android smartphone and took photos to document the problem so he could track its progression. It was a Friday night in February 2021. His wife called an advice nurse at their health care provider to schedule an emergency consultation for the next morning, by video because it was a Saturday and there was a pandemic going on. The nurse said to send photos so the doctor could review them in advance.”

Mark’s wife texted a few high-quality close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system. In one, Mark’s hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images.

With help from the photos, the doctor diagnosed the issue and prescribed antibiotics, which quickly cleared it up. But the episode left Mark with a much larger problem, one that would cost him more than a decade of contacts, emails, and photos, and make him the target of a police investigation. Mark, who asked to be identified only by his first name for fear of potential reputational harm, had been caught in an algorithmic net designed to snare people exchanging child sexual abuse material.

Because technology companies routinely capture so much data, they have been pressured to detect and prevent criminal behavior. Child advocates say the companies’ cooperation is essential to combat the rampant online spread of sexual abuse imagery. But it can entail peering into private archives, such as digital photo albums — an intrusion users may not expect — that has cast innocent behavior in a sinister light in at least two cases The Times has unearthed.

Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, called the cases “canaries in this particular coal mine.”

“There could be tens, hundreds, thousands more of these,” he said.

Given the toxic nature of the accusations, Mr. Callas speculated that most people wrongfully flagged would not publicize what had happened.

The police ultimately agreed that Mark had done nothing wrong. Google did not. The story gets worse.

Severe Violation

After setting up a Gmail account in the mid-aughts, Mark, who is in his 40s, came to rely heavily on Google. He synced appointments with his wife on Google Calendar. His Android smartphone camera backed up his photos and videos to the Google cloud. He even had a phone plan with Google Fi.

Two days after Mark took the photos of his son, Google disabled his account because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse &amp; exploitation.”

Mark was confused at first but then remembered his son’s infection. “Oh, God, Google probably thinks that was child porn,” he thought.

In an unusual twist, Mark had worked as a software engineer on a large technology company’s automated tool for taking down video content flagged by users as problematic. He knew such systems often have a human in the loop to ensure that computers don’t make a mistake, and he assumed his case would be cleared up as soon as it reached that person.

Mark assumed he would get his account back once he explained what happened. He didn’t. He filled out a form requesting a review of Google’s decision, explaining his son’s infection. At the same time, he discovered the domino effect of the suspension. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, but his Google Fi account was also shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.

Again, he did nothing wrong, but when he finally reached a human, Google refused to restore his account.

There’s a lot more to this story, and it would be well worth your time to follow it to its conclusion. Needless to say, the effort Mark and others have put in to correct a mistaken algorithm has been extraordinary, and it didn’t solve the problem. It is a fascinating piece that dives deeper into a problem most of us might assume is rare. It is not.

read more at nytimes.com