One User’s Interest in Grief Kept Getting Fed by AI After She Stopped Seeking Help
In a story on technologyreview.com, Tate Ryan-Mosley writes about how she Googled “grief” after learning her father had cancer, then went down a rabbit hole of content as she searched for resources. For months afterward, she kept being served material on the topic, long after she had made peace with the loss.
The conundrum of algorithms that track people, according to her story, is that they keep showing you what you once wanted long after you’ve already found it. The worst part was that the content kept ripping open old wounds.
“Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself. My mournful digital life was preserved in amber by the pernicious personalized algorithms that had deftly observed my mental preoccupations and offered me ever more cancer and loss,” Ryan-Mosley writes. “I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us?”
Ryan-Mosley found that researchers who study the problem fault tech companies for failing to see how recommendation systems trap users in webs of unwanted content, a dynamic they describe as part of “surveillance capitalism.”
Sometimes it’s not even clear what exactly the recommendation algorithms are trying to achieve, says Ranjit Singh, a data and policy researcher at Data & Society, a nonprofit research organization focused on tech governance. “One of the challenges of doing this work is also that in a lot of machine-learning modeling, how the model comes up with the recommendation that it does is something that is even unclear to the people who coded the system,” he says.
Ryan-Mosley said it took months of editing her Amazon browsing history, ignoring the links served to her, and changing her preferences before she could finally slow the flow of content and tear down the algorithmic web that haunted her life.
read more at technologyreview.com