Workers at the grocery-delivery service Shipt are raising concerns about an AI system that determines their hours and wages.

When AI Makes the Rules, People May Not Be the Top Priority

The age of AI is producing outcomes that were not possible before, and AI-driven inventions and re-imaginings of products keep arriving faster all the time. But alarms are being raised over racism baked into algorithms and over systems that decide workers' hours and wages in ways the workers say are unfair, according to a recent story.

Here is the first example: Workers for Shipt, the grocery-delivery platform owned by Target, are protesting the firm’s recent implementation of a new algorithm dictating workers’ schedules and wages. How the algorithm makes these decisions isn’t clear: Shipt has provided few details to its more than 200,000 workers, and the company refuses to share anything with the public, claiming that the system is “proprietary.” But even without access to the inner workings of the algorithm, workers feel its impacts. Since the system went live in September, they say their wages have decreased and their scheduling has become more complicated and unpredictable, throwing their lives into precarity and financial uncertainty. As Shipt worker Willy Solis put it:

“This is my business, and I need to be able to make informed decisions about my time.”

Many companies that create algorithms have been reticent to reveal what goes into the making of these products.

In some sense, this shouldn’t be surprising. These systems are largely produced by private companies and sold to other companies, governments, and institutions. They’re beholden to the incentives of those who create them, and whatever else they might do, they’re ultimately designed to increase the profits, efficiency, and growth of those who use them. Put another way, these are the tools of the powerful, generally applied by those who have power over those who have less. Shipt’s own chief communications officer, Molly Snyder, said it herself:

“We believe the model we rolled out is the right one for the company.”

Here we see a tacit acknowledgment that the goals of the company are separate from those of its workers. And only one side has the power to choose how, and whether, to use the algorithm.

The article also explains how the British government deployed a new grading algorithm that failed spectacularly along predictable racial and class lines. Trying to compensate for COVID-related interruptions in testing, the algorithm estimated the scores it assumed students would have achieved under normal conditions, basing its estimates on factors like teacher predictions and the historical performance of a given school. In doing so, it reduced the scores of poor, Black, and brown students while giving higher marks to students from elite schools and wealthy areas. Cori Crider, a lawyer from the London law firm Foxglove, which won the reversal of the faulty grades, said:

“There’s been a refusal to have an actual debate about how these systems work and whether we want them at all.”

And that is the question here: can human-developed AI take the reins of a human operation and still look out for the best interests of the workers, rather than just the business’s bottom line? There is much more to read at the link below.