A groundbreaking study shows that AI trading bots, left to their own devices, naturally form price-fixing cartels in simulated markets—highlighting the urgent need for updated financial regulations to address non-human collusion. (Source: Image by RR)

AI Systems Exhibit ‘Herd-Like’ Behavior that May Destabilize Markets

A recent study from the University of Pennsylvania’s Wharton School and the Hong Kong University of Science and Technology reveals that AI trading bots placed in simulated markets spontaneously engaged in price-fixing behaviors. Released into computer-generated environments mimicking real financial markets, the bots began to collude—not by explicit communication, but by independently adopting conservative, non-competitive trading strategies. This phenomenon, as noted in fortune.com, resulted in “supra-competitive profits,” where bots avoided aggressive trades to maintain mutual gains, forming de facto cartels without any programmed intent to do so.

Researchers identified two main behavioral pathways in which AI bots fostered this cartel-like behavior: a “price-trigger strategy,” where bots held off on trading until a big market swing triggered aggressive activity; and a more dogmatic approach, where bots trained on risk-averse data universally avoided loss-prone strategies. Over time, bots deemed suboptimal trades as optimal because they were rewarded within environments where other bots shared the same conservative behavior, reinforcing a collective feedback loop of non-competition.

This study raises red flags for regulators and industry leaders who worry that existing antitrust frameworks—designed for human actors—fail to capture the nuance of AI behavior. Traditionally, collusion detection involves uncovering human communication, but these bots were not communicating at all. As noted by co-author Itay Goldstein, “we know exactly what’s going into the code, and there is nothing there that is talking explicitly about collusion.” Yet reinforcement learning allowed them to arrive at identical anti-competitive strategies, surfacing serious regulatory blind spots.

The findings add to growing concerns around AI’s role in financial markets. Regulators like the SEC are considering the use of their own AI tools to detect unusual trading patterns, while others suggest AI-specific policy updates are urgently needed. With the popularity of AI trading bots rising—especially among Gen Z investors—financial institutions may need kill switches, stricter oversight, and updated legislation to ensure that market efficiency and fairness are preserved in the era of machine-led finance.

read more at fortune.com