Pricing algorithms can learn to collude with one another to raise prices. If you shop on Amazon, an algorithm rather than a human probably set the price of the item or service you bought. Pricing algorithms have become ubiquitous in online retail as automated systems have grown increasingly affordable and easy to implement. But while industries such as airlines and hotels have long used machines to set their prices, pricing systems have evolved: they have moved from rule-based programs to reinforcement-learning ones, where the logic behind a product's price is no longer under direct human control.
If you recall, reinforcement learning is a subset of machine learning that uses penalties and rewards to incentivize an AI agent toward a specific goal. AlphaGo famously used it to beat the best human players at the ancient board game Go. Within a pricing context, these systems are given a goal such as to maximize overall profit; then they experiment with different strategies in a simulated environment to find the optimal one.
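In code, the core of such a system can be sketched as a tiny Q-learning loop. Everything here — the price grid, the toy demand function, and the learning parameters — is an illustrative assumption, not any real retailer's setup; the point is only the reward-driven update:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

PRICES = [1.0, 1.5, 2.0]            # hypothetical discrete price grid
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def demand(price):
    """Toy linear demand (assumed): quantity sold falls as price rises."""
    return max(0.0, 3.0 - price)

def profit(price):
    return price * demand(price)  # the reward signal is per-period profit

# Stateless Q-values: one estimate of long-run value per price level.
Q = {p: 0.0 for p in PRICES}

def step():
    # epsilon-greedy: usually charge the best-known price, sometimes explore
    if random.random() < EPS:
        price = random.choice(PRICES)
    else:
        price = max(Q, key=Q.get)
    reward = profit(price)
    # Q-learning update: nudge the estimate toward reward + discounted future value
    Q[price] += ALPHA * (reward + GAMMA * max(Q.values()) - Q[price])
    return price

for _ in range(5000):
    step()

best = max(Q, key=Q.get)
print(best)  # profit p * (3 - p) peaks at 1.5 on this grid
```

No human ever tells the agent which price is best; the profit feedback alone pushes the Q-values toward the optimal level.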
Researchers at the University of Bologna in Italy created two simple reinforcement-learning-based pricing algorithms and set them loose in a controlled environment. They discovered that the two completely autonomous algorithms learned to respond to one another’s behavior and quickly pulled the price of goods above where it would have been had either operated alone.
“What is most worrying is that the algorithms leave no trace of concerted action,” the researchers wrote. “They learn to collude purely by trial and error, with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude.” This risks driving up the price of goods and ultimately harming consumers.
Artificial intelligence, algorithmic pricing, and collusion
Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolò, Sergio Pastorello, 3 February 2019
Note: The blue and red lines show the price dynamic over time of two autonomous pricing algorithms (agents) when the red algorithm deviates from the collusive price in the first period.
The figure shows the price path in the subsequent periods. Clearly, the deviation is punished immediately (the blue line price drops immediately after the deviation of the red line), making the deviation unprofitable. However, the punishment is not as harsh as it could be (i.e. reversion to the competitive price), and it is only temporary; afterwards, the algorithms gradually return to their pre-deviation prices.
What is particularly noteworthy is the behaviour of the deviating algorithm. Plainly, it is responding not only to the rival but also to its own action. (If it responded only to the rival, there would be no reason to cut the price in period t = 2, as the rival has charged the collusive price in period t = 1). This kind of self-reactive behaviour is a distinctive sign of genuine collusion, and it would be difficult to explain otherwise.
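The deviation test just described can be mimicked with a hand-written reaction function — a hypothetical stand-in for the learned Q-table policies, not the paper's actual output — that matches the lower of the two last prices and then drifts halfway back toward the collusive level each period:

```python
COLLUSIVE = 1.8  # hypothetical collusive price level

def react(own_last, rival_last):
    """Illustrative reaction function: punish a cut, then drift back."""
    low = min(own_last, rival_last)
    if low < COLLUSIVE:
        # temporary punishment: move halfway back toward the collusive price
        return round(low + 0.5 * (COLLUSIVE - low), 3)
    return COLLUSIVE

blue = red = COLLUSIVE
path = []
for t in range(8):
    red_next = 1.2 if t == 1 else react(red, blue)  # forced price cut at t = 1
    blue_next = react(blue, red)
    blue, red = blue_next, red_next
    path.append((t, blue, red))

for t, b, r in path:
    print(t, b, r)
```

In the resulting path, blue drops immediately after red's forced cut, and both prices then climb gradually back toward the collusive level. Note also that the deviator itself stays below the collusive price at t = 2 — a response to its own last action, mirroring the self-reactive behaviour described above.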
The collusion that we find is typically partial – the algorithms do not converge to the monopoly price but to a somewhat lower one. However, we show that the propensity to collude is stubborn – substantial collusion continues to prevail even when there are three or four active firms, when the firms are asymmetric, and when they operate in a stochastic environment. The experimental literature with human subjects, by contrast, has consistently found that humans are practically unable to coordinate without explicit communication except in the simplest case of two symmetric agents and no uncertainty.
What is most worrying is that the algorithms leave no trace of concerted action – they learn to collude purely by trial and error, with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude. This poses a real challenge for competition policy. While more research is needed before considering policy moves, the antitrust agencies’ call for attention would appear to be well grounded.
Calvano, E, G Calzolari, V Denicolò and S Pastorello (2018a), “Artificial intelligence, algorithmic pricing and collusion,” CEPR Discussion Paper 13405.
Calvano, E, G Calzolari, V Denicolò and S Pastorello (2018b), “Algorithmic Pricing: What Implications for Competition Policy?”, forthcoming in Review of Industrial Organization.
Chen, L, A Mislove and C Wilson (2016), “An Empirical Analysis of Algorithmic Pricing on Amazon Marketplace”, in Proceedings of the 25th International Conference on World Wide Web, WWW’16, World Wide Web Conferences Steering Committee, pp. 1339-1349.
Ezrachi, A and M E Stucke (2015), “Artificial Intelligence and Collusion: When Computers Inhibit Competition”, Oxford Legal Studies Research Paper No. 18/2015, University of Tennessee Legal Studies Research Paper No. 267.
Harrington, J E, Jr (2018), “Developing Competition Law for Collusion by Autonomous Price-Setting Agents,” working paper.
Schwalbe, U (2018), “Algorithms, Machine Learning, and Collusion,” working paper.
Kühn, K U and S Tadelis (2018), “The Economics of Algorithmic Pricing: Is collusion really inevitable?”, working paper.
The only antitrust case involving algorithmic pricing was the successful challenge by US and British antitrust agencies of pricing software allegedly designed to coordinate the prices of posters sold by multiple online sellers. See Wired magazine, U.S. v. Topkins, 2015 and CMA case 2015 n. 50223.
See, for instance, the remarks of M. Vestager, European Commissioner, at the Bundeskartellamt 18th Conference on Competition, Berlin, 16 March 2017 (“Algorithms and Competition”), and the speech of M. Ohlhausen, Acting Chairman of the FTC, at the Concurrences Antitrust in the Financial Sector conference, New York, 23 May 2017 (“Should We Fear the Things That Go Beep in the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing”). The OECD sponsored a Roundtable on Algorithms and Collusion in June 2017, and in September 2017 the Canadian Competition Bureau released a discussion paper on the ability of algorithms to collude as a major issue for antitrust enforcement (“Big data and Innovation: Implications for Competition Policy in Canada”). More recently, the British CMA published a white paper on “Pricing Algorithms” on 8 October 2018. Lastly, the seventh session of the FTC Hearings on competition and consumer protection, 13-14 November 2018, centred on the “impact of algorithms and Artificial Intelligence.”
 These simulations typically use models of staggered prices that do not fit well with algorithmic pricing (Calvano et al. 2018a, 2018b).