Machine learning is advancing rapidly, accompanied by both grand promises and predictions of doom. Its everyday applications are already found in our smartphones and our homes and, soon, in self-driving cars. But who is doing the learning?
Self-driving cars have become a test case for the efficacy of machine learning, but this quintessentially 'smart' technology is not born smart. The algorithms that control these vehicles' movements learn as the technology develops. Self-driving cars therefore represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance: society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking who is learning, what they are learning and how they are learning. The trajectories and rhetorics of machine learning in transport pose a substantial governance challenge. 'Self-driving' or 'autonomous' cars are misnamed: like other technologies, they are shaped by assumptions about social needs, solvable problems and economic opportunities. Governing these technologies in the public interest means improving social learning by constructively engaging with the contingencies of machine learning. Popular debate about machine learning focuses on what is being learnt, but the politics of these technologies are likely to revolve around different questions: who is learning, and how? Science and technology studies (STS) has the potential to inject social learning into what is currently a narrow debate about machine learning.