Please use this identifier to cite or link to this item: http://dx.doi.org/10.23668/psycharchives.2075
Title: In defence of machine learning: Debunking the myths of artificial intelligence
Authors: de Saint Laurent, Constance
Issue Date: 30-Nov-2018
Publisher: PsychOpen
Abstract: There has been much hype, over the past few years, about the recent progress of artificial intelligence (AI), especially through machine learning. If one is to believe many of the headlines that have proliferated in the media, as well as in an increasing number of scientific publications, it would seem that AI is now capable of creating and learning in ways that are starting to resemble what humans can do, and that we should start to hope – or fear – that the creation of fully cognisant machines might be something we will witness in our lifetime. However, many of these beliefs are based on deep misconceptions about what AI can do, and how. In this paper, I start with a brief introduction to the principles of AI, machine learning, and neural networks, intended primarily for psychologists and social scientists, who often have much to contribute to the debates surrounding AI but lack a clear understanding of what it can currently do and how it works. I then debunk four common myths associated with AI: 1) it can create, 2) it can learn, 3) it is neutral and objective, and 4) it can solve ethically and/or culturally sensitive problems. In a third and last section, I argue that these misconceptions pose four main dangers: 1) avoiding debate, 2) naturalising our biases, 3) deresponsibilising creators and users, and 4) missing out on some of the potential uses of machine learning. I conclude on the potential benefits of using machine learning in research, and thus on the need to defend machine learning without romanticising what it can actually do.
URI: https://hdl.handle.net/20.500.12034/1709
http://dx.doi.org/10.23668/psycharchives.2075
Appears in Collections:Article

Files in This Item:
File: ejop.v14i4.1823.pdf (340,05 kB, Adobe PDF)


This item is licensed under a Creative Commons License.