Conference Object

Competition for novelty in an information-sampling game

Author(s) / Creator(s)

Tiokhin, Leonid
Derex, Maxime

Abstract / Description

*This abstract is adapted from a registered report accepted in principle at Royal Society Open Science. Many factors plausibly affect the reliability of science (Munafò et al., 2017). At the heart of these are incentive structures: by determining the professional payoffs for various types of research, incentives shape scientists’ research decisions (Nosek, Spies, & Motyl, 2012). One longstanding incentive in academic science is rewarding priority of discovery. Over 50 years ago, sociologist of science Robert Merton noted how this norm might benefit science: rewarding priority can incentivize scientists to invest effort to quickly solve important problems and share their discoveries with the scientific community (Merton, 1957). Nonetheless, scholars have also had longstanding concerns about the repercussions of this norm. Charles Darwin thought that rewarding priority by naming species after their first describers incentivized biologists to produce “hasty and careless work” by “miserably describing a species in two or three lines” (Merton, 1957, p. 644). More recently, concerns over the consequences of rewarding priority have led the academic journals eLife and PLOS Biology to offer “scoop protection” (i.e., allowing researchers to publish findings identical to those already published in the same journal) in an attempt to reduce the disproportionate payoffs to scientists who publish first (Marder, 2017; The PLOS Biology Staff Editors, 2018; Yong, 2018). In the editorial justifying their new policy, the PLOS Biology staff editors write: “…many of us know researchers who have rushed a study into publication before doing all the necessary controls because they were afraid of being scooped. Of course, healthy competition can be good for science, but the pressure to be first is often deleterious…” (The PLOS Biology Staff Editors, 2018).
Despite these reasonable concerns, there is little empirical evidence for the hypothesis that competitive pressure to publish causes individuals to produce lower-quality research. In focus-group discussions with mid- and early-career researchers, scientists acknowledge that competition incentivizes them to conduct careless work (Anderson, Ronning, De Vries, & Martinson, 2007), but laboratory experiments investigating competition more broadly demonstrate that competition also promotes individual effort (Baer, Vadera, Leenders, & Oldham, 2013; Balietti, Goldstone, & Helbing, 2016; Dechenaux, Kovenock, & Sheremeta, 2015; Dohmen & Falk, 2011; Gneezy, Niederle, & Rustichini, 2003; Niederle & Vesterlund, 2007). As a consequence, it is unclear how competition in general, and competition for priority in particular, affects research quality. On the one hand, competition might cause researchers to make dubious claims based on inadequate data. On the other, it might encourage researchers to gather data more efficiently. Given the difficulty of experimentally manipulating incentives in real-world scientific practice, we develop a simple game that mimics aspects of scientific investigation. In our experiment, individuals gather data in order to guess true states of the world and face a tradeoff between guessing quickly and increasing accuracy by acquiring more information. To test whether competition affects accuracy, we compare a treatment in which individuals are rewarded for each correct guess to one in which individuals face the possibility of being “scooped” by a competitor. In a second set of conditions, we make information acquisition contingent on solving arithmetic problems to test whether competition increases individual effort (i.e., arithmetic problem-solving speed). We find that competition causes individuals to make guesses using less information, thereby reducing their accuracy. We find no evidence that competition increases individual effort.
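The speed-accuracy tradeoff at the core of the game can be illustrated with a minimal simulation. This is a hypothetical sketch, not the authors' experimental software: the binary "majority blue" state, the 60% sampling probability, and the function names are all assumptions made for illustration.

```python
import random

def play_round(n_samples, p_true=0.6, rng=random):
    """One round of a hypothetical information-sampling game: the true
    state is 'majority blue' (each sample is blue with probability
    p_true > 0.5); the player guesses the majority color after drawing
    n_samples samples. Returns True if the guess is correct."""
    blue = sum(rng.random() < p_true for _ in range(n_samples))
    # Guess blue iff blue samples are the majority; since the true
    # state here is always 'majority blue', that guess is the correct one.
    return blue > n_samples / 2

def accuracy(n_samples, rounds=10_000, seed=0):
    """Estimate guess accuracy for a fixed sample size by Monte Carlo."""
    rng = random.Random(seed)
    return sum(play_round(n_samples, rng=rng) for _ in range(rounds)) / rounds

# Accuracy rises with the amount of information acquired; under
# competition, though, each extra sample also prolongs exposure to
# being scooped by a faster competitor.
for n in (1, 5, 25):
    print(f"{n:>2} samples: accuracy ≈ {accuracy(n):.3f}")
```

Odd sample sizes are used to avoid ties in the majority rule; the point of the sketch is only that expected accuracy grows monotonically with information acquired, which is the quantity competition pushes players to cut short.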
Our experiment provides proof of concept that rewarding priority of publication can incentivize individuals to acquire less information, producing lower-quality research as a consequence. More generally, it provides one example of the type of empirical work that is necessary to move beyond verbal arguments about the effects of incentive structures on scientists’ behavior.

References:
Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). The perverse effects of competition on scientists’ work and relationships. Science and Engineering Ethics, 13(4), 437–461.
Baer, M., Vadera, A. K., Leenders, R. T., & Oldham, G. R. (2013). Intergroup competition as a double-edged sword: How sex composition regulates the effects of competition on group creativity. Organization Science, 25(3), 892–908.
Balietti, S., Goldstone, R. L., & Helbing, D. (2016). Peer review and competition in the Art Exhibition Game. Proceedings of the National Academy of Sciences, 201603723.
Dechenaux, E., Kovenock, D., & Sheremeta, R. M. (2015). A survey of experimental research on contests, all-pay auctions and tournaments. Experimental Economics, 18(4), 609–669. https://doi.org/10.1007/s10683-014-9421-0
Dohmen, T., & Falk, A. (2011). Performance pay and multidimensional sorting: Productivity, preferences, and gender. American Economic Review, 101(2), 556–590.
Gneezy, U., Niederle, M., & Rustichini, A. (2003). Performance in competitive environments: Gender differences. The Quarterly Journal of Economics, 118(3), 1049–1074.
Marder, E. (2017). Scientific publishing: Beyond scoops to best practices. eLife, 6, e30076. https://doi.org/10.7554/eLife.30076
Merton, R. K. (1957). Priorities in scientific discovery: A chapter in the sociology of science. American Sociological Review, 22(6), 635–659.
Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., du Sert, N. P., … Ioannidis, J. P. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. https://doi.org/10.1038/s41562-016-0021
Niederle, M., & Vesterlund, L. (2007). Do women shy away from competition? Do men compete too much? The Quarterly Journal of Economics, 122(3), 1067–1101.
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631.
The PLOS Biology Staff Editors. (2018). The importance of being second. PLOS Biology.
Yong, E. (2018, February 1). In science, there should be a prize for second place. The Atlantic. Retrieved from https://www.theatlantic.com/science/archive/2018/02/in-science-there-should-be-a-prize-for-second-place/552131/

Persistent Identifier

https://hdl.handle.net/20.500.12034/2034
https://doi.org/10.23668/psycharchives.2402

Date of first publication

2019-03-14

Is part of

Open Science 2019, Trier, Germany

Publisher

ZPID (Leibniz Institute for Psychology Information)

Citation

Tiokhin, L., & Derex, M. (2019, March 14). Competition for novelty in an information-sampling game. ZPID (Leibniz Institute for Psychology Information). https://doi.org/10.23668/psycharchives.2402
  • PsychArchives acquisition timestamp
    2019-04-03T13:17:35Z
  • Made available on
    2019-04-03T13:17:35Z
  • Persistent Identifier
    https://hdl.handle.net/20.500.12034/2034
  • Persistent Identifier
    https://doi.org/10.23668/psycharchives.2402
  • Language of content
    eng
  • Dewey Decimal Classification number(s)
    150
  • DRO type
    conferenceObject
  • Visible tag(s)
    ZPID Conferences and Workshops