Political scientists use experiments to test the predictions of game-theoretic models. In a typical experiment, each subject makes choices that determine her own earnings and the earnings of other subjects, with cash payments corresponding to the utility payoffs of a theoretical game. But social preferences distort the correspondence between a subject's cash earnings and her subjective utility, and because social preferences vary, anonymously matched subjects cannot know their opponents' preferences over outcomes, turning many laboratory tasks into games of incomplete information. We reduce the distortion of social preferences by pitting subjects against algorithmic agents ("Nashbots"). Across 11 experimental tasks, subjects facing human opponents played rationally only 36% of the time, while those facing algorithmic agents did so 60% of the time. We conclude that experimentalists have underestimated the economic rationality of laboratory subjects by designing tasks that are poor analogies to the games they purport to test.
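To illustrate the design idea, consider a minimal sketch of an algorithmic opponent. The abstract does not specify the 11 tasks, so the ultimatum game here is a hypothetical example, and the function names (`nashbot_responder`, `payoffs`) are our own. The point is that an algorithmic agent's strategy is fixed and payoff-maximizing, so a subject faces no uncertainty about the opponent's social preferences:

```python
# Hypothetical illustration: a payoff-maximizing responder in an
# ultimatum game. The actual tasks in the paper are not specified here.

def nashbot_responder(offer: int) -> bool:
    """Accept any strictly positive offer: rejecting yields 0,
    so acceptance is the responder's best response."""
    return offer > 0

def payoffs(pie: int, offer: int) -> tuple:
    """Split the pie given the proposer's offer and the bot's
    deterministic response; rejection destroys the pie."""
    if nashbot_responder(offer):
        return (pie - offer, offer)  # (proposer, responder)
    return (0, 0)

# A subject who knows the bot's rule can make the minimal rational
# offer without fearing a spiteful rejection.
print(payoffs(pie=10, offer=1))  # → (9, 1)
```

Against a human responder, the same minimal offer risks rejection driven by fairness concerns the subject cannot observe; against the bot, rational play is unambiguous.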
Daniel Enemark et al., Nashbots: How Political Scientists Have Underestimated Human Rationality, and How to Fix It (November 23, 2016)
Library of Congress Subject Headings
Decision making, Choice (Psychology), Game theory, Welfare economics