Insights into the accuracy of social scientists’ forecasts of societal change

Research output: Journal contributions › Journal articles › Research › peer-review

Authors

  • Igor Grossmann
  • Amanda Rotella
  • Cendri A. Hutcherson
  • Konstantyn Sharpinskyi
  • Michael E.W. Varnum
  • Sebastian Achter
  • Mandeep K. Dhami
  • Xinqi Evie Guo
  • Mane Kara-Yakoubian
  • David R. Mandel
  • Louis Raes
  • Louis Tay
  • Aymeric Vie
  • Lisa Wagner
  • Matus Adamkovic
  • Arash Arami
  • Patrícia Arriaga
  • Kasun Bandara
  • Gabriel Baník
  • František Bartoš
  • Ernest Baskin
  • Christoph Bergmeir
  • Michał Białek
  • Caroline K. Børsting
  • Dillon T. Browne
  • Eugene M. Caruso
  • Rong Chen
  • Bin Tzong Chie
  • William J. Chopik
  • Robert N. Collins
  • Chin Wen Cong
  • Lucian G. Conway
  • Matthew Davis
  • Martin V. Day
  • Nathan A. Dhaliwal
  • Justin D. Durham
  • Martyna Dziekan
  • Christian T. Elbaek
  • Eric Shuman
  • Marharyta Fabrykant
  • Mustafa Firat
  • Geoffrey T. Fong
  • Jeremy A. Frimer
  • Jonathan M. Gallegos
  • Simon B. Goldberg
  • Anton Gollwitzer
  • Julia Goyal
  • Lorenz Graf-Vlachy

How well can social scientists predict societal change, and what processes underlie their predictions? To answer these questions, we ran two forecasting tournaments testing the accuracy of predictions of societal change in domains commonly studied in the social sciences: ideological preferences, political polarization, life satisfaction, sentiment on social media, and gender–career and racial bias. After we provided them with historical trend data on the relevant domain, social scientists submitted pre-registered monthly forecasts for a year (Tournament 1; N = 86 teams and 359 forecasts), with an opportunity to update forecasts on the basis of new data six months later (Tournament 2; N = 120 teams and 546 forecasts). Benchmarking forecasting accuracy revealed that social scientists’ forecasts were on average no more accurate than those of simple statistical models (historical means, random walks or linear regressions) or the aggregate forecasts of a sample from the general public (N = 802). However, scientists were more accurate if they had scientific expertise in a prediction domain, were interdisciplinary, used simpler models and based predictions on prior data.

Original language: English
Journal: Nature Human Behaviour
Volume: 7
Issue number: 4
Pages (from-to): 484–501
Number of pages: 18
DOIs
Publication status: Published - 04.2023
Externally published: Yes

Bibliographical note

This programme of research was supported by the Basic Research Program at the National Research University Higher School of Economics (M. Fabrykant), John Templeton Foundation grant no. 62260 (I.G. and P.E.T.), Kega 079UK-4/2021 (P.K.), Ministerio de Ciencia e Innovación España grants no. PID2019-111512RB-I00-HMDM and no. HDL-HS-280218 (A.A.), the National Center for Complementary & Integrative Health of the National Institutes of Health under award no. K23AT010879 (S.B.G.), National Science Foundation RAPID grant no. 2026854 (M.E.W.V.), PID2019-111512RB-I00 (M.S.), NPO Systemic Risk Institute grant no. LX22NPO5101 (I.R.), the Slovak Research and Development Agency under contract no. APVV-20-0319 (M.A.), Social Sciences and Humanities Research Council of Canada Insight grant no. 435-2014-0685 (I.G.), Social Sciences and Humanities Research Council of Canada Connection grant no. 611-2020-0190 (I.G.), and Swiss National Science Foundation grant no. PP00P1_170463 (O. Strijbis). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. We thank J. Axt for providing monthly estimates of Project Implicit data and the members of the Forecasting Collaborative who chose to remain anonymous for their contribution to the tournaments.

Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Nature Limited.