Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions

Research output: Journal contributions › Journal articles › Research › peer-review

Standard

Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions. / Zantvoort, Kirsten; Nacke, Barbara; Görlich, Dennis et al.
In: npj Digital Medicine, Vol. 7, No. 1, 361, 12.2024.

Vancouver

Zantvoort K, Nacke B, Görlich D, Hornstein S, Jacobi C, Funk B. Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions. npj Digital Medicine. 2024 Dec;7(1):361. doi: 10.1038/s41746-024-01360-w

Bibtex

@article{b4dec5a903ca4151b5a36948a8217209,
title = "Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions",
abstract = "Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.",
keywords = "Business informatics, Informatics",
author = "Kirsten Zantvoort and Barbara Nacke and Dennis G{\"o}rlich and Silvan Hornstein and Corinna Jacobi and Burkhardt Funk",
note = "Publisher Copyright: {\textcopyright} The Author(s) 2024.",
year = "2024",
month = dec,
doi = "10.1038/s41746-024-01360-w",
language = "English",
volume = "7",
journal = "npj Digital Medicine",
issn = "2398-6352",
publisher = "Nature Publishing Group",
number = "1",

}

RIS

TY  - JOUR
T1  - Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions
AU  - Zantvoort, Kirsten
AU  - Nacke, Barbara
AU  - Görlich, Dennis
AU  - Hornstein, Silvan
AU  - Jacobi, Corinna
AU  - Funk, Burkhardt
N1  - Publisher Copyright: © The Author(s) 2024.
PY  - 2024/12
Y1  - 2024/12
N2  - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.
AB  - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.
KW  - Business informatics
KW  - Informatics
UR  - http://www.scopus.com/inward/record.url?scp=85212424967&partnerID=8YFLogxK
U2  - 10.1038/s41746-024-01360-w
DO  - 10.1038/s41746-024-01360-w
M3  - Journal articles
C2  - 39695276
AN  - SCOPUS:85212424967
VL  - 7
JO  - npj Digital Medicine
JF  - npj Digital Medicine
SN  - 2398-6352
IS  - 1
M1  - 361
ER  -
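
For readers who want to see how the learning-curve analysis described in the abstract could be set up in practice, the following is a minimal, hypothetical Python sketch using scikit-learn. It is not the authors' code: the feature matrix, outcome labels, subset sizes, and model choices (Naive Bayes and a small neural network, echoing the range named in the abstract) are illustrative assumptions.

# Hypothetical sketch (not the authors' code): estimate a learning curve for a
# dropout-prediction task by training on growing subsets of users and comparing
# in-sample with holdout AUC. X (features) and y (binary dropout labels) are
# assumed to be NumPy arrays supplied by the reader.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

def learning_curve(X, y, sizes=(100, 300, 500, 750, 1000, 1500), seed=42):
    """Return in-sample and holdout AUC for several training-set sizes."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    rng = np.random.default_rng(seed)
    rows = []
    for n in sizes:
        # Draw a random subsample of n users from the training split.
        idx = rng.choice(len(X_train), size=min(n, len(X_train)), replace=False)
        for name, model in [("naive_bayes", GaussianNB()),
                            ("neural_net", MLPClassifier(max_iter=500,
                                                         random_state=seed))]:
            model.fit(X_train[idx], y_train[idx])
            rows.append({
                "n": n,
                "model": name,
                # In-sample AUC tends to be optimistic for small n (overfitting).
                "auc_in_sample": roc_auc_score(
                    y_train[idx], model.predict_proba(X_train[idx])[:, 1]),
                # Holdout AUC estimates generalization to unseen users.
                "auc_holdout": roc_auc_score(
                    y_test, model.predict_proba(X_test)[:, 1]),
            })
    return rows

Plotting auc_in_sample and auc_holdout against n yields the kind of learning curve the abstract refers to: the gap between the two indicates overfitting at small sample sizes and narrows as the training set grows.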
