Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions

Research output: Journal contributions › Journal articles › Research › peer-review

Standard

Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions. / Zantvoort, Kirsten; Nacke, Barbara; Görlich, Dennis et al.
In: npj Digital Medicine, Vol. 7, No. 1, 361, 12.2024.

Vancouver

Zantvoort K, Nacke B, Görlich D, Hornstein S, Jacobi C, Funk B. Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions. npj Digital Medicine. 2024 Dec;7(1):361. doi: 10.1038/s41746-024-01360-w

Bibtex

@article{b4dec5a903ca4151b5a36948a8217209,
title = "Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions",
abstract = "Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.",
keywords = "Business informatics, Informatics",
author = "Kirsten Zantvoort and Barbara Nacke and Dennis G{\"o}rlich and Silvan Hornstein and Corinna Jacobi and Burkhardt Funk",
note = "Publisher Copyright: {\textcopyright} The Author(s) 2024.",
year = "2024",
month = dec,
doi = "10.1038/s41746-024-01360-w",
language = "English",
volume = "7",
journal = "npj Digital Medicine",
issn = "2398-6352",
publisher = "Nature Publishing Group",
number = "1",
}

RIS

TY - JOUR

T1 - Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions

AU - Zantvoort, Kirsten

AU - Nacke, Barbara

AU - Görlich, Dennis

AU - Hornstein, Silvan

AU - Jacobi, Corinna

AU - Funk, Burkhardt

N1 - Publisher Copyright: © The Author(s) 2024.

PY - 2024/12

Y1 - 2024/12

N2 - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.

AB - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.

KW - Business informatics

KW - Informatics

UR - http://www.scopus.com/inward/record.url?scp=85212424967&partnerID=8YFLogxK

U2 - 10.1038/s41746-024-01360-w

DO - 10.1038/s41746-024-01360-w

M3 - Journal articles

C2 - 39695276

AN - SCOPUS:85212424967

VL - 7

JO - npj Digital Medicine

JF - npj Digital Medicine

SN - 2398-6352

IS - 1

M1 - 361

ER -
