Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions

Research output: Journal contributions › Journal articles › Research › peer-review

Standard

Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions. / Zantvoort, Kirsten; Nacke, Barbara; Görlich, Dennis et al.
In: npj Digital Medicine, Vol. 7, No. 1, 361, 12.2024.


Vancouver

Zantvoort K, Nacke B, Görlich D, Hornstein S, Jacobi C, Funk B. Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions. npj Digital Medicine. 2024 Dec;7(1):361. doi: 10.1038/s41746-024-01360-w

Bibtex

@article{b4dec5a903ca4151b5a36948a8217209,
title = "Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions",
abstract = "Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.",
keywords = "Business informatics, Informatics",
author = "Kirsten Zantvoort and Barbara Nacke and Dennis G{\"o}rlich and Silvan Hornstein and Corinna Jacobi and Burkhardt Funk",
note = "Publisher Copyright: {\textcopyright} The Author(s) 2024.",
year = "2024",
month = dec,
doi = "10.1038/s41746-024-01360-w",
language = "English",
volume = "7",
journal = "npj Digital Medicine",
issn = "2398-6352",
publisher = "Nature Publishing Group",
number = "1",
pages = "361",
}

RIS

TY - JOUR

T1 - Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions

AU - Zantvoort, Kirsten

AU - Nacke, Barbara

AU - Görlich, Dennis

AU - Hornstein, Silvan

AU - Jacobi, Corinna

AU - Funk, Burkhardt

N1 - Publisher Copyright: © The Author(s) 2024.

PY - 2024/12

Y1 - 2024/12

N2 - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.

AB - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.

KW - Business informatics

KW - Informatics

UR - http://www.scopus.com/inward/record.url?scp=85212424967&partnerID=8YFLogxK

U2 - 10.1038/s41746-024-01360-w

DO - 10.1038/s41746-024-01360-w

M3 - Journal articles

C2 - 39695276

AN - SCOPUS:85212424967

VL - 7

JO - npj Digital Medicine

JF - npj Digital Medicine

SN - 2398-6352

IS - 1

M1 - 361

ER -