Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions
Research output: Journal contributions › Journal articles › Research › peer-review
Standard
In: npj Digital Medicine, Vol. 7, No. 1, 361, 12.2024.
RIS
TY - JOUR
T1 - Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions
AU - Zantvoort, Kirsten
AU - Nacke, Barbara
AU - Görlich, Dennis
AU - Hornstein, Silvan
AU - Jacobi, Corinna
AU - Funk, Burkhardt
N1 - Publisher Copyright: © The Author(s) 2024.
PY - 2024/12
Y1 - 2024/12
N2 - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.
AB - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.
KW - Business informatics
KW - Informatics
UR - http://www.scopus.com/inward/record.url?scp=85212424967&partnerID=8YFLogxK
U2 - 10.1038/s41746-024-01360-w
DO - 10.1038/s41746-024-01360-w
M3 - Journal articles
C2 - 39695276
AN - SCOPUS:85212424967
VL - 7
JO - npj Digital Medicine
JF - npj Digital Medicine
SN - 2398-6352
IS - 1
M1 - 361
ER -