Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions

Publication: Contributions to journals › Journal articles › Research › peer-reviewed

Standard

Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions. / Zantvoort, Kirsten; Nacke, Barbara; Görlich, Dennis et al.
In: npj Digital Medicine, Vol. 7, No. 1, 361, 12.2024.


Vancouver

Zantvoort K, Nacke B, Görlich D, Hornstein S, Jacobi C, Funk B. Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions. npj Digital Medicine. 2024 Dec;7(1):361. doi: 10.1038/s41746-024-01360-w

Bibtex

@article{b4dec5a903ca4151b5a36948a8217209,
title = "Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions",
abstract = "Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.",
keywords = "Business informatics, Informatics",
author = "Kirsten Zantvoort and Barbara Nacke and Dennis G{\"o}rlich and Silvan Hornstein and Corinna Jacobi and Burkhardt Funk",
note = "Publisher Copyright: {\textcopyright} The Author(s) 2024.",
year = "2024",
month = dec,
doi = "10.1038/s41746-024-01360-w",
language = "English",
volume = "7",
journal = "npj Digital Medicine",
issn = "2398-6352",
publisher = "Nature Publishing Group",
number = "1",

}
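
The abstract above describes estimating domain-specific learning curves: the dataset is repeatedly subsampled at increasing sizes (N = 100–3654), models of varying complexity (from Naive Bayes to neural networks) are fitted, and in-sample performance is compared against holdout test performance. The following Python sketch illustrates that general procedure on synthetic data with scikit-learn; the data generation, model choices, subsampling steps, and AUC metric are illustrative assumptions, not the authors' actual pipeline.

# Minimal learning-curve sketch (assumed tooling: scikit-learn; synthetic data
# as a stand-in for intervention usage data with N = 3654 users, up to 129 features).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic binary dropout-style outcome; real features and labels would come
# from the study data, which is not reproduced here.
X, y = make_classification(n_samples=3654, n_features=129, n_informative=20,
                           random_state=42)
X_train_full, X_test, y_train_full, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Two models spanning the complexity range mentioned in the abstract.
models = {
    "NaiveBayes": GaussianNB(),
    "NeuralNet": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                               random_state=42),
}

# Subsample the training data at increasing sizes and compare
# in-sample vs. holdout performance to trace the learning curve.
for n in [100, 300, 500, 750, 1500, len(X_train_full)]:
    X_sub, y_sub = X_train_full[:n], y_train_full[:n]
    for name, model in models.items():
        model.fit(X_sub, y_sub)
        auc_in = roc_auc_score(y_sub, model.predict_proba(X_sub)[:, 1])
        auc_out = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"N={n:4d}  {name:10s}  in-sample AUC={auc_in:.2f}  "
              f"holdout AUC={auc_out:.2f}")

A gap between in-sample and holdout AUC that narrows as N grows is the overfitting pattern the paper reports for small datasets; convergence of the holdout curve indicates the dataset size at which additional samples add little predictive value.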

RIS

TY - JOUR
T1 - Estimation of minimal data sets sizes for machine learning predictions in digital mental health interventions
AU - Zantvoort, Kirsten
AU - Nacke, Barbara
AU - Görlich, Dennis
AU - Hornstein, Silvan
AU - Jacobi, Corinna
AU - Funk, Burkhardt
N1 - Publisher Copyright: © The Author(s) 2024.
PY - 2024/12
Y1 - 2024/12
N2 - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.
AB - Artificial intelligence promises to revolutionize mental health care, but small dataset sizes and lack of robust methods raise concerns about result generalizability. To provide insights on minimal necessary data set sizes, we explore domain-specific learning curves for digital intervention dropout predictions based on 3654 users from a single study (ISRCTN13716228, 26/02/2016). Prediction performance is analyzed based on dataset size (N = 100–3654), feature groups (F = 2–129), and algorithm choice (from Naive Bayes to Neural Networks). The results substantiate the concern that small datasets (N ≤ 300) overestimate predictive power. For uninformative feature groups, in-sample prediction performance was negatively correlated with dataset size. Sophisticated models overfitted in small datasets but maximized holdout test results in larger datasets. While N = 500 mitigated overfitting, performance did not converge until N = 750–1500. Consequently, we propose minimum dataset sizes of N = 500–1000. As such, this study offers an empirical reference for researchers designing or interpreting AI studies on Digital Mental Health Intervention data.
KW - Business informatics
KW - Informatics
UR - http://www.scopus.com/inward/record.url?scp=85212424967&partnerID=8YFLogxK
U2 - 10.1038/s41746-024-01360-w
DO - 10.1038/s41746-024-01360-w
M3 - Journal articles
C2 - 39695276
AN - SCOPUS:85212424967
VL - 7
JO - npj Digital Medicine
JF - npj Digital Medicine
SN - 2398-6352
IS - 1
M1 - 361
ER -

DOI: https://doi.org/10.1038/s41746-024-01360-w
