Reporting and Analysing the Environmental Impact of Language Models on the Example of Commonsense Question Answering with External Knowledge

Research output: Contributions to collected editions/works › Article in conference proceedings › Research

Standard

Reporting and Analysing the Environmental Impact of Language Models on the Example of Commonsense Question Answering with External Knowledge. / Usmanova, Aida; Huang, Junbo; Banerjee, Debayan et al.
Sustainable AI Conference 2023: Sustainable AI Across Borders: Conference Proceedings. Vol. abs/2408.01453. 2024.

Harvard

Usmanova, A, Huang, J, Banerjee, D & Usbeck, R 2024, Reporting and Analysing the Environmental Impact of Language Models on the Example of Commonsense Question Answering with External Knowledge. in Sustainable AI Conference 2023: Sustainable AI Across Borders: Conference Proceedings. vol. abs/2408.01453, 2. Sustainable AI Conference 2023, Bonn, North Rhine-Westphalia, Germany, 30.05.23. https://doi.org/10.48550/ARXIV.2408.01453

APA

Usmanova, A., Huang, J., Banerjee, D., & Usbeck, R. (2024). Reporting and Analysing the Environmental Impact of Language Models on the Example of Commonsense Question Answering with External Knowledge. In Sustainable AI Conference 2023: Sustainable AI Across Borders: Conference Proceedings (Vol. abs/2408.01453). https://doi.org/10.48550/ARXIV.2408.01453

Vancouver

Usmanova A, Huang J, Banerjee D, Usbeck R. Reporting and Analysing the Environmental Impact of Language Models on the Example of Commonsense Question Answering with External Knowledge. In Sustainable AI Conference 2023: Sustainable AI Across Borders: Conference Proceedings. Vol. abs/2408.01453. 2024. doi: 10.48550/ARXIV.2408.01453

Bibtex

@inbook{4f4ea465d3704eceba687547285bf745,
title = "Reporting and Analysing the Environmental Impact of Language Models on the Example of Commonsense Question Answering with External Knowledge",
abstract = "Human-produced emissions are growing at an alarming rate, causing already observable changes in the climate and environment in general. Each year global carbon dioxide emissions hit a new record, and it is reported that 0.5% of total US greenhouse gas emissions are attributed to data centres as of 2021. The release of ChatGPT in late 2022 sparked social interest in Large Language Models (LLMs), the new generation of Language Models with a large number of parameters and trained on massive amounts of data. Currently, numerous companies are releasing products featuring various LLMs, with many more models in development and awaiting release. Deep Learning research is a competitive field, with only models that reach top performance attracting attention and being utilized. Hence, achieving better accuracy and results is often the first priority, while the model's efficiency and the environmental impact of the study are neglected. However, LLMs demand substantial computational resources and are very costly to train, both financially and environmentally. It becomes essential to raise awareness and promote conscious decisions about algorithmic and hardware choices. Providing information on training time, the approximate carbon dioxide emissions and power consumption would assist future studies in making necessary adjustments and determining the compatibility of available computational resources with model requirements. In this study, we infused the T5 LLM with external knowledge and fine-tuned the model for a Question-Answering task. Furthermore, we calculated and reported the approximate environmental impact for both steps. The findings demonstrate that smaller models may not always be sustainable options, and increased training does not always imply better performance. The optimal outcome is achieved by carefully considering both performance and efficiency factors.",
keywords = "Informatics",
author = "Aida Usmanova and Junbo Huang and Debayan Banerjee and Ricardo Usbeck",
year = "2024",
doi = "10.48550/ARXIV.2408.01453",
language = "English",
volume = "abs/2408.01453",
booktitle = "Sustainable AI Conference 2023: Sustainable AI Across Borders",
note = "2. Sustainable AI Conference 2023 : Sustainable AI Across Borders ; Conference date: 30-05-2023 Through 01-06-2023",
url = "https://www.uni-bonn.de/de/veranstaltungen/sustainable-ai-conference-2023-sustainable-ai-across-borders",
}

RIS

TY - CHAP

T1 - Reporting and Analysing the Environmental Impact of Language Models on the Example of Commonsense Question Answering with External Knowledge

AU - Usmanova, Aida

AU - Huang, Junbo

AU - Banerjee, Debayan

AU - Usbeck, Ricardo

N1 - Conference code: 2

PY - 2024

Y1 - 2024

N2 - Human-produced emissions are growing at an alarming rate, causing already observable changes in the climate and environment in general. Each year global carbon dioxide emissions hit a new record, and it is reported that 0.5% of total US greenhouse gas emissions are attributed to data centres as of 2021. The release of ChatGPT in late 2022 sparked social interest in Large Language Models (LLMs), the new generation of Language Models with a large number of parameters and trained on massive amounts of data. Currently, numerous companies are releasing products featuring various LLMs, with many more models in development and awaiting release. Deep Learning research is a competitive field, with only models that reach top performance attracting attention and being utilized. Hence, achieving better accuracy and results is often the first priority, while the model's efficiency and the environmental impact of the study are neglected. However, LLMs demand substantial computational resources and are very costly to train, both financially and environmentally. It becomes essential to raise awareness and promote conscious decisions about algorithmic and hardware choices. Providing information on training time, the approximate carbon dioxide emissions and power consumption would assist future studies in making necessary adjustments and determining the compatibility of available computational resources with model requirements. In this study, we infused the T5 LLM with external knowledge and fine-tuned the model for a Question-Answering task. Furthermore, we calculated and reported the approximate environmental impact for both steps. The findings demonstrate that smaller models may not always be sustainable options, and increased training does not always imply better performance. The optimal outcome is achieved by carefully considering both performance and efficiency factors.

AB - Human-produced emissions are growing at an alarming rate, causing already observable changes in the climate and environment in general. Each year global carbon dioxide emissions hit a new record, and it is reported that 0.5% of total US greenhouse gas emissions are attributed to data centres as of 2021. The release of ChatGPT in late 2022 sparked social interest in Large Language Models (LLMs), the new generation of Language Models with a large number of parameters and trained on massive amounts of data. Currently, numerous companies are releasing products featuring various LLMs, with many more models in development and awaiting release. Deep Learning research is a competitive field, with only models that reach top performance attracting attention and being utilized. Hence, achieving better accuracy and results is often the first priority, while the model's efficiency and the environmental impact of the study are neglected. However, LLMs demand substantial computational resources and are very costly to train, both financially and environmentally. It becomes essential to raise awareness and promote conscious decisions about algorithmic and hardware choices. Providing information on training time, the approximate carbon dioxide emissions and power consumption would assist future studies in making necessary adjustments and determining the compatibility of available computational resources with model requirements. In this study, we infused the T5 LLM with external knowledge and fine-tuned the model for a Question-Answering task. Furthermore, we calculated and reported the approximate environmental impact for both steps. The findings demonstrate that smaller models may not always be sustainable options, and increased training does not always imply better performance. The optimal outcome is achieved by carefully considering both performance and efficiency factors.

KW - Informatics

UR - https://dblp.org/db/journals/corr/index.html

U2 - 10.48550/ARXIV.2408.01453

DO - 10.48550/ARXIV.2408.01453

M3 - Article in conference proceedings

VL - abs/2408.01453

BT - Sustainable AI Conference 2023: Sustainable AI Across Borders

T2 - 2. Sustainable AI Conference 2023

Y2 - 30 May 2023 through 1 June 2023

ER -
