Fine-tuning generative pre-trained transformers for clinical dialogue summarization

dc.contributor.author: Ronan, Isabel
dc.contributor.author: Tabirca, Sabin
dc.contributor.funder: Science Foundation Ireland
dc.date.accessioned: 2025-02-19T16:19:54Z
dc.date.available: 2025-02-19T16:19:54Z
dc.date.issued: 2025-01-17
dc.description.abstract: Automated clinical dialogue summarization can make health professionals' workflows more efficient. With the advent of large language models, machine learning can provide accurate and efficient summarization tools. Generative Pre-Trained Transformers (GPT) have shown great promise in this area. While larger GPT models, such as GPT-4, have been used, these models pose their own problems in terms of precision and expense. Fine-tuning smaller models can lead to more accurate results with less computational expense. In this paper, we fine-tune a GPT-3.5 model to summarize clinical dialogue, training with both default and manually set hyperparameters for comparison. We also compare our default model to past work using ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore, and find that our model outperforms GPT-4 across all measures. As our fine-tuning process is based on the smaller GPT-3.5 model, we show that fine-tuning leads to more accurate and less expensive results. Informal human review also indicates that the generated notes are of acceptable quality.
dc.description.status: Peer reviewed
dc.description.version: Accepted Version
dc.format.mimetype: application/pdf
dc.identifier.citation: Ronan, I. and Tabirca, S. (2025) 'Fine-tuning generative pre-trained transformers for clinical dialogue summarization', 2024 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 9-10 December 2024, pp. 1-6. https://doi.org/10.1109/FIT63703.2024.10838420
dc.identifier.doi: https://doi.org/10.1109/FIT63703.2024.10838420
dc.identifier.eissn: 2473-7569
dc.identifier.endpage: 6
dc.identifier.isbn: 979-8-3315-1050-3
dc.identifier.isbn: 979-8-3315-1051-0
dc.identifier.issn: 2334-3141
dc.identifier.startpage: 1
dc.identifier.uri: https://hdl.handle.net/10468/17083
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.ispartof: 2024 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 9-10 December 2024
dc.relation.project: info:eu-repo/grantAgreement/SFI/SFI Centres for Research Training Programme::Data and ICT Skills for the Future/18/CRT/6222/IE/SFI Centre for Research Training in Advanced Networks for Sustainable Societies/
dc.rights: © 2025, IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Machine translation
dc.subject: Transformers
dc.subject: Data augmentation
dc.subject: Synthetic data generation
dc.subject: Parameter tuning
dc.title: Fine-tuning generative pre-trained transformers for clinical dialogue summarization
dc.type: Conference item
dc.type: proceedings-article
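
The abstract above outlines two technical steps: fine-tuning GPT-3.5 with both API-default and manually set hyperparameters, and scoring the generated notes with ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore. The sketch below illustrates how such a pipeline can be wired up; it is not the authors' code. It assumes the openai (v1+), rouge-score, and bert-score Python packages, and the training file name, model identifier, and hyperparameter values are placeholders rather than the paper's settings.

```python
# Illustrative sketch only, not the authors' implementation.
# Assumes: openai>=1.0, rouge-score, and bert-score are installed,
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI
from rouge_score import rouge_scorer
from bert_score import score as bert_score

client = OpenAI()

# 1. Upload chat-formatted JSONL training data (dialogue -> note pairs).
#    "clinical_train.jsonl" is a placeholder file name.
train_file = client.files.create(
    file=open("clinical_train.jsonl", "rb"), purpose="fine-tune")

# 2a. Default run: the API chooses epochs, batch size, and LR multiplier.
default_job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo", training_file=train_file.id)

# 2b. Manual run: hyperparameters set explicitly for comparison
#     (these values are illustrative, not the paper's settings).
manual_job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file=train_file.id,
    hyperparameters={"n_epochs": 3, "learning_rate_multiplier": 2.0})

# 3. Score one generated note against its reference summary.
def evaluate(prediction: str, reference: str) -> dict:
    scorer = rouge_scorer.RougeScorer(
        ["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    results = {name: s.fmeasure
               for name, s in scorer.score(reference, prediction).items()}
    _, _, f1 = bert_score([prediction], [reference], lang="en")
    results["bertscore_f1"] = f1.item()
    return results
```

Reporting the F-measure for each ROUGE variant, as in this sketch, matches the convention used in most summarization comparisons, so the resulting numbers are directly comparable to figures quoted for GPT-4 and other baselines.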
Files

Original bundle
Name: fineTuningGenerativePreTrainedTransformersForClinicalDialogueSummarization.pdf
Size: 530.06 KB
Format: Adobe Portable Document Format
Description: Accepted Version
License bundle
Name: license.txt
Size: 2.71 KB
Format: Item-specific license agreed upon to submission