An experimental machine learning study investigating the decision-making process of students and qualified radiographers when interpreting radiographic images

dc.contributor.authorRainey, Clareen
dc.contributor.authorVillikudathil, A. T.en
dc.contributor.authorMcConnell, J.en
dc.contributor.authorHughes, C.en
dc.contributor.authorBond, R.en
dc.contributor.authorMcFadden, S.en
dc.date.accessioned2025-01-30T10:03:57Z
dc.date.available2025-01-30T10:03:57Z
dc.date.issued2023en
dc.description.abstractAI is becoming more prevalent in healthcare and is predicted to be further integrated into workflows to ease the pressure on an already stretched service. The National Health Service in the UK has prioritised AI and digital health as part of its Long Term Plan. Few studies have examined the human interaction with such systems in healthcare, despite reports of biases arising from the use of AI in other technologically advanced fields, such as finance and aviation. An understanding of how certain user characteristics may affect how radiographers engage with AI systems in the clinical setting is needed to mitigate problems before they arise. The aim of this study is to determine correlations between skills, confidence in AI and perceived knowledge amongst student and qualified radiographers in the UK healthcare system. A machine learning based AI model was built to predict whether the interpreter was a student (n = 67) or a qualified radiographer (n = 39), using important variables identified by the Boruta feature selection technique. A survey, which required the participant to interpret a series of plain radiographic examinations with and without AI assistance, was created on the Qualtrics survey platform and promoted via social media (Twitter/LinkedIn), thereby adopting convenience, snowball sampling. This survey was open to all UK radiographers, including students and retired radiographers. Pearson's correlation analysis revealed that males who were proficient in their profession were more likely than females to trust AI. Trust in AI was negatively correlated with age and with level of experience. A machine learning model was built; the best-performing model predicted whether the image interpreter was a qualified radiographer with an area under the curve of 0.93 and a prediction accuracy of 93%. Further testing in prospective validation cohorts using a larger sample size is required to determine the clinical utility of the proposed machine learning model. © 2023 Rainey et al.en
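The abstract names Boruta feature selection and reports the classifier's AUC and accuracy, but does not specify the software used. The following is a minimal Python sketch, assuming scikit-learn and the BorutaPy package, of the kind of pipeline described: Boruta selection followed by a classifier evaluated by AUC and accuracy. The data, variable names and model settings are hypothetical stand-ins, not the study's own.

# Minimal sketch (hypothetical data and names), assuming scikit-learn and BorutaPy:
# Boruta feature selection followed by a classifier evaluated by AUC and accuracy.
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in survey-derived feature matrix X and binary label y
# (0 = student, 1 = qualified radiographer); not the study's data.
rng = np.random.default_rng(42)
X = rng.normal(size=(106, 20))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=106) > 0).astype(int)

# Boruta flags "important" variables by comparing each feature's importance
# against randomly permuted shadow features.
boruta = BorutaPy(RandomForestClassifier(random_state=42),
                  n_estimators="auto", random_state=42)
boruta.fit(X, y)
X_sel = boruta.transform(X)  # keep only the confirmed features

# Train a classifier on the selected features and report AUC and accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                           stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"AUC = {auc:.2f}, accuracy = {acc:.0%}")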
dc.description.statusPeer revieweden
dc.description.versionPublished Versionen
dc.format.mimetypeapplication/pdfen
dc.identifier.articleide0000229en
dc.identifier.citationRainey, C., Villikudathil, A. T., McConnell, J., Hughes, C., Bond, R. and McFadden, S. (2023) 'An experimental machine learning study investigating the decision-making process of students and qualified radiographers when interpreting radiographic images', PLOS Digital Health, 2(10), p.e0000229. https://doi.org/10.1371/journal.pdig.0000229en
dc.identifier.doihttps://doi.org/10.1371/journal.pdig.0000229en
dc.identifier.issn2767-3170en
dc.identifier.issued10
dc.identifier.journaltitlePLOS Digital Healthen
dc.identifier.urihttps://hdl.handle.net/10468/16931
dc.identifier.volume2
dc.language.isoenen
dc.publisherPublic Library of Scienceen
dc.rights© 2023, the Authors. This work is made available under the CC BY license (https://creativecommons.org/licenses/by/4.0/)en
dc.rights.urihttps://creativecommons.org/licenses/by/4.0/
dc.subjectArtificial Intelligenceen
dc.subjectRadiographyen
dc.subjectRadiologyen
dc.titleAn experimental machine learning study investigating the decision-making process of students and qualified radiographers when interpreting radiographic imagesen
dc.typeArticle (peer reviewed)en
Files
Original bundle
Name: journal.pdig.0000229.pdf
Size: 1.03 MB
Format: Adobe Portable Document Format
Description: Published Version