Medicine - Conference Items
Recent Submissions
Item: Trial of a digital training tool to support chest image interpretation in radiography (European Society of Radiology, 2018)
McLaughlin, Laura; McFadden, Sonyia L.; McConnell, Jonathan; Bond, Raymond; Woznitza, N. H.; Cairns, A.; Elsayed, A.; Hughes, Ciara

Item: An evaluation of a training tool and study day in chest image interpretation (ROC Events, 2022)
McLaughlin, Laura; Johnstone, Graham; Nesbitt, Linda; McFadden, Sonyia L.; Hughes, Ciara; Bond, Raymond; McConnell, Jonathan
Background: Using expert consensus, the research team developed a digital tool that proved useful when teaching radiographers how to interpret chest images. The training tool included (A) a search strategy training tool and (B) an educational tool to communicate the search strategies using eye tracking technology. This training tool has the potential to improve interpretation skills for other healthcare professionals.
Methods: To investigate this, 31 healthcare professionals (nurses and physiotherapists) were recruited and randomised either to receive access to the training tool (intervention group) or not (control group) for a period of 4-6 weeks. Participants were asked to interpret different sets of 20 chest images before and after the intervention period. A study day was then provided to all participants, after which they were again asked to interpret a different set of 20 chest images (n=1860 image interpretations in total). Each participant was asked to complete a questionnaire on their perceptions of the training provided.
Results: Data analysis is in progress. 50% of participants did not have experience in image interpretation prior to the study. The study day and training tool were useful in improving image interpretation skills. Participants' perceptions of the usefulness of the tool for aiding image interpretation varied among respondents.
Conclusion: This training tool has the potential to improve patient diagnosis and reduce healthcare costs.
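The analysis for the abstract above is reported as in progress and the statistical approach is not stated. A minimal, hypothetical sketch of one way the pre- and post-intervention interpretation scores could be compared between groups is given below; the file name, column names and the Mann-Whitney U comparison of change scores are assumptions, not the authors' method.

# Hypothetical sketch: comparing change in chest image interpretation
# scores between the intervention and control groups described above.
# File name, column names and choice of test are assumptions; the
# abstract does not state how the data were analysed.
import pandas as pd
from scipy import stats

# One row per participant: group allocation plus the number of the 20
# images interpreted correctly before and after the intervention period.
df = pd.read_csv("interpretation_scores.csv")  # hypothetical file
df["change"] = df["post_score"] - df["pre_score"]

intervention = df.loc[df["group"] == "intervention", "change"]
control = df.loc[df["group"] == "control", "change"]

# Non-parametric comparison of change scores between the two groups.
u_stat, p_value = stats.mannwhitneyu(intervention, control, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")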
Item: Radiographer/radiologist education and learning in artificial intelligence (REAL-AI) (ROC Events, 2023)
Doherty, Geraldine; McLaughlin, Laura; Rainey, Clare; Hughes, Ciara; Bond, Raymond; McConnell, Jonathan; McFadden, Sonyia L.
Background: Artificial intelligence (AI) is at an early stage of adoption in radiography, and whilst there are many studies investigating its potential in the clinical environment, there is a paucity of research investigating the needs of clinical staff. Further research is required to identify what training and preparation are required for a new AI-powered work environment, and what AI education is available at undergraduate and postgraduate levels.
Method: This CoRIPS-funded study included two electronic surveys: (i) the first was distributed amongst radiographers and radiologists to investigate their baseline AI knowledge and to identify what training they desire and their preferred method of delivery; (ii) the second was for academics and educators in Higher Education Institutions, to identify the educational provision of AI in the radiography curriculum across the UK and Europe.
Results: Data collection and analysis are underway and will be completed at the European Congress of Radiology in Vienna, March 2023. Participant feedback will determine the perceptions of clinical staff and identify topics for inclusion in postgraduate/undergraduate programmes. These findings will inform the next phase of the study, which will incorporate focus groups with staff to explore adaptation of the curricula to enable incorporation of AI into clinical practice.
Conclusion: Radiographers, radiologists and Higher Education Institutions have been surveyed to ascertain current knowledge and needs for AI training. Collaboration and symbiosis between academia, clinical and industry partners are possible, to pioneer AI education tailored to medical imaging staff. This research has the potential to be of significant value across disciplines within the wider healthcare sector.

Item: The impact of forms of AI feedback and image quality on reporting radiographers' trust and decision switching when interpreting plain radiographic images of the appendicular skeleton (European Society of Radiology, 2024)
Rainey, Clare; McConnell, Jonathan; Hughes, Ciara; McLaughlin, Laura; Bond, Raymond; McFadden, Sonyia L.
Purpose or Learning Objective: Reported AI accuracies and workforce shortages have increased the integration of AI into clinical environments. Furthermore, radiographer reporting helps ease the burden of image reporting. 'System trust' is identified as a challenge to clinical AI integration. To the authors' knowledge, no research has been conducted on the factors impacting reporting radiographers' trust and decision making when using different forms of AI feedback.
Methods or Background: Twelve reporting radiographers, three from each region of the UK, participated in this study. The Qualtrics® platform was used to randomly allocate 18 radiographic examinations to each participant. Participants were asked to locate any pathology and to indicate their agreement with the AI localisation, represented by GradCAM heatmaps, and with the AI binary diagnosis. Spearman's rho and Kendall's tau were used to investigate any correlation between trust and agreement with the various forms of AI feedback and initial image quality.
Results or Findings: Participants disagreed with the AI heatmaps for the abnormal examinations 45.8% of the time (n=66 of 144 individual images) and agreed with the binary feedback on 86.7% of examinations (26 of 30 cases). In 0.7% of cases (n=2), participants indicated that they would switch their decision following AI feedback. 22.2% (n=32) agreed with the localisation of pathology from the heatmap. Agreement with AI feedback was correlated with trust (-.515; -.584, a significant large negative correlation, p < .01, for GradCAM feedback, and -.309; -.369, a significant medium negative correlation, p < .01, for the binary diagnosis).
Conclusion: The extent of agreement with both the AI binary diagnosis and the heatmap is correlated with trust in AI, where greater agreement with AI feedback is associated with greater trust, with a large effect size for agreement with GradCAM feedback.
Limitations: The Qualtrics® platform may not allow for an accurate simulation of the clinical setting. This will be further investigated in subsequent studies.
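The abstract above names Spearman's rho and Kendall's tau as the measures of correlation between trust and agreement with the forms of AI feedback. A minimal sketch of how such rank correlations can be computed is given below, assuming per-response trust ratings and binary agreement values; the variable names and example data are hypothetical, not the study data.

# Hypothetical sketch: rank correlations between trust and agreement
# with AI feedback, using the measures named in the abstract above.
# The values below are illustrative only, not the study data.
from scipy import stats

# Per-response trust rating (e.g. a Likert score) and agreement with the
# AI feedback (0 = disagree, 1 = agree) -- hypothetical values.
trust = [5, 4, 2, 3, 1, 4, 2, 5, 3, 1]
agreement = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1]

rho, p_rho = stats.spearmanr(trust, agreement)
tau, p_tau = stats.kendalltau(trust, agreement)
print(f"Spearman's rho = {rho:.3f} (p = {p_rho:.3f})")
print(f"Kendall's tau  = {tau:.3f} (p = {p_tau:.3f})")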
Item: Radiographer education and learning in artificial intelligence (REAL_AI) (Springer Open, 2024)
Doherty, Geraldine; McLaughlin, Laura; Bond, Raymond; McConnell, Jonathan; Hughes, Ciara; McFadden, Sonyia L.; College of Radiographers
Introduction: Artificial intelligence (AI) is widespread in medical imaging, yet there is a paucity of information on the education and training available for staff. Further research is required to identify what training is available, and what preparation is required to bring AI knowledge to a level that will enable radiographers to work competently alongside AI. This study aimed to (a) investigate the current provision of AI education at UK higher education institutions (HEIs) and (b) explore the attitudes and opinions of educators.
Methods: Data were collected through two online surveys: one for UK HEIs, the other for medical imaging educators. The surveys were distributed in the UK by the Heads of Radiography Education (HRE) and The Society of Radiographers, and at the Research Hub at ECR 2023, as well as being promoted on LinkedIn and Twitter (X) and through university channels.
Results: Responses were received from 22 HEIs in the UK and 33 educators from across Europe. Data analysis is ongoing, but preliminary findings show that 68.2% (n=15) of responding HEIs claim to have already introduced AI to the curriculum. 84.8% (n=28) of educators report that they themselves have received no training on AI despite having to embed it into the curriculum; the main reason cited by HEIs for this is limited resources. 69.7% (n=23) of educators believe that AI concepts should be taught by an AI expert.
Conclusion: By surveying educators and HEIs separately, this study captured two different perspectives on the provision of AI education. This unique insight highlighted disharmony between HEIs and educators: preliminary findings suggest that educators feel unprepared to deliver AI content, and HEIs are under pressure to add AI concepts to an already full curriculum.
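The percentages quoted in the preliminary findings above follow from the reported counts and the stated denominators (22 responding HEIs and 33 educators). A minimal sketch of that arithmetic, assuming those denominators, is given below.

# Sketch reproducing the quoted percentages from the reported counts,
# assuming denominators of 22 responding HEIs and 33 educators.
counts = {
    "HEIs that have introduced AI to the curriculum": (15, 22),
    "Educators who have received no AI training": (28, 33),
    "Educators who think AI should be taught by an AI expert": (23, 33),
}
for label, (n, total) in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.1f}%")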