Computer Science - Conference Items
Recent Submissions
Item Designing and evaluating an accessible smartphone-based app for blind and visually impaired students for learning mathematics (International Academy of Technology, Education and Development (IATED), 2025)
Shoaib, Muhammad; Minghim, Rosane; Pitt, Ian; Science Foundation Ireland
The design and development of educational applications to enhance the learning experiences of blind and visually impaired people have improved considerably in recent years, but research suggests there is a need for further improvement. Despite the increasing prominence of accessibility features in mobile devices, barriers remain to providing efficient learning resources for these students, especially in STEM subjects such as mathematics. This paper discusses the design and usability evaluation of AccessMath, a mobile application designed to help blind and visually impaired primary school pupils learn mathematics. The application was developed following accessibility design guidelines, offering adjustable intrinsic brightness, multimodal feedback, a comprehensive settings panel, and intuitive swipe controls. A usability test was carried out with five blind and visually impaired students employing a variety of metrics: task completion time, the Usability Metric for User Experience (UMUX), the NASA Task Load Index (NASA-TLX), and the System Usability Scale (SUS). The results revealed improved usability, lower cognitive load, and satisfactory user experiences, highlighting the usefulness of the implemented guidelines for designing and developing accessible mobile applications for blind and visually impaired users.
Methodology:
a) Design and Development of AccessMath (Shoaib, M. et al., 2024): The AccessMath app was designed around four essential accessibility features: adjustable intrinsic brightness, multimodal feedback, a user settings panel, and swipe controls.
These features enable users to make the best use of their residual vision by adjusting brightness and contrast, and to reduce dependence on visual information through audio cues. Auditory, vibration, and gestural inputs help users navigate the app easily. A user-controlled settings panel lets users customise the app extensively to their preferences.
b) Usability Evaluation: A usability test was conducted with five blind and visually impaired users to examine the app's usability, cognitive load, and user experience. The study used task completion time to measure efficiency, the UMUX for perceived usability, the NASA-TLX to evaluate cognitive workload, and the SUS for overall user satisfaction. Participants interacted with the app to perform tasks and access mathematical information, and afterwards provided valuable feedback on the effectiveness and accessibility features of the application.
c) Participant Information: Five blind and visually impaired students took part in this study, two males and three females, aged 9 to 12 years. All had experience using mobile applications with assistive features. Ethical considerations were observed carefully, and informed consent was obtained for all students before the study.
This study has two phases (named "Design and Development of AccessMath" and "Usability Evaluation"). First, it expands existing accessibility guidelines to better suit the mobile context for blind and visually impaired users.
Second, it presents empirical information on the usability and user experience of AccessMath, giving valuable insights for developers and researchers in accessible educational technology.

Item iSee: intelligent sharing of explanation experiences (CEUR-WS, 2022)
Martin, Kyle; Wijekoon, Anjana; Wiratunga, Nirmalie; Palihawadana, Chamath; Nkisi-Orji, Ikechukwu; Corsar, David; Díaz-Agudo, Belén; Recio-García, Juan A.; Caro-Martínez, Marta; Bridge, Derek; Pradeep, Preeja; Liret, Anne; Fleisch, Bruno; Engineering and Physical Sciences Research Council
The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, and thus require different kinds of explanations. There is a growing armoury of XAI methods that interpret ML models and explain their predictions, recommendations and diagnoses. We refer to these collectively as "explanation strategies". As these explanation strategies mature, practitioners gain experience in understanding which strategies to deploy in different circumstances. What is lacking, and what the iSee project will address, is the science and technology for capturing, sharing and re-using explanation strategies based on similar user experiences, along with a much-needed route to explainable AI (XAI) compliance. Our vision is to improve every user's experience of AI by harnessing experiences of best practice in XAI and providing an interactive environment where personalised explanation experiences are accessible to everyone.
Video link: https://youtu.be/81O6-q_yx0s

Item Involvement of autistic adults in the participatory design of technology: a scoping review (Association for Computing Machinery (ACM), 2025)
Maye, Laura; Brodersen Hansen, Nicolai
Research in HCI and autism has become more focused on involving autistic adults in technological design.
In this paper, we present the results of a scoping review of 11 projects across 18 papers that focused on including autistic adults in the design of technology that impacts their lives. This paper contributes a deeper understanding of how autistic adults were involved in participatory design processes. Our findings reveal mixed positions on how the lived autistic perspective was harnessed to direct the choice of topics and technologies. Most projects employed infrastructures to enhance participation (e.g., providing multiple modes of participation or employing a tailored methodology). We identify future opportunities for autistic involvement, for example in topics and technologies where autism research is applied (e.g., autism diagnosis and machine learning), in reviewing the importance of formal diagnosis for inclusion, and in harnessing the multiple expertise of autistic adults.

Item UNCCER: unified network for cancer classification and efficient representation using microarray data (Institute of Electrical and Electronics Engineers (IEEE), 2024-12-26)
Younis, Haseeb; Brahmi, Imane; Byrne, Jonathan; Minghim, Rosane; Science Foundation Ireland
In recent years, there has been a significant advance in the use of machine learning (ML) techniques to extract gene expression data from microarray databases, particularly in cancer-related research. Even after the adoption of ML, however, there is no unified method for classifying cancer microarray data. Due to the high dimensionality of microarray data, it is difficult to extract the relevant features and provide insights that can be helpful in identifying cancer types and stages. In this paper, we propose a Unified Network for Cancer Classification and Efficient Representation (UNCCER) using Deep Learning (DL) on cancer microarray data. To implement this methodology, we employed the Curated Microarray Database (CuMiDa), which contains 78 carefully curated datasets for different types of cancer.
Our single model has the capability to learn the patterns, cluster instances into their corresponding classes, and classify the cancer. We also used Uniform Manifold Approximation and Projection (UMAP) to visualise, in low dimension, the instance separation on both the original data and the data transformed by our methodology. Using the proposed methodology, we achieved 94% average accuracy, precision, recall and F1 score, and a 91% G-mean.

Item MANet: a deep learning framework for multi-cancer microarray analysis, classification, and visualization (Institute of Electrical and Electronics Engineers (IEEE), 2024)
Younis, Haseeb; Azeem, Muhammad; Ronan, Isabel; Minghim, Rosane
Machine learning (ML) methods have been used much more frequently in recent years to extract gene expression data from microarray studies, especially in cancer research. Despite the continued interest in applying ML to cancer research, there is still no universal approach for categorizing cancer microarray data. A system is needed that can detect and classify both normal profiles and cancer profiles, specifying the type of cancer. Due to the variance and high dimensionality of microarray data, it is difficult to extract the relevant descriptors and provide insights that can be helpful in identifying cancer types and stages. In this paper, we propose MANet, a Deep Learning (DL) methodology that uses cancer microarray data to classify 13 different types of cancer as well as normal profiles. To implement this methodology, we used the Curated Microarray Database (CuMiDa), which contains 78 datasets for different types of cancer. Because the feature vectors differ across datasets, we used Principal Component Analysis (PCA) for uniform feature engineering. Our single model has the capability to learn the patterns, cluster instances into their corresponding classes, and classify the cancer.
We also used Uniform Manifold Approximation and Projection (UMAP) to visualise, in low dimensions, both the instance separation in the original data and the class segregation produced by our methodology. Using the proposed methodology, we achieved approximately 80% average accuracy, precision, recall, and F1 score across 14 classes using a single model.
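Both UNCCER and MANet rely on a dimensionality-reduction step so that datasets with different numbers of probes can feed one shared model. A minimal sketch of that uniform feature-engineering idea, using an SVD-based PCA, is below; the dataset names, sample and probe counts, and the 16-dimensional target are illustrative assumptions, not values from the papers.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto its top principal components.

    Datasets with different numbers of genes/probes are all mapped
    to the same fixed-length representation, so a single downstream
    model can consume any of them.
    """
    Xc = X - X.mean(axis=0)                         # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes in Vt
    return Xc @ Vt[:n_components].T                 # scores in reduced space

# Two hypothetical microarray datasets with different probe counts
rng = np.random.default_rng(0)
liver = rng.normal(size=(30, 22283))    # 30 samples, 22,283 probes
breast = rng.normal(size=(40, 54675))   # 40 samples, 54,675 probes

# Both reduce to the same 16-dimensional representation
z1 = pca_reduce(liver, 16)
z2 = pca_reduce(breast, 16)
print(z1.shape, z2.shape)  # (30, 16) (40, 16)
```

In practice the papers pair a step like this with a learned classifier and use UMAP (e.g., the umap-learn package) purely for 2-D visualisation of how well the classes separate before and after the transformation.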