Computer Science - Conference Items

Recent Submissions

Now showing 1 - 5 of 336
  • Item
    Performance and energy savings trade-off with uncertainty-aware cloud workload forecasting
    (IEEE, 2023-10-10) Carraro, Diego; Rossi, Andrea; Visentin, Andrea; Prestwich, Steven D.; Brown, Kenneth N.; Science Foundation Ireland; European Regional Development Fund; Horizon 2020 Framework Programme
    Cloud managers typically leverage future workload predictions to make informed decisions on resource allocation, where the ultimate goal of the allocation is to meet customers’ demands while reducing the provisioning cost. Among the several workload forecasting approaches proposed in the literature, uncertainty-aware time series analysis solutions are desirable in cloud scenarios because they can predict the distribution of future demand and provide bounds associated with a given service level set by the resource manager. The effectiveness of uncertainty-based workload predictions is normally assessed in terms of accuracy metrics (e.g. MAE) and service level (e.g. Success Rate), but the effect on resource provisioning cost is under-investigated. We propose an evaluation framework to assess the impact of uncertainty-aware predictions on the performance vs. cost trade-off, where cost is expressed in terms of energy savings. We illustrate the framework’s effectiveness by simulating two real-world cloud scenarios where an optimizer leverages workload predictions to allocate resources to satisfy a desired service level while minimizing energy waste. Offline experiments compare representative uncertainty-aware models and a new model that we propose (HBNN++), all predicting a cluster trace’s GPU demand. We show that more effective uncertainty modelling can save energy without violating desired service-level targets, and that model performance varies with the specific details of the allocation scheme and the server and GPU energy costs.
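    The core mechanism this abstract describes — provisioning at a quantile of the predicted demand distribution rather than at the point forecast, so that demand is met with a chosen service level — can be sketched as follows. This is a minimal illustration with a Gaussian predictive distribution and made-up numbers, not the paper's HBNN++ model or its allocation scheme:

    ```python
    from statistics import NormalDist

    def provision(mean: float, std: float, service_level: float) -> float:
        """Allocate capacity at the service-level quantile of a Gaussian
        predictive distribution, so demand exceeds the allocation with
        probability (1 - service_level)."""
        return NormalDist(mean, std).inv_cdf(service_level)

    # A point forecast alone would allocate 100 GPUs; accounting for the
    # predicted uncertainty (std = 10) at a 95% service level allocates more,
    # trading extra energy cost for fewer service-level violations.
    capacity = provision(mean=100.0, std=10.0, service_level=0.95)
    ```

    A tighter (better-calibrated) predictive distribution shrinks `std` and thus the safety margin above the mean, which is the sense in which more effective uncertainty modelling saves energy at the same service level.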
  • Item
    Application of the CeHRes Framework to the Development of Pathway: An HCI Perspective
    (Association for Computing Machinery, 2023-12) Oti, Olugbenga; Pitt, Ian; Science Foundation Ireland; European Regional Development Fund
    Mental health difficulties are highly prevalent among young people. Despite this, research shows that many young people do not seek help for mental health difficulties. In this paper, we explore the experiences of students who have accessed mental health support services in Ireland. Their experiences highlight the barriers students face while accessing mental health support. We found that students lack access to the information needed to support their decision-making process when selecting a support service — for instance, how a service operates, what kind of support it provides, and its privacy policies. Based on these identified needs, we designed a prototype of an application called Pathway. Pathway aims to address these barriers by matching students with mental health support services that fit their preferences. It also provides detailed information on mental health support services. This paper details the use of the eHealth development framework by the Center for eHealth Research and Disease Management towards the development of this application.
  • Item
    Unhelpful assumptions in software security research
    (Association for Computing Machinery, 2023-11-21) Ryan, Ita; Roedig, Utz; Stol, Klaas-Jan; Science Foundation Ireland
    In the study of software security, many factors must be considered. Once venturing beyond the simplest of laboratory experiments, the researcher is obliged to contend with exponentially complex conditions. Software security has been shown to be affected by priming, tool usability, library documentation, organisational security culture, the content and format of internet resources, IT team and developer interaction, internet search engine ordering, developer personality, security warning placement, mentoring, developer experience and more. In a systematic review of software security papers published since 2016, we have identified a number of unhelpful assumptions that are commonly made by software security researchers. In this paper we list these assumptions, describe why they sometimes do not reflect reality, and suggest implications for researchers.
  • Item
    Assessing and enforcing fairness in the AI lifecycle
    (IJCAI International Joint Conference on Artificial Intelligence, 2023) Calegari, Roberta; Castañé, Gabriel G.; Milano, Michela; O'Sullivan, Barry
    A significant challenge in detecting and mitigating bias is creating a mindset amongst AI developers to address unfairness. The current literature on fairness is broad, and the learning curve for deciding where to apply existing metrics and techniques for bias detection or mitigation is steep. This survey systematises the state of the art on distinct notions of fairness and the related techniques for bias mitigation according to the AI lifecycle. Gaps and challenges identified during the development of this work are also discussed.
  • Item
    Key factors affecting European reactions to AI in European full and flawed democracies
    (2023-08-20) Pham, Long; O'Sullivan, Barry; Mai, Tai Tan; Horizon 2020 Framework Programme; Science Foundation Ireland; European Regional Development Fund
    This study examines the key factors that affect European reactions to artificial intelligence (AI) in the context of both full and flawed democracies in Europe. Analysing a dataset of 4,006 respondents, categorised into full democracies and flawed democracies based on the Democracy Index developed by the Economist Intelligence Unit (EIU), this research identifies crucial factors that shape European attitudes toward AI in these two types of democracies. The analysis reveals noteworthy findings. Firstly, it is observed that flawed democracies tend to exhibit higher levels of trust in government entities compared to their counterparts in full democracies. Additionally, individuals residing in flawed democracies demonstrate a more positive attitude toward AI when compared to respondents from full democracies. However, the study finds no significant difference in AI awareness between the two types of democracies, indicating a similar level of general knowledge about AI technologies among European citizens. Moreover, the study reveals that trust in AI measures, specifically “Trust AI Solution,” does not significantly vary between full and flawed democracies. This suggests that despite the differences in democratic quality, both types of democracies have similar levels of confidence in AI solutions.