Research
THuMP is investigating transparency and explainability in collaborative and intelligent decision support systems, focusing on problems involving complex planning and resource allocation.
Today, there are increasing concerns about the opacity and inscrutability of decision-making driven by Artificial Intelligence (AI) systems. Policies and legal measures that address, and perhaps heighten, anxiety about machines making decisions or offering recommendations are being introduced, and they are likely to have tangible and potentially far-reaching effects on individuals and communities. There is an urgent need to ensure that the underlying reasons for those decisions and recommendations are communicated transparently and comprehensibly to those who are expected to act upon them and/or are directly affected by them. Users must be able to interrogate recommendations and take part in the decision-making process.
THuMP focuses on the allocation of resources in critical domains, bringing together experts in AI, Law and Ethics to develop and test methods for explaining the reasoning behind plans and actions recommended by data-backed, AI-driven systems. Collaborative decision-making involves a balanced exchange of ideas, backed by appropriate evidence gathered from reliable sources, forming a solid foundation for clearly considered arguments and conclusions. Such interaction will foster trust in the AI system, as users gain confidence in decisions reached through mutual understanding of an intricate shared-reasoning process.
The THuMP project addresses three specific research questions:
What are the technical challenges involved in creating Explainable AI Planning (XAIP) systems?
THuMP aims to provide transparency in AI systems in order to improve users’ trust in these types of systems. Information can be acquired by analysing data sets for the “who”, “what”, “where” and “when” factors related to key decisions. Knowledge can be obtained using AI Planning to determine the “how”. Computational Argumentation can lead to understanding by answering the “why”. Putting these technologies together will advance the state of the art in human-AI planning systems.
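To make this pipeline concrete, the sketch below is a minimal, hypothetical illustration (the class names, greedy strategy, and explanation format are invented for this example and are not THuMP’s actual system): a toy planner answers “how” by producing a resource assignment, and a simple argumentation-style function answers “why” by stating what ruled out each rejected alternative.

```python
from dataclasses import dataclass

# Hypothetical toy domain: assign crews to tasks (the "how"),
# then justify each choice by arguing against the alternatives (the "why").

@dataclass
class Crew:
    name: str
    skills: set[str]
    available: bool = True

@dataclass
class Task:
    name: str
    required_skill: str

def plan(tasks: list[Task], crews: list[Crew]) -> dict[str, str]:
    """Greedily assign the first available, qualified crew to each task."""
    assignment: dict[str, str] = {}
    for task in tasks:
        for crew in crews:
            if crew.available and task.required_skill in crew.skills:
                assignment[task.name] = crew.name
                crew.available = False  # a crew handles at most one task
                break
    return assignment

def explain(task: Task, chosen: str, crews: list[Crew]) -> list[str]:
    """Answer 'why this crew?' by giving a reason against each alternative."""
    reasons = []
    for crew in crews:
        if crew.name == chosen:
            continue
        if task.required_skill not in crew.skills:
            reasons.append(f"{crew.name} lacks the skill '{task.required_skill}'")
        elif not crew.available:
            reasons.append(f"{crew.name} is already committed to another task")
    return reasons

tasks = [Task("site-survey", "surveying"), Task("well-casing", "drilling")]
crews = [Crew("Crew A", {"surveying"}), Crew("Crew B", {"drilling", "surveying"})]

assignment = plan(tasks, crews)
for task in tasks:
    chosen = assignment[task.name]
    print(f"{task.name} -> {chosen}: {explain(task, chosen, crews)}")
```

The explanations here are contrastive (“why this option rather than that one?”), which is one common pattern in Explainable AI Planning; a full XAIP system would draw its reasons from the planner’s own model and constraints rather than from a hand-written check like this.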
What are the technical, legal and ethical challenges involved in creating XAIP systems for solving resource allocation problems in critical domains?
THuMP aims to address this question by co-creating use cases with two project partners: Schlumberger, a leading oil & gas services corporation, with whom an oil well planning scenario will be developed in which AI helps engineers make decisions about allocating resources and personnel to a construction project; and Save the Children, a leading international charity, with whom a disaster response scenario will be developed in which AI helps response coordinators make decisions about allocating resources and personnel to a relief programme. Developing specific use cases will reveal challenges and highlight opportunities for deploying human-AI decision support systems in practical, real-world settings.
What are the legal and social implications of enhancing machines with transparency and the ability to explain?
THuMP aims to consider legal issues through collaboration with academic experts in technology regulation and project partner Hogan Lovells, a leading global law firm. Addressing the challenges of transparency in AI will not only improve computational approaches, but also yield social-science insights into how people define, generate, present and interpret explanations.