Faculty

Dr. Stefano V. Albrecht

Lecturer (Assistant Professor) in Artificial Intelligence

Research Group Leader

Personal page

PhD Research Students

Muhammad Arrasy Rahman

MSc Data Science, University of Edinburgh, 2017; BSc Computer Science, Universitas Indonesia, 2015

Project: Ad Hoc Teamwork in Open Multi-Agent Systems using Graph Neural Networks

Many real-world problems require an agent to achieve specific goals while interacting with other agents. It is common for agents to have limited knowledge of other agents' internal information and no predefined coordination protocols with them. Prior work on ad hoc teamwork has focused on multi-agent systems in which the number of agents is assumed to be fixed. My project focuses on using Graph Neural Networks (GNNs) to handle interaction data between a varying number of agents. We explore the possibility of combining GNNs with Reinforcement Learning techniques to implement agents that can perform well in teams whose size changes over time.
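
As a rough illustration of this direction (a hedged sketch only, not the project's actual architecture; the class name, dimensions and use of PyTorch below are assumptions), a permutation-invariant graph-style encoder can map the observations of any number of teammates to a fixed-size input for a policy:

```python
# Hypothetical sketch: one round of message passing over a fully connected
# graph of agents, producing action logits for the controlled (ad hoc) agent.
import torch
import torch.nn as nn


class AgentGraphEncoder(nn.Module):
    def __init__(self, obs_dim: int, hidden_dim: int = 64, num_actions: int = 5):
        super().__init__()
        self.node_encoder = nn.Linear(obs_dim, hidden_dim)
        self.message_fn = nn.Linear(hidden_dim, hidden_dim)
        self.update_fn = nn.Linear(2 * hidden_dim, hidden_dim)
        self.policy_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, agent_obs: torch.Tensor) -> torch.Tensor:
        # agent_obs: (num_agents, obs_dim); num_agents may differ at every step.
        h = torch.relu(self.node_encoder(agent_obs))
        # Mean aggregation over agent nodes is permutation-invariant and
        # size-agnostic, so the same parameters handle teams of any size.
        messages = torch.relu(self.message_fn(h)).mean(dim=0, keepdim=True)
        h = torch.relu(self.update_fn(torch.cat([h, messages.expand_as(h)], dim=-1)))
        return self.policy_head(h.mean(dim=0))  # action logits


encoder = AgentGraphEncoder(obs_dim=8)
logits_team_of_3 = encoder(torch.randn(3, 8))    # 3 agents observed
logits_team_of_10 = encoder(torch.randn(10, 8))  # 10 agents observed
```

In a full reinforcement learning agent these logits would feed a policy-gradient or value-based learner; the point of the sketch is only that the network's parameters are independent of team size.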

Filippos Christianos

Diploma in Electronic and Computer Engineering, Technical University of Crete, 2017

Project: Coordinated Exploration in Multi-Agent Deep Reinforcement Learning

In the increasingly large state spaces encountered in deep reinforcement learning, exploration plays a critical role by narrowing down the search for an optimal policy. In multi-agent settings, the joint action space also grows exponentially with the number of agents, further complicating the search. Using a partially centralized policy during exploration can coordinate the agents' search and more easily locate promising, even decentralized, policies. In this project, we investigate how coordinating agents during the exploration phase can improve the performance of deep reinforcement learning algorithms.
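
As a purely illustrative toy (the function, action space and probabilities below are hypothetical, not the project's algorithm), partially centralized exploration in a discrete-action setting can be pictured as agents occasionally sampling a coordinated joint action from a shared source rather than exploring independently:

```python
# Hypothetical sketch: with some probability a central coordinator picks a
# joint exploratory action; otherwise each agent follows its own policy.
import random


def select_joint_action(decentralized_policies, state, joint_explore_prob=0.2,
                        num_actions=4):
    """decentralized_policies: list of callables mapping state -> action."""
    if random.random() < joint_explore_prob:
        # Centralized exploration step: one shared draw yields a coordinated
        # joint action (here simply the same action index for every agent).
        shared_action = random.randrange(num_actions)
        return [shared_action for _ in decentralized_policies]
    # Otherwise agents act from their own decentralized policies.
    return [policy(state) for policy in decentralized_policies]


toy_policies = [lambda s: 0, lambda s: 2, lambda s: 1]
print(select_joint_action(toy_policies, state=None))
```

In practice the shared exploration distribution would itself be learned and conditioned on the state; the sketch only conveys the split between a centralized exploratory choice and decentralized greedy behaviour.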

Georgios Papoudakis

Diploma in Electrical and Computer Engineering, Aristotle University of Thessaloniki, 2017

Project: Modelling in Multi-Agent Systems Using Representation Learning

Multi-agent systems in partially observable environments face many challenging problems which traditional reinforcement learning algorithms fail to address. Agents have to deal with the lack of information about the environment's state and the opponents' beliefs and goals. A promising research direction is to learn models of the other agents to better understand their interactions. This project will investigate representation learning for opponent modelling in order to improve learning in multi-agent systems.
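
One common way to instantiate this idea, shown here only as a hedged sketch (the class name, dimensions and use of PyTorch are assumptions, not the project's method), is to train a recurrent encoder to predict the opponent's actions and then use its hidden state as a learned representation that can condition the controlled agent's policy:

```python
# Hypothetical sketch: a GRU encodes observed opponent behaviour into a
# fixed-size embedding, trained by predicting the opponent's next action.
import torch
import torch.nn as nn


class OpponentEncoder(nn.Module):
    def __init__(self, obs_dim: int, num_opponent_actions: int, embed_dim: int = 32):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, embed_dim, batch_first=True)
        self.action_head = nn.Linear(embed_dim, num_opponent_actions)

    def forward(self, trajectory: torch.Tensor):
        # trajectory: (batch, time, obs_dim) observations of the opponent.
        _, hidden = self.rnn(trajectory)
        embedding = hidden[-1]                  # (batch, embed_dim) representation
        action_logits = self.action_head(embedding)
        return embedding, action_logits


encoder = OpponentEncoder(obs_dim=6, num_opponent_actions=4)
trajectories = torch.randn(2, 10, 6)            # 2 trajectories, 10 steps each
embedding, logits = encoder(trajectories)
# Supervised training signal: predict the opponent's observed next actions.
loss = nn.functional.cross_entropy(logits, torch.tensor([1, 3]))
loss.backward()
```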

Ibrahim Ahmed

MS in Computer Science, UC Davis, 2018; BS in Computer Science, UC Davis, 2016

Project: Quantum-Resistant Authentication and Key Establishment Using Abstract Multi-Agent Interaction

Authentication and key establishment are the foundation for secure communication over computer networks. However, modern protocols which rely on public key cryptography for secure communication are vulnerable to attacks based on quantum technology. My project studies a novel quantum-safe method for authentication and key establishment based on abstract multi-agent interaction. In doing so, it brings multi-agent techniques for optimisation and rational decision-making to these fields.
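
As a greatly simplified toy (purely illustrative, not the protocol studied in this project), authentication through interaction can be pictured as a verifier checking that a prover's actions in a repeated abstract game match those prescribed by a policy derived from a shared secret; only symmetric primitives, which are not broken by known quantum algorithms, appear here:

```python
# Hypothetical sketch: history-dependent challenge-response in an abstract
# repeated game, with the "policy" derived from a shared secret via HMAC.
import hashlib
import hmac


def shared_policy(secret: bytes, history: bytes, num_actions: int = 8) -> int:
    # Deterministically derive the next action from the secret and the history.
    digest = hmac.new(secret, history, hashlib.sha256).digest()
    return digest[0] % num_actions


def authenticate(secret: bytes, prover_action_fn, rounds: int = 16) -> bool:
    history = b""
    for _ in range(rounds):
        expected = shared_policy(secret, history)
        action = prover_action_fn(history)
        if action != expected:
            return False                 # mismatch: reject the prover
        history += bytes([action])       # the interaction history grows each round
    return True


secret = b"shared-secret"
legitimate_prover = lambda history: shared_policy(secret, history)
impostor = lambda history: 0
print(authenticate(secret, legitimate_prover))  # True
print(authenticate(secret, impostor))           # almost certainly False
```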

Cillian Brewitt

MSc Artificial Intelligence, University of Edinburgh, 2017; BE Electrical and Electronic Engineering, University College Cork, 2016

Project: Systematic Analysis and Comparison of Agent Modelling Methods

The development of autonomous agents that can interact with other agents is an important task in the field of artificial intelligence. To achieve this, agents must be able to reason about the beliefs, goals and actions of other agents, which can be done by constructing models of the other agents. This project will build upon previous work, as described in the recent survey by Albrecht and Stone, by implementing a diverse range of agent modelling methods and carrying out a systematic, in-depth comparison of these methods. New insights may lead to the development of novel modelling approaches.

Lukas Schäfer

MSc Informatics, University of Edinburgh, 2019; BSc Computer Science, Saarland University, 2018

Project: Collaborative Exploration in Multi-Agent Reinforcement Learning using Intrinsic Curiosity

The challenge of multi-agent reinforcement learning is largely defined by the non-stationarity and credit assignment problems introduced by multiple agents acting concurrently. To learn effective behaviour in such environments, efficient exploration techniques beyond simple, randomised policies are required. This project will investigate novel methods with a particular focus on intrinsic rewards as exploration incentives. Such self-assigned rewards serve as additional feedback to motivate guided exploration, which could enable collaborative behaviour in multi-agent systems.
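
A common form of such an intrinsic reward, sketched here under assumptions (PyTorch, hypothetical names; not necessarily the method this project will develop), is the prediction error of a learned forward dynamics model added to the environment reward, so that poorly predicted, novel transitions are sought out:

```python
# Hypothetical sketch: curiosity-style intrinsic reward from forward-model error.
import torch
import torch.nn as nn


class ForwardModel(nn.Module):
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def intrinsic_reward(model, state, action, next_state, scale=0.1):
    # Transitions the model predicts badly (novel states) earn higher reward.
    with torch.no_grad():
        error = ((model(state, action) - next_state) ** 2).mean(dim=-1)
    return scale * error


model = ForwardModel(state_dim=4, action_dim=2)
s, a, s_next = torch.randn(1, 4), torch.randn(1, 2), torch.randn(1, 4)
extrinsic_reward = 1.0
total_reward = extrinsic_reward + intrinsic_reward(model, s, a, s_next).item()
# In practice the forward model is trained on observed transitions, so the
# intrinsic reward shrinks for familiar states and remains high for novel ones.
```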

Elliot Fosong

BA & MEng in Engineering, University of Cambridge, 2019

Project: Model Criticism in Multi-Agent Systems

Agents operating in multi-agent systems often need to predict and reason about the behaviour of other agents. Candidate models of this behaviour are informed by observations. It is desirable to give agents a way to assess the validity and usefulness of such models, which may dictate how confidently an agent should act or inform exploration strategies during learning. This project will develop a principled model criticism framework and examine the theoretical guarantees it provides.
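
As a hedged sketch of the general flavour (not the framework this project will develop; all names below are hypothetical), model criticism can be illustrated by comparing the likelihood a candidate behaviour model assigns to observed actions against the likelihoods it assigns to action sequences it generates itself, rejecting the model when the observed data looks implausible under it:

```python
# Hypothetical sketch: a simple likelihood-based check of a behaviour model.
import numpy as np


def log_likelihood(model_probs, actions):
    # model_probs: (T, num_actions) predicted action distribution at each step.
    return float(np.sum(np.log(model_probs[np.arange(len(actions)), actions] + 1e-12)))


def model_not_rejected(model_probs, observed_actions, num_samples=1000, alpha=0.05):
    observed_score = log_likelihood(model_probs, observed_actions)
    # Reference distribution: scores of action sequences the model itself generates.
    reference = [
        log_likelihood(model_probs,
                       [np.random.choice(len(p), p=p) for p in model_probs])
        for _ in range(num_samples)
    ]
    p_value = float(np.mean([score <= observed_score for score in reference]))
    return p_value > alpha   # False: the model is criticised (rejected)


model = np.tile([0.1, 0.8, 0.1], (20, 1))    # model says: mostly play action 1
observed = np.zeros(20, dtype=int)           # but the agent always played action 0
print(model_not_rejected(model, observed))   # almost certainly False
```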