Ondrej Bajgar

AI Safety Researcher

University of Oxford

About

I'm currently finishing a doctorate in Bayesian machine learning at the University of Oxford under the supervision of Michael A. Osborne (Machine Learning Group, Department of Engineering Science), Alessandro Abate (Oxford Control and Automated Verification Lab, Department of Computer Science), and Konstantinos Gatsis (Control Group, Department of Engineering Science). Previously, I spent two years at the Future of Humanity Institute at Oxford as a Senior Research Scholar thinking about AI safety and three years as a Research Scientist at IBM Watson working on machine learning research in text understanding and dialogue systems. I studied mathematics at the University of Warwick, focusing mainly on uncertainty quantification and modeling of complex systems (e.g. transport systems or biological systems).

Besides research, I've been actively involved in organizing Summer Academy Discover, which helps high-school students find a meaningful future direction.

Peer-reviewed publications

Walking the Values in Bayesian Inverse Reinforcement Learning

Ondrej Bajgar, Alessandro Abate, Konstantinos Gatsis, and Michael A. Osborne

Proceedings of UAI 2024 (The 40th Conference on Uncertainty in Artificial Intelligence)

The goal of Bayesian inverse reinforcement learning (IRL) is to recover a posterior distribution over reward functions from a set of demonstrations by an expert optimizing for a reward unknown to the learner. The resulting posterior over rewards can then be used to synthesize an apprentice policy that performs well on the same or a similar task. A key challenge in Bayesian IRL is bridging the computational gap between the hypothesis space of possible rewards and the likelihood, often defined in terms of Q-values: vanilla Bayesian IRL needs to solve the costly forward planning problem - going from rewards to the Q-values - at every step of the algorithm, which may need to be done thousands of times. We propose a simple change: instead of sampling primarily in the space of rewards, we work primarily in the space of Q-values, since the computation required to go from Q-values to rewards is radically cheaper. Furthermore, this reversal of the computation makes it easy to compute the gradient, allowing efficient sampling using Hamiltonian Monte Carlo. We propose ValueWalk - a new Markov chain Monte Carlo method based on this insight - and illustrate its advantages on several tasks.
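The cost asymmetry can be made concrete in the tabular case. Below is a minimal numpy sketch (my own illustration, not code from the paper): going from Q-values to rewards is a single application of the inverse Bellman equation, whereas the forward direction requires iterating value iteration to convergence. The Boltzmann-soft state value and the array shapes are assumptions.

```python
import numpy as np

def reward_from_q(Q, P, gamma=0.99, beta=1.0):
    """Recover the reward consistent with given Q-values via the inverse
    Bellman equation (tabular case, known dynamics).

    Q : (S, A) array of state-action values
    P : (S, A, S) array of transition probabilities P[s, a, s']
    """
    # Soft (Boltzmann) state value; a hard max would correspond to a
    # perfectly greedy expert. This choice is an assumption of the sketch.
    V = np.log(np.exp(beta * Q).sum(axis=1)) / beta
    # One linear step: r(s, a) = Q(s, a) - gamma * E_{s' | s, a}[V(s')]
    return Q - gamma * P @ V
```

Because this map is a short differentiable expression, gradients with respect to Q are cheap, which is what makes Hamiltonian Monte Carlo in Q-space practical.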

Paper

Negative Human Rights as a Basis for Long-term AI Safety and Regulation

Ondrej Bajgar and Jan Horenovsky

Journal of Artificial Intelligence Research 76 (2023) 1043-1075

If future AI systems are to be reliably safe in novel situations, they will need to incorporate general principles guiding them to robustly recognize which outcomes and behaviours would be harmful. Such principles may need to be supported by a binding system of regulation, which would need the underlying principles to be widely accepted. They should also be specific enough for technical implementation. Drawing inspiration from law, this article explains how negative human rights could fulfil the role of such principles and serve as a foundation both for an international regulatory system and for building technical safety constraints for future AI systems.

Paper | Blog

A Boo(n) for Evaluating Architecture Performance

Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst

Proceedings of ICML 2018

We point out important problems with the common practice of using the best single-model performance for comparing deep learning architectures, and we propose a method that corrects these flaws. Each time a model is trained, one gets a different result due to random factors in the training process, such as random parameter initialization and random data shuffling. Reporting the best single-model performance does not appropriately address this stochasticity. We propose the normalized expected best-out-of-n performance (Boo(n)) as a way to correct these problems.
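The quantity underneath Boo(n) is an expected maximum of n i.i.d. draws from the distribution of single-run results. Here is a minimal sketch of the standard plug-in estimate of that expectation from m observed runs (my illustration; the paper's exact estimator and its normalization may differ):

```python
import numpy as np

def boo_n(results, n):
    """Plug-in estimate of the expected best-of-n performance from m
    observed training runs (assumes a higher-is-better metric)."""
    x = np.sort(np.asarray(results, dtype=float))  # ascending order statistics
    m = len(x)
    i = np.arange(1, m + 1)
    # Probability that the maximum of n i.i.d. draws from the empirical
    # distribution equals the i-th order statistic:
    weights = (i / m) ** n - ((i - 1) / m) ** n
    return float(weights @ x)

# e.g. validation accuracies from five training runs:
print(boo_n([0.71, 0.69, 0.73, 0.70, 0.72], n=5))
```

Unlike reporting the single best run, this estimate uses all m runs and stabilizes as more runs are added.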

Talk at ICML | Paper | Gitlab

Knowledge Base Completion: Baselines Strike Back

Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst

Proceedings of the 2nd Workshop on Representation Learning for NLP, ACL 2017

Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets such as FB15k and WN18. This paper shows that almost all models published on FB15k can be outperformed by an appropriately tuned baseline -- our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyper-parameter tuning or different training objectives. This should prompt future research to reconsider how the performance of models is evaluated and reported.
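For context, DistMult scores a triple with a single trilinear product of the head, relation, and tail embeddings. A minimal numpy sketch of that scoring function (my own illustration, not the reimplementation from the paper):

```python
import numpy as np

def distmult_scores(e_h, w_r, E):
    """DistMult: score a (head, relation) pair against every candidate tail.

    e_h, w_r : (d,) head-entity and relation embeddings
    E        : (num_entities, d) matrix of all entity embeddings
    """
    # Trilinear product sum_k e_h[k] * w_r[k] * e_t[k], batched over tails
    return E @ (e_h * w_r)
```

Ranking the true tail among all candidates by this score is how metrics such as hits@10 on FB15k are computed.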

Paper

Embracing Data Abundance

Ondrej Bajgar*, Rudolf Kadlec*, and Jan Kleindienst

ICLR 2017 (Workshop track)

There is a practically unlimited amount of natural language data available. Still, recent work in text comprehension has focused on datasets which are small relative to current computing possibilities. This article makes the case for the community to move to larger data. It shows that the improvements gained by adding more data (using a new BookTest dataset) are much larger than those achieved by recent attempts to gain performance through architectural improvements.

Paper | Poster

Finding a Jack-of-All-Trades: An Examination of Transfer Learning in Reading Comprehension

Rudolf Kadlec*, Ondrej Bajgar*, Peter Hrincar, and Jan Kleindienst

Machine Intelligence Workshop, NIPS 2016

Deep learning has proven useful on many NLP tasks, including reading comprehension. However, it requires a lot of training data, which are not available in some domains of application. We examine the possibility of using data-rich domains to pre-train models and then apply them in domains where training data are harder to get. Specifically, we train a neural-network-based model on two context-question-answer datasets - the BookTest and CNN/Daily Mail - and we monitor transfer to subsets of bAbI, a set of artificial tasks designed to test specific reasoning abilities, and of SQuAD, a question-answering dataset much closer to real-world applications. Our experiments show very limited transfer if the model isn't shown any training examples from the target domain; however, the results are promising if the model is shown at least a few target-domain examples. Furthermore, we show that the effect of pre-training is not limited to word embeddings.
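The comparison behind the last claim can be sketched as follows (a hypothetical PyTorch illustration, not the authors' code, which predates PyTorch): copy either all pre-trained parameters or only the word embeddings into the target-domain model before fine-tuning on the few target examples; if the full copy helps more, transfer is not limited to the embeddings. The parameter naming is an assumption.

```python
import torch

def transfer_parameters(pretrained, target, embeddings_only=False):
    """Copy parameters from a pre-trained model into a target-domain model.

    With embeddings_only=True, only word-embedding weights are copied,
    which lets one test whether transfer goes beyond the embeddings.
    """
    state = pretrained.state_dict()
    if embeddings_only:
        # assumes embedding parameters have "embedding" in their names
        state = {k: v for k, v in state.items() if "embedding" in k}
    # strict=False tolerates the partial copy in the embeddings-only case
    target.load_state_dict(state, strict=False)
```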

Paper

Text Understanding with the Attention Sum Reader Network

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst

Proceedings of ACL 2016

Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques, which currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context, as opposed to computing the answer from a blended representation of words in the document, as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. An ensemble of our models sets a new state of the art on all evaluated datasets.
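The answer-selection step can be summarized in a few lines. Below is a minimal numpy sketch (the bidirectional GRU encoders that produce the document and query encodings are omitted, and the function signature is my own):

```python
import numpy as np

def attention_sum_answer(doc_ids, doc_enc, query_enc, candidates):
    """Answer-selection step of the Attention Sum Reader.

    doc_ids    : (L,) word ids of the document tokens
    doc_enc    : (L, d) contextual encodings of the document tokens
    query_enc  : (d,) encoding of the question
    candidates : iterable of candidate answer word ids
    """
    logits = doc_enc @ query_enc          # dot-product attention over positions
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax over document positions
    # "Attention sum": pool the probability mass over all occurrences of
    # each candidate word and pick the candidate with the most mass.
    doc_ids = np.asarray(doc_ids)
    return max(candidates, key=lambda c: probs[doc_ids == c].sum())
```

Summing attention over repeated occurrences of a word is what distinguishes this pointer-style selection from blending token representations into a single answer vector.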

Paper

* marks shared first authorship.

Working papers