What
My project is about giving numerical methods a probabilistic treatment. This means that methods such as gradient descent and numerical integration, which are used to find approximate solutions to difficult problems, can quantify their own approximation error. These methods become even more challenging in high dimensions, that is, for problems with thousands of variables and parameters. In short, my project aims to advance probabilistic numerics in high dimensions.
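To make "quantifying approximation error" concrete, here is a minimal illustrative sketch (not the project's code) of Bayesian quadrature, a canonical probabilistic-numerics method: a Gaussian-process prior on the integrand turns a one-dimensional integral into a posterior distribution, whose mean is the estimate and whose variance reports the remaining approximation error. The length scale, node placement, and test integrand are arbitrary choices for the example.

```python
import math
import numpy as np

def bayes_quadrature(f, nodes, length_scale=0.2, jitter=1e-9):
    """Bayesian quadrature of f over [0, 1] with a squared-exponential GP
    prior: returns a posterior mean AND variance for the integral."""
    x = np.asarray(nodes, dtype=float)
    y = f(x)
    l = length_scale
    # Gram matrix of the kernel k(s, t) = exp(-(s - t)^2 / (2 l^2))
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * l ** 2))
    K += jitter * np.eye(len(x))  # small jitter for numerical stability
    # Kernel mean embedding z_i = ∫_0^1 k(t, x_i) dt (closed form via erf)
    z = np.array([
        l * math.sqrt(math.pi / 2)
        * (math.erf((1 - xi) / (l * math.sqrt(2)))
           + math.erf(xi / (l * math.sqrt(2))))
        for xi in x
    ])
    # Prior variance of the integral: ∫∫ k(s, t) ds dt on the unit square
    kk = (l * math.sqrt(2 * math.pi) * math.erf(1 / (l * math.sqrt(2)))
          - 2 * l ** 2 * (1 - math.exp(-1 / (2 * l ** 2))))
    w = np.linalg.solve(K, z)   # quadrature weights
    mean = float(w @ y)         # posterior mean of the integral
    var = float(kk - z @ w)     # posterior variance = remaining uncertainty
    return mean, var

# Example: ∫_0^1 t^2 dt = 1/3; the variance quantifies the residual error.
mean, var = bayes_quadrature(lambda t: t ** 2, np.linspace(0, 1, 10))
```

The point of the example is the return type: instead of a bare number, the method reports a distribution over the true integral, which a downstream pipeline can propagate.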
Why
So much of modern statistics and scientific computing relies on numerical methods for optimization, integration, or basic linear algebra. We usually treat these methods as exact and ignore their errors. However, when such a method is one component of a larger computational pipeline, its error can propagate, and the consequences can be dramatic: models that end up far too overconfident.
Beyond this, Bayesian optimization (a branch of probabilistic numerics) offers guarantees for global optimization that gradient descent algorithms do not, so there is also room to improve the performance of modern solvers.
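As an illustration of the idea, the following is a self-contained sketch of a basic Bayesian optimization loop: a Gaussian-process surrogate plus an expected-improvement acquisition function, run on the standard Forrester test function. The kernel, length scale, and grid search over the acquisition function are simplifying assumptions for the example, not the project's actual method.

```python
import math
import numpy as np

def forrester(x):
    # Forrester et al. benchmark; global minimum near x ≈ 0.757 on [0, 1]
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def gp_posterior(X, y, Xs, l=0.1, jitter=1e-8):
    """Zero-mean GP posterior mean/std with a squared-exponential kernel."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * l ** 2))
    K = k(X, X) + jitter * np.eye(len(X))
    Ks = k(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, y_best):
    # EI for minimisation: expected gain over the incumbent best value
    z = (y_best - mu) / sigma
    Phi = np.array([0.5 * (1 + math.erf(zi / math.sqrt(2))) for zi in z])
    phi = np.exp(-z ** 2 / 2) / math.sqrt(2 * math.pi)
    return (y_best - mu) * Phi + sigma * phi

grid = np.linspace(0, 1, 201)
X = np.array([0.0, 0.5, 1.0])   # initial design
y = forrester(X)
for _ in range(15):             # BO loop: fit GP, maximise EI, evaluate
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, forrester(x_next))
```

Unlike gradient descent, the loop never follows a local slope: the surrogate's uncertainty drives it to trade off exploring unseen regions against exploiting the current best, which is what makes global guarantees possible.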
How
In high dimensions, most of the intuitive behaviour we know from two and three dimensions falls apart. To build algorithms that account for this, it will be important to understand these differences mathematically and statistically.
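One well-known instance of this breakdown is distance concentration: as the dimension grows, the pairwise distances between random points become nearly indistinguishable, so notions like "near" and "far" lose their meaning. A small illustrative experiment (the point counts and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n_points=100):
    """(max - min) / min over all pairwise distances of random points in
    [0, 1]^dim. A large contrast means 'near' and 'far' are meaningful;
    a contrast near 0 means every point looks equally far from every other."""
    pts = rng.uniform(size=(n_points, dim))
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    d = dist[np.triu_indices(n_points, k=1)]  # unique pairs only
    return (d.max() - d.min()) / d.min()

low = relative_contrast(2)      # familiar 2-D geometry: large contrast
high = relative_contrast(1000)  # a thousand dimensions: contrast collapses
```

Any algorithm whose design leans on low-dimensional geometric intuition, nearest neighbours, Euclidean balls, uniform grids, quietly degrades in exactly this way, which is why the high-dimensional regime needs its own analysis.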
Some models, such as neural networks, are built with gradient descent in mind. Another element of the project will therefore be the search for new models that could be built with probabilistic numerics at the computational level.
SSR
There is no doubt that artificial intelligence will continue to permeate our everyday lives. Principled algorithms that can quantify their own uncertainty are crucial for the trustworthy future deployment of artificial intelligence.