PhD topics & MSc projects
PhD topics
I’m always looking for potential PhD candidates. Funding nowadays is always difficult to find, but the search for a PhD should always start with a fascinating topic, before we dive into funding opportunities. The examples below are PhD topic “prototypes”. I prefer to tailor projects to the knowledge and interests of candidates, as long as they align with my own research interests. Typically, I also aim for PhD topics that align with major research projects, as this makes it easier to embed students into existing teams such that they acquire professional skills around teamwork and collaboration “on the fly”. The descriptions below are therefore just an inspiration or a starting point before we flesh out your individual PhD project.
Higher-dimensional PDE solvers
Almost all mainstream PDE solvers in my work are 2d or 3d (with a few exceptions). However, a lot of the problems of the people using my code are, in some sense, higher-dimensional. The prime example is solvers where we don’t know some parameters precisely. In this case, most people repeatedly run simulations with varying parameters until the outcomes match what they are after. Within an optimisation cycle, each such run is a forward solve. Within a Bayesian framework, they would probably run multiple forward solves for different parameters in parallel. All codes in the field follow this approach to some degree.
I think that this is a poor idea: We could solve a (d+n)-dimensional problem right from the start (n being the number of uncertain parameters), i.e. evolve multiple PDE “shots” on a higher-dimensional grid. I assume that this saves compute in total, as different parameters will not change the solution everywhere all the time. Therefore, we can employ a (dynamically) adaptive mesh both in the physical space (d) and the parameter space (n). Such solvers do not really exist yet.
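As a rough illustration of the idea (and emphatically not Peano’s actual API; every name below is made up for this sketch), a single refinement criterion could weigh the solution’s variation along the physical axes against its sensitivity along the parameter axes, so that the mesh only refines along a parameter direction where that parameter actually changes the solution:

```cpp
// Minimal sketch of a refinement criterion over a combined (d+n)-dimensional
// cell that spans physical space and parameter space. All names and the
// criterion itself are illustrative assumptions, not the real solver logic.
#include <array>
#include <cmath>
#include <cstddef>
#include <iostream>

constexpr std::size_t d = 2;  // physical dimensions
constexpr std::size_t n = 1;  // uncertain parameters

struct Cell {
  std::array<double, d + n> centre;    // (x, theta) coordinates
  std::array<double, d + n> size;      // extent per axis
  std::array<double, d + n> gradient;  // solution variation per axis (assumed given)
};

enum class Action { Keep, RefinePhysical, RefineParameter };

// Hypothetical criterion: refine along the axis with the largest scaled
// variation, but only if it exceeds a tolerance.
Action refinementCriterion(const Cell& cell, double tolerance) {
  double      maxVariation = 0.0;
  std::size_t axis         = 0;
  for (std::size_t i = 0; i < d + n; ++i) {
    const double variation = std::abs(cell.gradient[i]) * cell.size[i];
    if (variation > maxVariation) {
      maxVariation = variation;
      axis         = i;
    }
  }
  if (maxVariation < tolerance) return Action::Keep;
  return axis < d ? Action::RefinePhysical : Action::RefineParameter;
}

int main() {
  // This cell has zero sensitivity along the parameter axis, so it is only
  // ever refined in physical space; the parameter direction stays coarse.
  Cell cell{{0.5, 0.5, 0.1}, {0.25, 0.25, 0.5}, {0.01, 0.02, 0.0}};
  switch (refinementCriterion(cell, 1e-3)) {
    case Action::Keep:            std::cout << "keep\n";             break;
    case Action::RefinePhysical:  std::cout << "refine physical\n";  break;
    case Action::RefineParameter: std::cout << "refine parameter\n"; break;
  }
  return 0;
}
```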
With Peano, we have a framework which, in theory, can already host such higher-dimensional adaptive meshes. However, there are a few key challenges to address before we use it for this kind of use case:
- How do we introduce such a complex piece of software while still letting the user program and reuse code that is all very traditionally d-dimensional?
- Can we make the grid unfold along additional dimensions rather than refine? At the moment, the number of additional dimensions n has to be fixed, and their maximum extent has to be known. Is it possible to grow n dynamically, or to expand along a dimension dynamically?
- Can we use the additional dimensions to exploit a supercomputer more efficiently, and which scenarios really benefit from such a feature?
Time stepping
The solvers in my group typically use fixed time stepping (where all parts of the mesh advance in time at the same speed) or something like bucketed time stepping. Here, finer mesh parts advance with smaller time step sizes, but this “smaller” is dictated by the mesh resolution difference. We have some examples where mesh parts can advance in time totally anarchically, anticipating the information propagation speed of the physics. All of these schemes have massive issues:
- The load balancing is hard;
- the gains in efficiency from only advancing cells where “things happen” are eaten up by administrative overhead and by the fact that GPUs are particularly good at handling large, regular workloads.
For these reasons, adaptive time stepping is still rare for very large simulations. We want to study if we can construct time stepping schemes which are faster than everything out there by combining a couple of different ideas: Can we intelligently balance the mesh adaptivity against the time step sizes and the compute efficiency – notably when a part of the mesh resides on a GPU? Can we update certain cells that couple, for example, a GPU partition with the CPU first, and then let the other cells catch up? Can we learn all of this behaviour and feed it into some time stepping logic?
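To make the notion of bucketed time stepping concrete, here is a deliberately simplified sketch: a toy right-hand side, a 2:1 time step ratio per mesh level, and none of the load balancing or GPU concerns discussed above. Cells on finer levels subcycle until they have caught up with the coarse time.

```cpp
// Toy sketch of bucketed (level-based) local time stepping, assuming a
// 2:1 refinement ratio: a cell on level l advances with dt/2^l, and all
// buckets synchronise at the end of each coarse step. Illustration only,
// not our production scheme.
#include <cstdio>
#include <vector>

struct Cell {
  int    level;  // 0 = coarsest
  double t;      // local simulation time
  double u;      // some evolved quantity
};

// Advance one cell by its local time step size (placeholder ODE du/dt = -u).
void advance(Cell& cell, double dt) {
  cell.u += dt * (-cell.u);
  cell.t += dt;
}

int main() {
  std::vector<Cell> cells = {{0, 0.0, 1.0}, {1, 0.0, 1.0}, {2, 0.0, 1.0}};
  const double coarseDt    = 0.1;
  const int    coarseSteps = 3;

  for (int step = 0; step < coarseSteps; ++step) {
    const double targetTime = (step + 1) * coarseDt;
    // Each bucket subcycles until it has caught up with the coarse time.
    for (auto& cell : cells) {
      const double localDt = coarseDt / (1 << cell.level);
      while (cell.t < targetTime - 1e-12) {
        advance(cell, localDt);
      }
    }
    for (const auto& cell : cells) {
      std::printf("level %d: t=%.3f u=%.5f\n", cell.level, cell.t, cell.u);
    }
  }
  return 0;
}
```

The interesting research questions above start exactly where this sketch stops: which cells to update first, how to keep GPU partitions busy with large regular batches, and how to learn a good ordering.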
Hierarchical, variable precision
Over the past years, we have proposed several C++ language extensions, realised through attributes, which bring the memory footprint of a solver down and pick optimised data layouts: The programmer, for example, annotates a code to tell the compiler that a floating-point variable holds only 10 valid digits and that the data is organised as an array of structs (AoS). Our Clang compiler extensions then take this information and alter the compiled code: It really stores the data with only the 10 valid digits, but converts it into native C++ types ahead of the actual compute. It can also reorganise data from AoS into SoA and rewrite the compute kernels to benefit from the altered data layout – notably if these kernels end up on the GPU.
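The snippet below only conveys the flavour of such annotations. The attribute spellings (precision::digits, layout::soa) are invented for this sketch and do not match our actual extension; a stock compiler will simply ignore them (with a warning) rather than act on them.

```cpp
// Sketch of annotation-driven precision and layout hints. The attributes are
// placeholders: a standard compiler ignores unknown attributes, while an
// extended toolchain could use them to compress storage and switch AoS to SoA.
#include <cstddef>
#include <cstdio>

struct Particle {
  // Hypothetical annotation: "store this with ~10 valid decimal digits".
  // The extended compiler may pack the value into fewer bytes in memory and
  // expand it back to a native double right before it enters a kernel.
  [[precision::digits(10)]] double position[3];
  [[precision::digits(10)]] double velocity[3];
  [[precision::digits(6)]]  double charge;
};

// Hypothetical annotation on the container: the code is written as an array
// of structs (AoS), but the compiler is free to hold the data as a struct of
// arrays (SoA) and to rewrite the loop below for the altered layout.
[[layout::soa]] Particle particles[1024];

double kineticEnergyProxy() {
  double result = 0.0;
  for (std::size_t i = 0; i < 1024; ++i) {
    // The programmer writes plain AoS accesses; layout changes are the
    // compiler's business.
    result += particles[i].velocity[0] * particles[i].velocity[0];
  }
  return result;
}

int main() {
  for (std::size_t i = 0; i < 1024; ++i) particles[i].velocity[0] = 0.5;
  std::printf("%.2f\n", kineticEnergyProxy());
  return 0;
}
```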
The logical follow-up work now is to apply all of this to higher-dimensional arrays. At the moment, we mainly study 1d data such as a series of particles. How can these ideas be applied to 2d or 3d fields of quantities?
From here on, it is a natural question to start to study these ideas for iterative algorithms – as they are omnipresent in scientific computing – and to apply them to dynamic precision. For example, we could store a 2d array of quantities, where we know that each entry holds up to 10 valid digits, as a sequence of representations: the first one has only 1 valid digit, the second one adds a second digit, and so forth. If we have such a hierarchical representation, a compiler can generate compute code which starts to compute with the 1-digit representation and, in parallel to the compute, loads the next, more accurate representation, which is then used in a subsequent iteration of the algorithm to refine the outcome successively.
With such a mindset, it should be possible to determine reasonable precisions dynamically, i.e. the user does not have to know how many bits matter anymore; the system can find out on-the-fly by assessing whether additional bits of accuracy make a difference or not.
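As a toy illustration of such a hierarchical representation (assuming, purely for this sketch, that each tier is stored as a float and that there are only two tiers), a value can be split into a coarse part plus a correction, and an algorithm can already start computing once the first tier has arrived:

```cpp
// Minimal sketch of a two-tier hierarchical value: tier 0 is a coarse
// approximation that is loaded first, tier 1 holds the remaining correction
// and may be streamed in while the solver already iterates on tier 0.
#include <cmath>
#include <cstdio>

struct HierarchicalValue {
  float tier0;  // coarse representation, available immediately
  float tier1;  // correction, can arrive later
};

HierarchicalValue encode(double value) {
  const float coarse = static_cast<float>(value);
  return {coarse, static_cast<float>(value - coarse)};
}

double decode(const HierarchicalValue& v, int tiersAvailable) {
  double result = v.tier0;
  if (tiersAvailable > 1) result += v.tier1;
  return result;
}

int main() {
  const double exact = 3.14159265358979323846;
  const HierarchicalValue stored = encode(exact);

  // The error drops by orders of magnitude once the second tier is used,
  // which is exactly the information a dynamic-precision scheme would need.
  for (int tiers = 1; tiers <= 2; ++tiers) {
    const double approx = decode(stored, tiers);
    std::printf("tiers=%d  value=%.17g  error=%.3g\n",
                tiers, approx, std::abs(approx - exact));
  }
  return 0;
}
```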
Numerical Relativity in the Realm of New Physics

Collaborative project with Baojiu Li (ICC)
Preferred funding stream: CSC or DDS
Numerical Relativity is the holy grail of computational physics. Yet, due to the highly nonlinear nature of the gravitational field equations, it took nearly a century after Einstein published General Relativity (GR) before long-term stable simulations of black holes became possible. Since then, there has been huge progress in this field, and interest has grown much stronger after the first gravitational wave (GW) detection less than a decade ago. Already, such detections have been used to shed light on the mysterious accelerated expansion of our Universe, ruling out classes of gravity theories beyond GR.
With the advent of a new generation of GW detectors, GW cosmology will evolve into a mature branch of astronomy in the coming decades. The data collected will allow people to test new theories of fundamental physics with unprecedented precision. However, even today, simulating the evolution of compact-object systems such as black holes and neutron stars in various theories of gravity is still a big challenge.
In this project, the candidate will work on the scientific development and applications of our numerical relativity simulation code, ExaGRyPE, developed by a collaboration between the Physics and Computer Science departments at Durham University. There is a range of potential directions this project can take. Some of these directions sit more in Physics, others have a stronger Computer Science flavour. From a numerics/computer science point of view, I’m particularly interested in
- the coupling of the code base with a multigrid solver. We have a prototype of a solver already, and we have ExaGRyPE up and running (obviously), but the actual coupling between the two of them is largely unexplored. If we manage to couple the two types of solvers, we can do things that are currently out of reach for many competitor codes in the field. Notably, we can address something like the holy grail of this domain (from an HPC point of view): We can start to look into implicit time stepping schemes and hence allow for way bigger time step sizes even as we increase the resolution further. The time step constraints are currently a major showstopper for many calculations. The other interesting option offered by a multigrid solver is that we can treat the Einstein constraints explicitly: Each solution has to fulfil certain conditions. At the moment, we do not enforce these conditions, but evaluate them after each time step and add them as a penalty for the next one, pushing the solution in a direction that is physically valid (penalty approach; see the toy sketch after this list). With a multigrid solver, we could finally solve the constraints exactly, which should improve the solver’s stability.
- the reformulation of the underlying Physics in a higher-dimensional sense. There is a theory that we live on, and observe phenomena on, a brane world, i.e. something like a shower curtain embedded into a higher-dimensional space. The formulae for this idea do exist, but there are no larger codes that can simulate it, as they would have to evolve the PDE in 6-, 7-, …-dimensional spaces, only to then construct a submanifold through this space. ExaGRyPE is built on top of Peano, which can handle such “high-dimensional” meshes, and there is, at the moment, no other bigger code that could do this, which makes this project potentially groundbreaking.
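The toy sketch below illustrates the difference between the penalty approach and an exact constraint solve on a deliberately simple analogue (a point that should stay on the unit circle, not the Einstein constraints): the penalty variant nudges the solution back towards the constraint surface after each step, while the projected variant enforces the constraint exactly, which is the role a multigrid constraint solve would play in the real code.

```cpp
// Toy analogue of constraint handling in a time-stepping loop. Explicit Euler
// for a rotation drifts off the unit circle; the penalty variant damps the
// constraint violation after each step, the projected variant removes it.
#include <cmath>
#include <cstdio>

struct State { double x, y; };

double constraintViolation(const State& s) {
  return s.x * s.x + s.y * s.y - 1.0;  // should be zero
}

State eulerStep(State s, double dt) {
  return {s.x - dt * s.y, s.y + dt * s.x};  // rotation, drifts outwards
}

State penaltyStep(State s, double dt, double kappa) {
  s = eulerStep(s, dt);
  const double c = constraintViolation(s);
  s.x -= kappa * dt * c * s.x;  // push back towards the constraint surface
  s.y -= kappa * dt * c * s.y;
  return s;
}

State projectedStep(State s, double dt) {
  s = eulerStep(s, dt);
  const double r = std::sqrt(s.x * s.x + s.y * s.y);
  return {s.x / r, s.y / r};  // enforce the constraint exactly
}

int main() {
  const double dt = 0.01;
  State penalty{1.0, 0.0}, projected{1.0, 0.0};
  for (int step = 0; step < 1000; ++step) {
    penalty   = penaltyStep(penalty, dt, 10.0);
    projected = projectedStep(projected, dt);
  }
  std::printf("penalty violation:   %.3e\n", constraintViolation(penalty));
  std::printf("projected violation: %.3e\n", constraintViolation(projected));
  return 0;
}
```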
MSc projects
Every year, I propose a couple of MSc/BSc topics at the department. These topics are tailored to the projects that I currently drive, i.e. I always aim for projects that can, in theory, make a direct contribution towards a larger research theme. That does not mean that I only supervise such topics. If you are interested in a particular area of my research which is not covered by the current proposals, please come and see me to discuss whether we can tailor a bespoke project around your particular interests.
