Artificial Intelligence for Physics and Scientific Discovery

Can a computer make substantial novel contributions to science? Can a computer even be 'creative'? These are questions that the Marquardt group has been exploring since 2017.

 

Quantum computing is one area where we have pioneered the use of deep neural networks and their ability to discover fresh solutions to hard problems. As quantum computers scale up, one wants both to tame the resulting complexity and to exploit it, and modern machine learning is well suited to this challenge. We have shown how to find improved solutions to important problems in quantum computation via 'reinforcement learning', a technique for the automated discovery of strategies. An early example of our work in this area is the discovery of quantum error correction procedures from scratch, back in 2018. Since then, we have followed up on this by significantly boosting the size of the quantum circuits that can be found. We have illustrated the power of machine learning for quantum technologies in many other applications, including the prediction and optimization of photonic crystal geometries, better quantum feedback schemes, the optimization of quantum circuits, and fast neural-network agents applied to superconducting qubits in an experiment.
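To make the idea of reinforcement learning as "automated discovery of strategies" concrete, here is a minimal, hypothetical sketch of a policy-gradient (REINFORCE) agent learning which of two stochastic actions pays off. It is a toy illustration only, not the group's actual quantum-error-correction agent; all names and numbers are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                    # policy parameters (one logit per action)
true_rewards = np.array([0.2, 0.8])    # hidden reward probabilities of each action

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)                     # sample an action from the policy
    r = float(rng.random() < true_rewards[a])  # stochastic reward from "the environment"
    grad = -p
    grad[a] += 1.0                             # gradient of log pi(a | theta)
    theta += 0.1 * r * grad                    # REINFORCE update, learning rate 0.1

p = softmax(theta)
print(p)  # the policy has shifted towards the better action (index 1)
```

The same gradient-from-sampled-trajectories principle scales to discovering multi-step strategies, such as sequences of quantum gates.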

 

On a more general level, we are interested in "artificial scientific discovery". This is a catch-all term encompassing all kinds of methods that share one goal: automating the scientific process. This includes many nontrivial steps: coming up with suitable hypotheses, designing experimental setups, choosing which exact experiment to perform so as to obtain the maximum amount of information, analyzing the results, and continuing this cycle. We are making forays into this area, for example by introducing novel ways to discover the "essential features" of a complex physical system, by using active learning to pinpoint the most informative experiments, by inventing new algorithms to search for optimal and interpretable experimental designs, and by describing observations using automatically discovered symbolic equations.
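As a concrete illustration of the last step, here is a hypothetical mini-example of symbolic equation discovery: a brute-force search over a handful of candidate formulas, keeping the one that best explains some "observations". Real symbolic-regression tools use far richer search spaces (e.g. genetic programming or neural-guided search); the data and candidate set here are invented for this sketch.

```python
import numpy as np

x = np.linspace(1, 5, 20)
y = 3.0 * x**2            # pretend these are measured data points

candidates = {
    "x":      lambda x: x,
    "x**2":   lambda x: x**2,
    "x**3":   lambda x: x**3,
    "exp(x)": lambda x: np.exp(x),
}

best_name, best_err, best_coef = None, np.inf, None
for name, f in candidates.items():
    basis = f(x)
    coef = (basis @ y) / (basis @ basis)       # least-squares coefficient
    err = np.mean((coef * basis - y) ** 2)     # mean squared residual
    if err < best_err:
        best_name, best_err, best_coef = name, err, coef

print(f"discovered: y = {best_coef:.2f} * {best_name}")  # -> y = 3.00 * x**2
```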

 

Right now, no general "artificial scientist" exists that can tackle this grand challenge in arbitrary scenarios. The field is still in its infancy. Approaching the long-term goal requires innovations on many different levels. This includes conceptual advances but also progress in efficient machine learning techniques. Above all, it requires us to develop good insights into how scientists deal with a variety of challenging research tasks: how the "mind of a scientist" works.

 

A short sample of selected illustrative references (please also refer to the division publication list): 2018 Reinforcement Learning for Quantum Error Correction, 2024 Discovering Quantum Circuits for Fault-Tolerant Encoding, 2021 Deep Learning for Topological Band Structures, 2021 Discovering Essential Features of a Physical System, 2024 Automated Discovery of Scattering Setup Designs

 

Can physics also benefit machine learning?

The relentless drive towards larger neural networks (e.g. for large language models) imposes ever-increasing demands on the resources needed to train and run such models. Eventually, this becomes unsustainable. At the same time, it seems quite wasteful to employ digital computers, which have been engineered to run mathematical algorithms without a single error, to train neural networks, with their famously fuzzy behaviour. A growing community of researchers is trying to address this challenge by designing "physical learning machines". These would have the same functionality as digital neural networks, i.e. they can be trained on a large set of examples. However, under the hood they exploit the available microscopic physical interactions to achieve large gains in energy efficiency and parallelism. This new field is sometimes known as "neuromorphic computing", because it is inspired by the way the brain works.

 

A large variety of physical platforms are in principle available to build neuromorphic learning machines. These include nonlinear or even linear optics (e.g. integrated photonics or free-space optics), analog electronic systems with novel elements like memristors, coupled laser arrays, spintronic systems, mechanical devices, and many more.

 

One important question we have explored is that of training. Are there efficient ways to exploit the physical interactions not only for generating the output from the input, but also for obtaining the training gradients? In our work on "Hamiltonian Echo Backpropagation", we have shown how one can obtain the gradients and even apply the parameter updates entirely through the physical dynamics of a system, e.g. in nonlinear optics. This is the only known procedure of its kind. In other work, we have explored "Equilibrium Propagation", another category of physics-based training that applies to the large class of dissipative, thermalizing systems.
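The core trick of Equilibrium Propagation can be sketched in a few lines: the gradient is estimated by comparing the system's energy derivative at two equilibria, one "free" and one gently "nudged" towards the target, with no explicit backpropagation. The scalar system below (energy E(s, w) = s²/2 − w·x·s, equilibrium s* = w·x) is a hypothetical teaching example, not one of the physical platforms discussed above.

```python
x, y = 2.0, 3.0          # input and target
w = 0.5                  # trainable parameter
beta, lr = 1e-3, 0.05    # nudging strength and learning rate

def free_equilibrium(w):
    return w * x                             # argmin_s E(s, w)

def nudged_equilibrium(w):
    return (w * x + beta * y) / (1 + beta)   # argmin_s [E + beta * cost]

def dE_dw(s):
    return -x * s                            # partial derivative of the energy

for _ in range(200):
    s0 = free_equilibrium(w)
    sb = nudged_equilibrium(w)
    grad = (dE_dw(sb) - dE_dw(s0)) / beta    # Equilibrium Propagation estimate
    w -= lr * grad

print(w * x)  # the free equilibrium now reproduces the target y = 3.0
```

For small beta, the two-equilibria estimate converges to the true loss gradient, which is what makes the scheme attractive for physical hardware: both equilibria are states the system relaxes into by itself.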

 

Going forward, there are many open questions at this new frontier of energy-efficient machine learning, and physicists are perfectly placed to invent new platforms and training techniques. Moreover, it remains wide open in which settings these new machines will be able to replace existing, highly optimized digital computers.

 

Some selected illustrative references: 2023 Self-Learning Machines with Hamiltonian Echo Backpropagation, 2024 Nonlinear Neuromorphic Computing based on Linear Scattering, 2024 Quantum Equilibrium Propagation

 

The toolbox

First of all, everything we do is grounded in our understanding of physics, encompassing areas like nonlinear dynamical systems, wave physics, quantum optics, quantum many-body theory, and quantum computing. On the technical level, besides numerical techniques in general, we use a variety of machine learning approaches. This includes methods like deep reinforcement learning, transformers, generative techniques (like normalizing flows and diffusion models), and more generally all kinds of optimization techniques, sometimes based on automatic differentiation. Additionally, on the more conceptual level, we employ insights from fields like information theory and representation learning.
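Since automatic differentiation underpins many of the optimization techniques mentioned above, here is a minimal forward-mode sketch using dual numbers. This is a hypothetical teaching example; in practice one would use a full-featured framework such as JAX or PyTorch.

```python
class Dual:
    """A number carrying its value and its derivative along for the ride."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule, applied automatically at every multiplication
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def grad(f):
    # derivative of a scalar function, obtained by seeding the dual part with 1
    return lambda x: f(Dual(x, 1.0)).der

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
print(grad(f)(4.0))                   # -> 26.0
```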

 

Some reviews, tutorials, and courses: 2021 Machine Learning and Quantum Devices Les Houches Lecture Notes, 2019 Machine Learning for Physicists Recordings, 2023 Advanced Machine Learning for Artificial Scientific Discovery Recordings, 2023 Review Artificial Intelligence and Machine Learning for Quantum Technologies, 2024 Machine Learning in Three Easy Lessons

Joining the team

We are a vibrant team that enjoys lively scientific discussions. If you are interested in working on any of these rapidly developing topics in our group, at any level (bachelor/master student, PhD student, postdoc), please contact us by sending an email to Florian.Marquardt@mpl.mpg.de, always cc'ing Gesine.Murphy@mpl.mpg.de! Note: unlike many US institutions, we accept applications and inquiries year-round. For PhD applications, please send a CV and the name of at least one expert reference (potential recommendation letter writer). PhD positions are for three years, with a default extension of one year. For postdoc applications, please send a CV, a list of publications, and the names of two expert references. Postdoc positions are typically for two years, with extension possible upon mutual agreement. For bachelor/master student or project student inquiries, please send your grades so far. [Disclaimer for the PhD and postdoc positions: The Max Planck Society has set itself the goal of employing more severely handicapped people. Applications from the severely handicapped are expressly welcome. The Society wants to increase the proportion of women in areas where they are underrepresented. Women are therefore expressly invited to apply.]

 
