Machine Learning for Physicists - summer term 2019

Please visit the official domain machine-learning-for-physicists.org, where we have collected all the videos and slides from the 2017 "Machine Learning for Physicists" lecture series for quick download!

 

Basic Information about this lecture series

Description: This is a course introducing modern techniques of machine learning, especially deep neural networks, to an audience of physicists. Neural networks can be trained to perform many challenging tasks, including image recognition and natural language processing, just by showing them many examples. While neural networks were already introduced in the 1950s, they have really taken off in the past decade, with spectacular successes in many areas. Their performance now often surpasses that of humans, as demonstrated by recent achievements in handwriting recognition and in winning the game of 'Go' against expert human players. They are also being considered more and more for applications in physics, ranging from predicting material properties to analyzing phase transitions.

Contents: We will cover the basics of neural networks (backpropagation), convolutional networks, autoencoders, restricted Boltzmann machines, and recurrent neural networks, as well as the recently emerging applications in physics. Time permitting, we will address other topics, like the relation to spin glass models, curriculum learning, reinforcement learning, adversarial learning, active learning, "robot scientists", deducing nonlinear dynamics, and dynamical neural computers.

Prerequisites: You will only need matrix multiplication and the chain rule, so the course will be accessible to bachelor, master, and graduate students alike. However, knowledge of any computer programming language will make it much more fun. We will sometimes present examples using the Python programming language, a modern interpreted language with powerful linear algebra and plotting capabilities.
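
To give a feeling for the (modest) mathematical and programming level involved, here is a minimal sketch in Python with numpy; it is a hypothetical illustration, not taken from the lecture materials. It applies one network layer by matrix multiplication and then differentiates a simple cost function by hand using the chain rule.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    np.random.seed(0)
    W = np.random.randn(3, 5)     # weight matrix of a single layer
    b = np.random.randn(3)        # bias vector
    x = np.random.randn(5)        # input vector

    # Forward pass: one matrix multiplication (the first prerequisite)
    z = W @ x + b
    y = sigmoid(z)

    # Cost C = 0.5 * sum(y**2); chain rule (the second prerequisite):
    # dC/dz_i = dC/dy_i * dy_i/dz_i = y_i * y_i*(1 - y_i), since sigmoid'(z) = y*(1 - y)
    dC_dz = y * y * (1.0 - y)
    dC_dW = np.outer(dC_dz, x)    # dC/dW_ij = dC/dz_i * x_j
    dC_db = dC_dz

    print(dC_dW.shape, dC_db.shape)   # (3, 5) (3,)

Backpropagation, covered at the start of the course, is nothing more than this chain-rule bookkeeping applied layer by layer.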

Book: The first parts of the course will rely heavily on the excellent, freely available online book by Michael Nielsen, "Neural Networks and Deep Learning".

Software: Modern standard computers are powerful enough to run neural networks in a reasonable time. The following software packages help to keep the programming effort low; it is possible to implement an advanced structure like a deep convolutional neural network in only about a dozen lines of code, which is quite amazing (see the short Keras sketch below the list):

  • Python is a widely used high-level programming language for general-purpose programming; both TensorFlow and Keras are used through Python modules. We highly recommend using the 3.x branch (cf. Python 2 vs. Python 3).
  • TensorFlow is a package for dataflow and differentiable programming, developed by Google. It is a symbolic math library used for a broad range of tasks, including machine-learning applications such as neural networks. For that purpose, TensorFlow provides the low-level tools (multi-dimensional arrays, convolutional layers, efficient computation of the gradient, ...); a short sketch of the automatic gradient computation follows after this list.
  • Keras is a high-level framework for neural networks, running on top of TensorFlow. Designed to enable fast experimentation with deep neural networks, it focuses on being minimal, modular and extensible.
  • Matplotlib is a plotting library for the Python programming language. We use it to visualize our results.
  • Jupyter is a browser-based application that allows you to create and share documents containing live (Python) code, equations, visualizations, and explanatory text. Jupyter thus serves a similar purpose to Mathematica notebooks.
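
To make the point about the efficient computation of gradients concrete, here is a minimal sketch, not part of the official course material and assuming TensorFlow's eager-execution (2.x-style) API with tf.GradientTape; it differentiates a small cost function automatically with respect to a parameter.

    import tensorflow as tf

    # A small quadratic cost C(w) = sum_i (w*x_i - y_i)**2,
    # differentiated automatically with respect to the parameter w.
    x = tf.constant([1.0, 2.0, 3.0])
    y_target = tf.constant([2.0, 4.0, 6.0])
    w = tf.Variable(0.5)

    with tf.GradientTape() as tape:
        cost = tf.reduce_sum((w * x - y_target) ** 2)

    grad = tape.gradient(cost, w)   # dC/dw, obtained by automatic differentiation
    print(float(grad))              # -42.0 for these numbers

For a full neural network, the same mechanism computes the gradient of the cost with respect to all weights and biases at once, which is exactly what backpropagation does.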

All the software above is open source and freely available for a large number of platforms. See also the installation instructions.
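
As a rough illustration of the "dozen lines of code" claim above, a small convolutional network for 28x28 grayscale images (for example handwritten digits) could be set up with the tensorflow.keras API roughly as follows; the layer sizes here are placeholder choices for this sketch, not those used in the lecture.

    from tensorflow.keras import layers, models

    # A small convolutional network for 28x28 grayscale images (e.g. digits 0-9)
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(10, activation='softmax'),   # 10 output classes
    ])

    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()

Training then only requires a single call to model.fit with the input images and their labels.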

 
