The recent explosion of deep learning applications creates an urgent need for new, energy-efficient neuromorphic hardware concepts that run at high speed and exploit a high degree of parallelism. In this workshop, we will explore this rapidly developing area of (classical) neuromorphic computing across a range of scalable platforms, at both the theoretical and the experimental level. These platforms include systems in the domains of optics, integrated photonics, spin systems, semiconducting and superconducting systems, soft matter, and others. In addition, new physical learning approaches will be discussed.
Confirmed invited speakers
- Firooz Aflatouni (U Penn)
- Daniel Brunner (CNRS, FEMTO-ST)
- Sonia Buckley (NIST)
- Darius Bunandar (Lightmatter)
- Claudio Conti (Rome)
- György Csaba (Budapest)
- Sylvain Gigan (Paris)
- Julie Grollier (CNRS/Thales)
- Alexander Khajetoorians (Radboud University)
- Andrea Liu (U Penn)
- Alexander Lvovsky (Oxford)
- Tatsuhiro Onodera (Cornell)
- Wolfram Pernice (Heidelberg)
- Demetri Psaltis (EPFL)
- Benjamin Scellier (Rain)
- Johannes Schemmel (Heidelberg)
- Abu Sebastian (IBM)
- Menachem (Nachi) Stern (U Penn)
Format
The in-person workshop will start on 5 September at 9 am and end on 7 September at approximately 5:30 pm. There will be invited talks, contributed talks, a poster session, and a panel discussion. On Wednesday, we will organise a conference dinner, which is included in the registration fee.