TU Berlin


Neuromorphic computing with dynamical systems

Friday, 14th May 2021

Online Symposium

For information on how to access the event, please contact Henning Reinken via:

Guests are welcome!

Programme

15:00 - 15:40                    
Next generation reservoir computing
 
Prof. Dr. Dan Gauthier (Department of Physics, Ohio State University)

15:40 - 16:20
The Folded-in-Time Deep Neural Networks: How to emulate a large feed-forward network with only a single node and delays

Dr. André Röhm (Institute for Cross-Disciplinary Physics and Complex Systems, Mallorca)

16:20 - 17:00
Time-multiplexed optical systems for reservoir computing

Prof. Dr. Guy Van der Sande (Applied Physics Research Group, Vrije Universiteit Brussel)


Abstracts

Next generation reservoir computing

Dan Gauthier (Department of Physics, Ohio State University)

Recent theoretical results demonstrate that reservoir computers based on either a nonlinear reservoir with a linear output layer or a linear reservoir with a nonlinear output layer are equivalent universal approximators of dynamical systems. Furthermore, the equivalence of nonlinear vector autoregression (NVAR) and a reservoir computer with a linear reservoir and a nonlinear output layer has been proven. This result is surprising because an NVAR has no reservoir; mathematically, however, it has an implicit or hidden reservoir. Thus, I claim that either approach will work equally well on any task to which reservoir computing has previously been applied. To highlight the underlying implicit reservoir computer, I call this approach a next generation reservoir computer (NG-RC). The NG-RC has fewer metaparameters, no random matrices, and requires extremely short data sets for training. I apply an NG-RC to two common tasks: 1) forecasting a dynamical system and 2) inferring one variable of a dynamical system when measuring other variables of the same system.
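The core of an NVAR as described above is a feature vector built from time-delayed copies of the measured state plus nonlinear (here quadratic) monomials, followed by a linear readout trained by ridge regression. A minimal illustrative sketch, assuming hypothetical helper names (`make_features`, `train_readout`) and illustrative parameter choices, not the speaker's implementation:

```python
import numpy as np

def make_features(X, k=2, s=1):
    """Build NVAR features: a constant, k time-delayed copies (stride s)
    of the d-dimensional series X, and their unique quadratic monomials."""
    n, d = X.shape
    start = (k - 1) * s
    # Linear part: current state and k-1 delayed states, stacked columnwise.
    lin = np.hstack([X[start - i * s: n - i * s] for i in range(k)])
    # Quadratic part: all products lin_i * lin_j with i <= j.
    D = lin.shape[1]
    quad = np.stack([lin[:, i] * lin[:, j]
                     for i in range(D) for j in range(i, D)], axis=1)
    const = np.ones((lin.shape[0], 1))
    return np.hstack([const, lin, quad])

def train_readout(feats, targets, ridge=1e-6):
    """Ridge-regression readout mapping features to the next state."""
    A = feats.T @ feats + ridge * np.eye(feats.shape[1])
    return np.linalg.solve(A, feats.T @ targets)

# Illustrative use: one-step forecasting of a simple oscillatory signal.
t = np.arange(300)
X = np.stack([np.sin(0.1 * t), np.cos(0.1 * t)], axis=1)
feats = make_features(X)                # features at times 1 .. 299
W = train_readout(feats[:-1], X[2:])    # fit next-step prediction
pred = feats[:-1] @ W
```

There is no recurrent network anywhere in this sketch; the delayed inputs play the role of the "hidden" reservoir, which is what makes the NG-RC so cheap to train.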

 

The Folded-in-Time Deep Neural Networks: How to emulate a large feed-forward network with only a single node and delays

André Röhm (Institute for Cross-Disciplinary Physics and Complex Systems, Mallorca)

Deep neural networks are among the most widely applied machine learning tools, showing outstanding performance in a broad range of tasks. Their main ingredient is a multi-layer feed-forward network. In this talk, a method for folding a deep neural network of arbitrary size into a single neuron with multiple modulated time-delayed feedback loops is presented. The network states emerge in time as a temporal unfolding of the neuron's dynamics; by adjusting the feedback modulation within the loops, the network's connection weights can be adapted. These connection weights are determined, i.e. learned, via a back-propagation algorithm. This approach fully recovers standard Deep Neural Networks (DNNs), encompasses sparse DNNs, and extends the DNN concept toward dynamical-systems implementations. This new method, which we have called the Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
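The key idea above, one physical neuron emitting all network states sequentially in time, can be caricatured in a few lines. The sketch below (an assumption-laden illustration, not the Fit-DNN model itself, which uses continuous delay dynamics) evaluates a feed-forward network one node per "time slot", the way a time-multiplexed single neuron would, and reproduces the ordinary vectorized forward pass:

```python
import numpy as np

def fit_dnn_forward(x, layers, f=np.tanh):
    """Evaluate a feed-forward net one node at a time, as a single
    time-multiplexed 'neuron' would: each output node of a layer occupies
    one time slot and sums modulated (weighted) delayed states.
    layers: list of (W, b) pairs, one per layer."""
    state = np.asarray(x, dtype=float)
    for W, b in layers:
        nxt = np.empty(W.shape[0])
        for j in range(W.shape[0]):           # one time slot per node
            nxt[j] = f(W[j] @ state + b[j])   # weighted delayed feedback
        state = nxt                            # previous layer = delayed past
    return state
```

In the actual Fit-DNN the weights `W[j]` correspond to the modulation applied to the delay loops, and training them by back-propagation is what the talk abstract refers to.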

 

Time-multiplexed optical systems for reservoir computing

Guy Van der Sande (Applied Physics Research Group, Vrije Universiteit Brussel)

Multiple photonic systems show great promise for providing a practical yet powerful hardware substrate for neuromorphic computing. Among those, delay-based systems offer, through a time-multiplexing technique, a simple technological route to implement photonic neuromorphic computation. We discuss our advances in substrates implemented as passive coherent fibre-ring cavities and as integrated semiconductor lasers with short external cavities. We pay special attention to the design of the input and output layers to maximise the computational capabilities of these systems.
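The time-multiplexing scheme mentioned above is usually described as follows: a single nonlinear node with a delay loop hosts many "virtual nodes", one per time slot within the delay, and the input layer is a fixed random mask applied to each incoming sample. A minimal numerical caricature (hypothetical function name and parameter values; a discrete-map stand-in for the actual optical dynamics):

```python
import numpy as np

def delay_reservoir(u, n_virtual=50, alpha=0.8, beta=0.5, seed=0):
    """Time-multiplexed reservoir with one nonlinear node.
    Each input sample u[t] is spread over n_virtual time slots by a fixed
    random input mask; each virtual node also receives feedback from its
    own state one delay round earlier. Returns all virtual-node states."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1, 1, n_virtual)     # fixed input layer (mask)
    states = np.zeros((len(u), n_virtual))
    prev = np.zeros(n_virtual)               # states from previous round
    for t, ut in enumerate(u):
        for i in range(n_virtual):           # one virtual node per slot
            prev[i] = np.tanh(alpha * prev[i] + beta * mask[i] * ut)
        states[t] = prev
    return states
```

A linear readout trained on `states` (e.g. by ridge regression) then serves as the output layer; the choice of mask and readout is exactly the input/output-layer design question the abstract highlights.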
