BahaaEddin AlAila

Tuesday, February 12, 2019

Neural Ordinary Differential Equations

What happens when you stack infinitely many residual layers in a neural network? You get a neural ordinary differential equation. Several works in the deep learning literature have already noted the similarity between the dynamics of ResNets and the numerical integration of ODEs, but only recently have advances in deep learning methods made Neural ODEs practical. A recent work by Chen et al. (NeurIPS 2018) enables training Neural ODEs end-to-end without backpropagating through the numerical solver. This not only reduces memory requirements but also opens up the potential to tap into a century's worth of literature on ODE solvers to help advance deep learning. Beyond the reduced memory footprint and the explicit control of the accuracy-efficiency trade-off, several important results follow from continuous-depth neural networks, most notably continuous normalizing flows (CNFs), which alleviate the high computational cost of normalizing flows without resorting to architectural constraints. Moreover, Neural ODEs can work with irregularly sampled time-series data, which makes them suitable for many real-world problems. Neural ODEs are an exciting new advancement in neural networks that will surely open the path to further advancements and applications.
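To make the ResNet-to-ODE analogy concrete, here is a minimal sketch (not the paper's implementation) contrasting a discrete residual update, h_{k+1} = h_k + f(h_k, k), with a fixed-step Euler integration of the same dynamics function, dh/dt = f(h, t). The small network used for f, the step counts, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Dynamics(nn.Module):
    """Shared dynamics f(h, t): one residual block in a ResNet,
    or the derivative dh/dt in a Neural ODE. (Illustrative architecture.)"""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, h, t):
        # Concatenate the scalar time so the dynamics can depend on t.
        t_col = torch.full((h.shape[0], 1), t)
        return self.net(torch.cat([h, t_col], dim=1))

def resnet_forward(f, h, num_blocks=10):
    # Discrete residual updates: h_{k+1} = h_k + f(h_k, k).
    for k in range(num_blocks):
        h = h + f(h, float(k))
    return h

def odenet_forward(f, h, t0=0.0, t1=1.0, num_steps=100):
    # Fixed-step Euler integration of dh/dt = f(h, t); as num_steps
    # grows, this approaches the continuous-depth ("infinite layers") limit.
    dt = (t1 - t0) / num_steps
    t = t0
    for _ in range(num_steps):
        h = h + dt * f(h, t)
        t += dt
    return h

f = Dynamics(dim=2)
h0 = torch.randn(8, 2)
print(resnet_forward(f, h0).shape)  # torch.Size([8, 2])
print(odenet_forward(f, h0).shape)  # torch.Size([8, 2])
```

In practice one would not unroll the solver like this: the key contribution of Chen et al. is the adjoint sensitivity method, which computes gradients by solving a second ODE backward in time instead of backpropagating through the solver's steps, and their released torchdiffeq library packages this behind an odeint-style interface with adaptive solvers.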

Bahaa is a PhD student in the Computer Science department at UGA, working on representation learning in machine learning under the supervision of Dr. Shannon Quinn.