While tremendous recent progress has been made in analyzing the training dynamics of overparameterized neural networks, prior work primarily focuses on the very wide or infinitely wide regime and therefore does not sufficiently address the role of depth in deep learning. In this work, we present the first trainability guarantee for infinitely deep but narrow neural networks. We study the infinite-depth limit of a multi-layer perceptron (MLP) with a specific initialization and establish a trainability guarantee using neural tangent kernel (NTK) theory. We then extend the analysis to an infinitely deep convolutional neural network (CNN) and present brief experiments.