Recent advances in deep neural networks (DNNs), combined with open, easily accessible implementations, have made DNNs a powerful, versatile tool used widely in both machine learning and neuroscience. These practical advances, however, have far outpaced a formal understanding of the networks and their training. The dearth of rigorous analysis limits the usefulness of these techniques for addressing scientific questions and, more broadly, hinders systematic design of the next generation of networks. Recently, long-overdue theoretical results have begun to emerge from researchers in a number of fields. The purpose of this conference is to give visibility to these results, and to those that will follow in their wake; to shed light on the properties of large, adaptive, distributed learning architectures; and to revolutionize our understanding of these systems.