Conceptual Understanding of Deep Learning Workshop
May 17, 2021
0:00 Welcome by Rina Panigrahy
11:40 Workshop's goal
16:23 talk on "How to Augment Supervised Learning with Reasoning" by Leslie Valiant
39:40 talk on "Language in the brain and word representations" by Christos Papadimitriou
1:02:15 talk on "What Do Our Models Really Learn?" by Aleksander Madry
1:27:08 talk on "Implicit Symbolic Representation and Reasoning in Deep Networks" by Jacob Andreas
1:48:16 Panel Discussion on "Is there a Mathematical model for the Mind?"
2:47:09 talk on "Deep Reinforcement Learning and Distributional Shift" by Sergey Levine
3:10:53 talk on "Towards a Representation Learning framework for Reinforcement Learning" by Alekh Agarwal
3:32:37 talk on "Principles for Tackling Distribution Shift: Pessimism, Adaptation, and Anticipation" by Chelsea Finn
4:20:27 talk on "Can human brain recordings help us design better AI models?" by Leila Wehbe
4:38:52 talk on "The benefits of unified frameworks for language understanding" by Colin Raffel
4:54:26 talk on "Are Transformers Universal Approximators of Sequence-to-Sequence Functions?" by Srinadh Bhojanapalli
5:07:05 talk on "Function Space View of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm" by Suriya Gunasekar
5:32:48 talk on "Theoretical Analysis of Contrastive Learning and Self-training with Neural Networks" by Tengyu Ma
5:55:17 talk on "Escaping Global Minima Using Stochastic Gradients" by Jason Lee
6:09:56 talk on "Guarantees for Tuning the Step Size using a Learning-to-Learn Approach" by Rong Ge
Goal: How does the Brain/Mind (perhaps even an artificial one) work at an algorithmic level? While deep learning has produced tremendous technological strides in recent decades, there is an unsettling sense that we lack a "conceptual" understanding of why it works and of how far it can go in its current form. The goal of the workshop is to bring together theorists and practitioners to develop the right algorithmic view of deep learning: characterizing the class of functions that can be learned, designing learning architectures that can (provably) learn multiple functions and concepts and remember them over time as humans do, and building a theoretical understanding of language, logic, reinforcement learning, meta-learning, and lifelong learning.
Panel Discussion: There will also be a panel discussion on the fundamental question "Is there a mathematical model for the Mind?". We will explore basic questions such as "Is there a provable algorithm that captures the essential capabilities of the mind?", "How do we remember complex phenomena?", "How is a knowledge graph created automatically?", "How do we learn new concepts, functions, and action hierarchies over time?", and "Why do human decisions seem so interpretable?"
Website: https://sites.google.com/view/conceptualdlworkshop
Twitter: #ConceptualDLWorkshop, @rinapy
Speakers include:
Alekh Agarwal - Microsoft Research
Aleksander Madry - Massachusetts Institute of Technology
Chelsea Finn - Stanford University, Google
Christos Papadimitriou - Columbia University
Colin Raffel - University of North Carolina and Google
Jacob Andreas - Massachusetts Institute of Technology
Jason Lee - Princeton University
Leila Wehbe - Carnegie Mellon University
Leslie Valiant - School of Engineering and Applied Sciences, Harvard University
Rong Ge - Duke University
Sergey Levine - UC Berkeley and Google
Srinadh Bhojanapalli - Google
Suriya Gunasekar - Microsoft Research
Tengyu Ma - Stanford University
Panelists:
Bin Yu - UC Berkeley
Geoffrey Hinton - University of Toronto and Google
Jack Gallant - UC Berkeley
Lenore Blum - CMU/UC Berkeley
Percy Liang - Stanford University
Workshop Chair: Rina Panigrahy, Google
Thanks to: Pranjal Awasthi, Manzil Zaheer, Kristen Konrad, Hanieh Haddadian