GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained)
#glom #hinton #capsules

Geoffrey Hinton describes GLOM, a computer vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders, and RNNs. GLOM decomposes an image into a parse tree of objects and their parts. However, unlike previous systems, the parse tree is constructed dynamically and differently for each input, without changing the underlying neural network. This is done by a multi-step consensus algorithm that runs over different levels of abstraction at each location of the image simultaneously. GLOM is just an idea for now, but it suggests a radically new approach to visual scene understanding in AI.

OUTLINE:
0:00 - Intro & Overview
3:10 - Object Recognition as Parse Trees
5:40 - Capsule Networks
8:00 - GLOM Architecture Overview
13:10 - Top-Down and Bottom-Up Communication
18:30 - Emergence of Islands
22:00 - Cross-Column Attention Mechanism
27:10 - My Improvements for the Attention Mechanism
35:25 - Some Design Decisions
43:25 - Training GLOM as a Denoising Autoencoder & Contrastive Learning
52:20 - Coordinate Transformations & Representing Uncertainty
57:05 - How GLOM Handles Video
1:01:10 - Conclusion & Comments

Paper: https://arxiv.org/abs/2102.12627

Abstract:
This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.

Authors: Geoffrey Hinton

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
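GLOM is a proposal rather than a working system, so no reference implementation exists. To make the consensus algorithm concrete, here is a minimal sketch of one update step; all names, dimensions, and the choice of plain linear maps for the level-wise networks are my own assumptions, not anything specified in the paper. Each image location (a "column") holds one embedding per level of abstraction, and each level is updated by averaging four contributions: the bottom-up prediction from the level below, the top-down prediction from the level above, the previous state, and an attention-weighted average over the same level in the other columns. As discussed in the video, the attention weights are just a softmax over the embeddings' dot products with each other, with no learned queries, keys, or values.

```python
import numpy as np

rng = np.random.default_rng(0)

LEVELS, COLUMNS, DIM = 5, 16, 64   # assumed sizes, not from the paper

# Per-level bottom-up and top-down networks. The paper calls for small
# MLPs shared across all columns at a given level; plain linear maps
# stand in for them in this sketch.
W_up   = rng.normal(scale=0.1, size=(LEVELS - 1, DIM, DIM))  # level l -> l+1
W_down = rng.normal(scale=0.1, size=(LEVELS - 1, DIM, DIM))  # level l+1 -> l

def attention_consensus(x):
    """Same-level attention across columns: weights are a softmax over
    the embeddings' dot products with each other (no learned Q/K/V)."""
    logits = x @ x.T                           # (COLUMNS, COLUMNS)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return w @ x                               # attention-weighted average

def glom_step(state):
    """One consensus iteration over all levels and columns.
    state: (LEVELS, COLUMNS, DIM) array of embeddings."""
    new = np.empty_like(state)
    for l in range(LEVELS):
        parts = [state[l]]                             # previous state
        if l > 0:
            parts.append(state[l - 1] @ W_up[l - 1])   # bottom-up prediction
        if l < LEVELS - 1:
            parts.append(state[l + 1] @ W_down[l])     # top-down prediction
        parts.append(attention_consensus(state[l]))    # lateral consensus
        new[l] = np.mean(parts, axis=0)
    return new

state = rng.normal(size=(LEVELS, COLUMNS, DIM))
for _ in range(10):                # run the consensus for several timesteps
    state = glom_step(state)
```

Islands of identical vectors then emerge because the lateral consensus term pulls the same-level embeddings of agreeing columns toward each other; at higher levels, larger groups of columns converge to (nearly) the same vector, and those islands play the role of nodes in the parse tree.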

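For the training section of the video (43:25): Hinton suggests training GLOM as a denoising autoencoder, corrupting or masking parts of the input and reconstructing them from the settled lowest-level embeddings, combined with a contrastive signal that encourages an embedding to agree with the consensus at its location. The sketch below uses a masked mean-squared reconstruction error plus an InfoNCE-style agreement term between two views as stand-ins; the temperature, the two-view pairing, and all function names are my assumptions, not the paper's.

```python
import numpy as np

def denoising_loss(pixels, reconstruction, mask):
    """Reconstruction error on the corrupted locations only.
    pixels, reconstruction: (COLUMNS, DIM) lowest-level targets/outputs;
    mask: boolean (COLUMNS,) marking which columns were corrupted."""
    diff = reconstruction[mask] - pixels[mask]
    return np.mean(diff ** 2)

def contrastive_loss(x, x_other, temperature=0.1):
    """InfoNCE-style agreement term: embedding i of one view should match
    embedding i of the other view and mismatch all other columns."""
    def norm(a):
        return a / np.linalg.norm(a, axis=1, keepdims=True)
    sim = norm(x) @ norm(x_other).T / temperature    # (COLUMNS, COLUMNS)
    sim -= sim.max(axis=1, keepdims=True)            # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives on the diagonal
```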