Apple or iPod??? Easy Fix for Adversarial Textual Attacks on OpenAI's CLIP Model! #Shorts
Mar 27, 2021 | 37 views
Yannic Kilcher
Topics: GPT-3 · GPT · Adversarial Attack · Neural Network · Machine Learning · Deep Learning
Details
#Shorts #shorts #openai In the paper "Multimodal Neurons in Artificial Neural Networks," OpenAI shows that CLIP can be attacked adversarially by putting textual labels onto pictures; they demonstrate this with an apple labeled as an iPod. I reproduce that experiment and suggest a simple but effective fix. Yes, this is a joke ;) Original Video:
https://youtu.be/Z_kWZpgEZ7w
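The attack (and the joke fix) can be sketched in a few lines of PIL. This is a toy stand-in, not the video's actual setup: the image, the label box, and the helper names are all hypothetical; the real experiment would feed both images to CLIP's zero-shot classifier and watch the "iPod" probability collapse once the label is covered.

```python
from PIL import Image, ImageDraw

def make_labeled_apple(text="iPod", size=224):
    """Toy stand-in for the attack image: a red 'apple' with a white
    paper label reading `text` stuck on it (hypothetical helper)."""
    img = Image.new("RGB", (size, size), (190, 30, 30))
    draw = ImageDraw.Draw(img)
    label_box = (40, 90, 184, 134)
    draw.rectangle(label_box, fill="white")
    draw.text((60, 102), text, fill="black")
    return img, label_box

def cover_label(img, box, color=(190, 30, 30)):
    """The fix from the video, in spirit: physically cover the text
    so CLIP's text-reading neurons never get to see it."""
    fixed = img.copy()
    ImageDraw.Draw(fixed).rectangle(box, fill=color)
    return fixed

attacked, box = make_labeled_apple()
fixed = cover_label(attacked, box)
# `attacked` carries the textual label; `fixed` is the same image
# with the label painted over in the background color.
```

Passing `attacked` and `fixed` through any CLIP zero-shot pipeline (e.g. the prompts "a photo of an apple" vs. "a photo of an iPod") is then a one-liner away, but is left out here since it needs model weights.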
OpenAI does a huge investigation into the inner workings of their recent CLIP model via faceted feature visualization and finds amazing things: some neurons in the last layer respond to distinct concepts across multiple modalities, meaning they fire for photographs, drawings, and signs depicting the same concept, even when the images are vastly different. Through manual examination, they identify and investigate neurons corresponding to persons, geographical regions, religions, emotions, and much more. In this video, I go through the publication and then present my own findings from digging around in the OpenAI Microscope. Paper:
https://distill.pub/2021/multimodal-neurons/
My Findings:
https://www.notion.so/CLIP-OpenAI-Microscope-Findings-27465eac373c451d8083428443e0837c
My Video on CLIP:
https://youtu.be/T9XSU0pKX2E
My Video on Feature Visualizations & The OpenAI Microscope:
https://youtu.be/Ok44otx90D4
Links: TabNine Code Completion (Referral):
http://bit.ly/tabnine-yannick
YouTube:
https://www.youtube.com/c/yannickilcher
Twitter:
https://twitter.com/ykilcher
Discord:
https://discord.gg/4H8xxDF
BitChute:
https://www.bitchute.com/channel/yannic-kilcher
Minds:
https://www.minds.com/ykilcher
Parler:
https://parler.com/profile/YannicKilcher
LinkedIn:
https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili:
https://space.bilibili.com/1824646584
If you want to support me, the best thing to do is to share the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar:
https://www.subscribestar.com/yannickilcher
Patreon:
https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Category: Research Paper