BERT | Bidirectional Encoder Representations from Transformers
BERT is a language representation model designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
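The fine-tune-with-one-output-layer recipe described above can be sketched in a few lines. The snippet below is a minimal sketch, assuming the Hugging Face transformers library and PyTorch (neither is mentioned on this page) and a hypothetical two-class classification task:

```python
# Minimal sketch: fine-tuning pre-trained BERT by adding a single output layer.
# Assumes the Hugging Face `transformers` library and PyTorch; task and label
# are hypothetical illustrations, not taken from this page.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# BertForSequenceClassification adds one linear classification head on top of
# the pre-trained bidirectional encoder; num_labels=2 is an illustrative choice.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("BERT conditions on both left and right context.", return_tensors="pt")
labels = torch.tensor([1])  # hypothetical label for this single example

# One fine-tuning step: forward pass with loss, backward pass, parameter update.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
```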