From 6b6560eaf5ff2e250b00c50f380c5389a9c2d82e Mon Sep 17 00:00:00 2001
From: Lysandre
Date: Wed, 13 Jan 2021 15:06:44 +0000
Subject: [PATCH] Add size details

---
 README.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/README.md b/README.md
index 63e15de..ad553a7 100644
--- a/README.md
+++ b/README.md
@@ -36,6 +36,16 @@ classifier using the features produced by the ALBERT model as inputs.
 ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
 
+This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
+
+This model has the following configuration:
+
+- 12 repeating layers
+- 128 embedding dimension
+- 768 hidden dimension
+- 12 attention heads
+- 11M parameters
+
 ## Intended uses & limitations
 
 You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
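For reference, a minimal sketch of how the size details added by this patch map onto `AlbertConfig` in the Hugging Face `transformers` library; the `intermediate_size` value is an assumption (the standard base-size feed-forward width) and is not stated in the patch:

```python
from transformers import AlbertConfig, AlbertModel

# Configuration matching the size details in the patch:
# 12 repeating layers, 128 embedding dim, 768 hidden dim, 12 attention heads.
config = AlbertConfig(
    num_hidden_layers=12,
    embedding_size=128,
    hidden_size=768,
    num_attention_heads=12,
    intermediate_size=3072,  # assumption: not listed in the patch
)
model = AlbertModel(config)

# Because the layers are shared (repeating), the total stays around 11M parameters.
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```

Instantiating from the config builds an untrained model of the described size; the parameter count printed at the end should come out close to the 11M figure listed in the README.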