Compare commits

10 commits: 5f0832a907 ... 0a6aa9128b

| Author | SHA1 | Date |
|---|---|---|
|  | 0a6aa9128b |  |
|  | 5546055f03 |  |
|  | 0e9f43ffc3 |  |
|  | bdb420bf56 |  |
|  | 418430c3b5 |  |
|  | b96743c503 |  |
|  | 345fd30026 |  |
|  | 7a9371512e |  |
|  | b3409d9fba |  |
|  | 2f8677524b |  |
.gitattributes
@@ -6,3 +6,6 @@
 *.tar.gz filter=lfs diff=lfs merge=lfs -text
 *.ot filter=lfs diff=lfs merge=lfs -text
 *.onnx filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+model.safetensors filter=lfs diff=lfs merge=lfs -text
LICENSE
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
README.md
@@ -0,0 +1,251 @@
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# BERT base model (uncased)

Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.

## Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
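As a minimal illustration of this feature-based approach (a sketch added to this card, not from the original; the toy texts, labels, and the use of scikit-learn are assumptions):

```python
# Sketch: train a standard classifier on features extracted by BERT.
# The two-example dataset below is a toy; a real task needs real data.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

texts = ["I loved this film.", "Utterly disappointing."]  # hypothetical labeled sentences
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(texts, padding=True, return_tensors='pt')
    # Take the final hidden state of the [CLS] token as a sentence-level feature.
    features = model(**encoded).last_hidden_state[:, 0, :].numpy()

classifier = LogisticRegression().fit(features, labels)
print(classifier.predict(features))
```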
## Model variations

BERT has originally been released in base and large variations, for cased and uncased input text. The uncased models
also strip out accent markers. Chinese and multilingual uncased and cased versions followed shortly after. Modified
preprocessing with whole word masking replaced subpiece masking in a subsequent work, with the release of two models.
Another 24 smaller models were released afterward.

The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on GitHub.

| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.1073106899857521,
  'token': 4827,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a role model. [SEP]",
  'score': 0.08774490654468536,
  'token': 2535,
  'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]",
  'score': 0.05338378623127937,
  'token': 2047,
  'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a super model. [SEP]",
  'score': 0.04667217284440994,
  'token': 3565,
  'token_str': 'super'},
 {'sequence': "[CLS] hello i'm a fine model. [SEP]",
  'score': 0.027095865458250046,
  'token': 2986,
  'token_str': 'fine'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")

[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
  'score': 0.09747550636529922,
  'token': 10533,
  'token_str': 'carpenter'},
 {'sequence': '[CLS] the man worked as a waiter. [SEP]',
  'score': 0.0523831807076931,
  'token': 15610,
  'token_str': 'waiter'},
 {'sequence': '[CLS] the man worked as a barber. [SEP]',
  'score': 0.04962705448269844,
  'token': 13362,
  'token_str': 'barber'},
 {'sequence': '[CLS] the man worked as a mechanic. [SEP]',
  'score': 0.03788609802722931,
  'token': 15893,
  'token_str': 'mechanic'},
 {'sequence': '[CLS] the man worked as a salesman. [SEP]',
  'score': 0.037680890411138535,
  'token': 18968,
  'token_str': 'salesman'}]

>>> unmasker("The woman worked as a [MASK].")

[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
  'score': 0.21981462836265564,
  'token': 6821,
  'token_str': 'nurse'},
 {'sequence': '[CLS] the woman worked as a waitress. [SEP]',
  'score': 0.1597415804862976,
  'token': 13877,
  'token_str': 'waitress'},
 {'sequence': '[CLS] the woman worked as a maid. [SEP]',
  'score': 0.1154729500412941,
  'token': 10850,
  'token_str': 'maid'},
 {'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
  'score': 0.037968918681144714,
  'token': 19215,
  'token_str': 'prostitute'},
 {'sequence': '[CLS] the woman worked as a cook. [SEP]',
  'score': 0.03042375110089779,
  'token': 5660,
  'token_str': 'cook'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
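As an added illustration (not part of the original card), the tokenizer produces exactly this layout when given a sentence pair; the example sentences are placeholders:

```python
# Sketch: encode a sentence pair and inspect the resulting token layout.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# ['[CLS]', 'sentence', 'a', '[SEP]', 'sentence', 'b', '[SEP]']
```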
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token, different from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
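A minimal sketch of this 80/10/10 rule (added here for illustration; the real pretraining code also handles special tokens and whole-word grouping, which this omits):

```python
# Sketch: BERT-style masking over a generic list of token ids.
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Return (masked inputs, labels); labels are -100 where no prediction is needed."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() >= mlm_prob:
            continue                      # 85% of tokens are left untouched
        labels[i] = tok                   # the model must predict the original token
        r = random.random()
        if r < 0.8:
            inputs[i] = mask_id           # 80% of masked tokens become [MASK]
        elif r < 0.9:
            inputs[i] = random.randrange(vocab_size)  # 10% become a random token
        # remaining 10%: the original token is kept as is
    return inputs, labels
```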
### Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
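In PyTorch, a comparable optimizer and schedule could be set up as below (a hedged sketch, not the original TPU training code; treating BERT's Adam-with-weight-decay as AdamW is an assumption):

```python
# Sketch: optimizer and learning-rate schedule matching the recipe above.
from torch.optim import AdamW
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('bert-base-uncased')
optimizer = AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
# In the training loop: loss.backward(); optimizer.step(); scheduler.step()
```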
## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
|      | 84.6/83.4   | 71.2 | 90.5 | 93.5  | 52.1 | 85.8  | 88.9 | 66.4 | 79.6    |

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
config.json
@@ -3,6 +3,7 @@
     "BertForMaskedLM"
   ],
   "attention_probs_dropout_prob": 0.1,
+  "gradient_checkpointing": false,
   "hidden_act": "gelu",
   "hidden_dropout_prob": 0.1,
   "hidden_size": 768,
@@ -14,6 +15,9 @@
   "num_attention_heads": 12,
   "num_hidden_layers": 12,
   "pad_token_id": 0,
+  "position_embedding_type": "absolute",
+  "transformers_version": "4.6.0.dev0",
   "type_vocab_size": 2,
+  "use_cache": true,
   "vocab_size": 30522
 }
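For reference, these are the fields transformers exposes when it loads this configuration (an added sketch; the printed values come from the diff above):

```python
# Sketch: load and inspect the configuration fields shown in this diff.
from transformers import BertConfig

config = BertConfig.from_pretrained('bert-base-uncased')
print(config.hidden_size)              # 768
print(config.num_attention_heads)      # 12
print(config.position_embedding_type)  # 'absolute'
print(config.use_cache)                # True
print(config.vocab_size)               # 30522
```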
Binary file not shown.
Binary file not shown.