add models

root 2023-07-13 15:04:11 +08:00
parent 17b4cedfc3
commit c4fb484e7a
12 changed files with 131657 additions and 0 deletions

1
.gitattributes vendored Normal file

@@ -0,0 +1 @@
*.bin filter=lfs diff=lfs merge=lfs -text

236
README.md

@@ -0,0 +1,236 @@
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# 🚀 Falcon-7B
**Falcon-7B is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
*Paper coming soon* 😊.
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B?
* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B.
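If you prefer not to use the pipeline API, a minimal sketch of loading the checkpoint directly with `AutoModelForCausalLM` is shown below; the prompt and sampling settings are illustrative only:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the ~7B weights at roughly 14GB
    trust_remote_code=True,      # the modelling code ships with the checkpoint
    device_map="auto",
)

inputs = tokenizer("Falcon-7B is a causal decoder-only model that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_k=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```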
# Model Card for Falcon-7B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0.
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.).
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it was trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Falcon-7B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
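For illustration, a quick look at that tokenizer; the sample sentence is arbitrary, and the vocabulary size is expected to match the 65024 listed in `config.json`:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
print(tokenizer.vocab_size)  # expected: 65024

ids = tokenizer("The falcon soared over the dunes.")["input_ids"]
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))
```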
### Training Procedure
Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
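For illustration only, a minimal sketch of a warm-up-then-cosine-decay schedule matching the values above (peak 6e-4, floor 1.2e-5, 4B-token warm-up); the assumption that the decay runs over the full 1,500B tokens is ours, not from the table:
```python
import math

PEAK_LR, MIN_LR = 6e-4, 1.2e-5
WARMUP_TOKENS, TOTAL_TOKENS = 4e9, 1500e9  # TOTAL_TOKENS is an assumption for this sketch

def lr_at(tokens_seen: float) -> float:
    """Linear warm-up over the first 4B tokens, then cosine decay to the floor."""
    if tokens_seen < WARMUP_TOKENS:
        return PEAK_LR * tokens_seen / WARMUP_TOKENS
    progress = (tokens_seen - WARMUP_TOKENS) / (TOTAL_TOKENS - WARMUP_TOKENS)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

print(lr_at(2e9))     # halfway through warm-up: 3e-4
print(lr_at(1500e9))  # end of training: 1.2e-5
```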
#### Speeds, Sizes, Times
Training happened in early March 2023 and took about two weeks.
## Evaluation
*Paper coming soon*.
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm (see the simplified sketch after the table below).
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
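As referenced above, a heavily simplified sketch of the parallel attention/MLP layout with a single layer norm; it omits causal masking, rotary embeddings, multiquery key/value sharing, and all the optimizations in the actual `modelling_RW.py`:
```python
import torch
from torch import nn

class ParallelDecoderBlock(nn.Module):
    """Both branches read the same normed input and are summed into one residual."""

    def __init__(self, d_model: int = 4544, n_head: int = 71):
        super().__init__()
        self.input_layernorm = nn.LayerNorm(d_model, eps=1e-5)
        self.self_attention = nn.MultiheadAttention(d_model, n_head, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model, bias=False),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.input_layernorm(x)                              # single shared layer norm
        attn_out, _ = self.self_attention(h, h, h, need_weights=False)
        mlp_out = self.mlp(h)                                    # MLP sees the same normed input
        return x + attn_out + mlp_out                            # parallel branches, one residual add
```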
### Compute Infrastructure
#### Hardware
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B is made available under the Apache 2.0 license.
## Contact
falconllm@tii.ae

32
config.json Normal file

@@ -0,0 +1,32 @@
{
  "alibi": false,
  "apply_residual_connection_post_layernorm": false,
  "architectures": [
    "RWForCausalLM"
  ],
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_RW.RWConfig",
    "AutoModel": "modelling_RW.RWModel",
    "AutoModelForSequenceClassification": "modelling_RW.RWForSequenceClassification",
    "AutoModelForTokenClassification": "modelling_RW.RWForTokenClassification",
    "AutoModelForQuestionAnswering": "modelling_RW.RWForQuestionAnswering",
    "AutoModelForCausalLM": "modelling_RW.RWForCausalLM"
  },
  "bias": false,
  "bos_token_id": 11,
  "eos_token_id": 11,
  "hidden_dropout": 0.0,
  "hidden_size": 4544,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "RefinedWebModel",
  "multi_query": true,
  "n_head": 71,
  "n_layer": 32,
  "parallel_attn": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.27.4",
  "use_cache": true,
  "vocab_size": 65024
}
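A hedged illustration of loading this configuration with `transformers` (the `auto_map` entries above require `trust_remote_code=True`) and recovering the 64-dimensional heads from `hidden_size` and `n_head`:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
print(config.model_type)                    # "RefinedWebModel"
print(config.n_layer, config.n_head)        # 32, 71
print(config.hidden_size // config.n_head)  # 4544 // 71 == 64 (head_dim)
```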

79
configuration_RW.py Normal file

@@ -0,0 +1,79 @@
# coding=utf-8
# Copyright 2022 the Big Science Workshop and HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Bloom configuration"""
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging
logger = logging.get_logger(__name__)
class RWConfig(PretrainedConfig):
    model_type = "RefinedWebModel"
    keys_to_ignore_at_inference = ["past_key_values"]
    attribute_map = {
        "num_hidden_layers": "n_layer",
        "num_attention_heads": "n_head",
    }

    def __init__(
        self,
        vocab_size=250880,
        hidden_size=64,
        n_layer=2,
        n_head=8,
        layer_norm_epsilon=1e-5,
        initializer_range=0.02,
        use_cache=True,
        bos_token_id=1,
        eos_token_id=2,
        apply_residual_connection_post_layernorm=False,
        hidden_dropout=0.0,
        attention_dropout=0.0,
        multi_query=False,
        alibi=False,
        bias=False,
        parallel_attn=False,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        # Backward compatibility with n_embed kwarg
        n_embed = kwargs.pop("n_embed", None)
        self.hidden_size = hidden_size if n_embed is None else n_embed
        self.n_layer = n_layer
        self.n_head = n_head
        self.layer_norm_epsilon = layer_norm_epsilon
        self.initializer_range = initializer_range
        self.use_cache = use_cache
        self.apply_residual_connection_post_layernorm = apply_residual_connection_post_layernorm
        self.hidden_dropout = hidden_dropout
        self.attention_dropout = attention_dropout
        self.bos_token_id = bos_token_id
        self.eos_token_id = eos_token_id
        self.multi_query = multi_query
        self.alibi = alibi
        self.bias = bias
        self.parallel_attn = parallel_attn

        super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)

    @property
    def head_dim(self):
        return self.hidden_size // self.n_head

    @property
    def rotary(self):
        return not self.alibi
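A small usage sketch, assuming `configuration_RW.py` sits in the working directory; the field values are copied from `config.json`:
```python
from configuration_RW import RWConfig

config = RWConfig(
    vocab_size=65024,
    hidden_size=4544,
    n_layer=32,
    n_head=71,
    multi_query=True,
    alibi=False,
    parallel_attn=True,
    bias=False,
)
print(config.head_dim)  # 4544 // 71 == 64
print(config.rotary)    # True: rotary embeddings are used whenever alibi is off
```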

6
generation_config.json Normal file

@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.27.4"
}

1100
modelling_RW.py Normal file

File diff suppressed because it is too large

BIN
pytorch_model-00001-of-00002.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00002-of-00002.bin (Stored with Git LFS) Normal file

Binary file not shown.

203
pytorch_model.bin.index.json Normal file

@@ -0,0 +1,203 @@
{
"metadata": {
"total_size": 14434379520
},
"weight_map": {
"lm_head.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.0.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.0.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.0.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.0.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.0.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.1.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.1.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.1.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.1.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.1.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.10.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.10.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.10.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.10.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.10.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.11.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.11.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.11.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.11.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.11.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.12.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.12.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.12.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.12.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.12.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.13.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.13.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.13.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.13.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.13.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.14.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.14.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.14.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.14.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.14.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.15.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.15.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.15.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.15.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.15.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.16.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.16.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.16.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.16.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.16.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.17.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.17.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.17.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.17.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.17.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.18.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.18.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.18.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.18.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.18.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.19.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.19.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.19.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.19.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.19.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.2.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.2.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.2.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.2.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.2.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.20.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.20.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.20.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.20.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.20.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.21.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.21.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.21.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.21.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.21.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.22.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.22.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.22.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.22.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.22.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.22.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.23.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
"transformer.h.23.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.23.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.23.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.23.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.23.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.24.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
"transformer.h.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.24.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.24.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.24.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.24.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.25.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
"transformer.h.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.25.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.25.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.25.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.25.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.26.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
"transformer.h.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.26.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.26.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.26.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.26.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.27.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
"transformer.h.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.27.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.27.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.27.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.27.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.28.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
"transformer.h.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.28.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.28.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.28.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.28.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.29.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
"transformer.h.29.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.29.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.29.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.29.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.29.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.3.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.3.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.3.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.3.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.3.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.30.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
"transformer.h.30.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.30.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.30.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.30.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.30.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.31.input_layernorm.bias": "pytorch_model-00002-of-00002.bin",
"transformer.h.31.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.31.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.31.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.31.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.31.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.h.4.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.4.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.4.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.4.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.4.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.5.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.5.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.5.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.5.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.5.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.6.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.6.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.6.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.6.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.6.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.7.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.7.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.7.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.7.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.7.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.8.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.8.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.8.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.8.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.8.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.9.input_layernorm.bias": "pytorch_model-00001-of-00002.bin",
"transformer.h.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.9.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.9.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.9.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.h.9.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.ln_f.bias": "pytorch_model-00002-of-00002.bin",
"transformer.ln_f.weight": "pytorch_model-00002-of-00002.bin",
"transformer.word_embeddings.weight": "pytorch_model-00001-of-00002.bin"
}
}
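As a sanity check (illustrative only), the `total_size` above is consistent with roughly 7.2B parameters stored in `bfloat16` at 2 bytes each:
```python
total_size_bytes = 14_434_379_520  # "total_size" from the metadata above
print(total_size_bytes // 2)       # ~7.2B parameters at 2 bytes per bf16 value
print(total_size_bytes / 2**30)    # ~13.4 GiB split across the two .bin shards
```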

16
special_tokens_map.json Normal file

@@ -0,0 +1,16 @@
{
  "additional_special_tokens": [
    ">>TITLE<<",
    ">>ABSTRACT<<",
    ">>INTRODUCTION<<",
    ">>SUMMARY<<",
    ">>COMMENT<<",
    ">>ANSWER<<",
    ">>QUESTION<<",
    ">>DOMAIN<<",
    ">>PREFIX<<",
    ">>SUFFIX<<",
    ">>MIDDLE<<"
  ],
  "eos_token": "<|endoftext|>"
}
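A short sketch, assuming the tokenizer files above are available on the Hub, checking that these markers resolve to single token ids:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
for token in (">>TITLE<<", ">>ABSTRACT<<", ">>QUESTION<<", ">>ANSWER<<"):
    print(token, tokenizer.convert_tokens_to_ids(token))
print(tokenizer.eos_token)  # "<|endoftext|>"
```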

129970
tokenizer.json Normal file

File diff suppressed because it is too large

8
tokenizer_config.json Normal file

@@ -0,0 +1,8 @@
{
  "add_prefix_space": false,
  "eos_token": "<|endoftext|>",
  "model_max_length": 2048,
  "name_or_path": "tiiuae/falcon_tokenizer",
  "special_tokens_map_file": null,
  "tokenizer_class": "PreTrainedTokenizerFast"
}