add internlm_7b

This commit is contained in:
mjchen6 2023-08-22 14:33:35 +08:00
parent 704e084064
commit 6464199fe7
19 changed files with 1997 additions and 0 deletions

1
.gitattributes vendored Normal file

@ -0,0 +1 @@
*.bin filter=lfs diff=lfs merge=lfs -text

130
README.md

@ -0,0 +1,130 @@
---
pipeline_tag: text-generation
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div>&nbsp;</div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div>&nbsp;</div>
</div>
[![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/)
[🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new)
</div>
## Introduction
InternLM has open-sourced a 7 billion parameter base model tailored for practical scenarios. The model has the following characteristics:
- It leverages trillions of high-quality tokens for training to establish a powerful knowledge base.
- It provides a versatile toolset for users to flexibly build their own workflows.
## InternLM-7B
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five capability dimensions: disciplinary, language, knowledge, reasoning, and comprehension. Selected results are shown below; visit the [OpenCompass leaderboard](https://opencompass.org.cn/rank) for more.
| Datasets\Models | **InternLM-Chat-7B** | **InternLM-7B** | LLaMA-7B | Baichuan-7B | ChatGLM2-6B | Alpaca-7B | Vicuna-7B |
| -------------------- | --------------------- | ---------------- | --------- | --------- | ------------ | --------- | ---------- |
| C-Eval(Val) | 53.2 | 53.4 | 24.2 | 42.7 | 50.9 | 28.9 | 31.2 |
| MMLU | 50.8 | 51.0 | 35.2* | 41.5 | 46.0 | 39.7 | 47.3 |
| AGIEval | 42.5 | 37.6 | 20.8 | 24.6 | 39.0 | 24.1 | 26.4 |
| CommonSenseQA | 75.2 | 59.5 | 65.0 | 58.8 | 60.0 | 68.7 | 66.7 |
| BUSTM | 74.3 | 50.6 | 48.5 | 51.3 | 55.0 | 48.8 | 62.5 |
| CLUEWSC | 78.6 | 59.1 | 50.3 | 52.8 | 59.8 | 50.3 | 52.2 |
| MATH | 6.4 | 7.1 | 2.8 | 3.0 | 6.6 | 2.2 | 2.8 |
| GSM8K | 34.5 | 31.2 | 10.1 | 9.7 | 29.2 | 6.0 | 15.3 |
| HumanEval | 14.0 | 10.4 | 14.0 | 9.2 | 9.2 | 9.2 | 11.0 |
| RACE(High) | 76.3 | 57.4 | 46.9* | 28.1 | 66.3 | 40.7 | 54.0 |
- The evaluation results were obtained with [OpenCompass 20230706](https://github.com/internLM/OpenCompass/) (results marked with * are taken from the original papers); the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/).
- Scores may vary across versions of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the latest evaluation results from [OpenCompass](https://github.com/internLM/OpenCompass/).
**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
### Import from Transformers
To load the InternLM-7B model using Transformers, use the following code:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-7b", trust_remote_code=True).cuda()
>>> model = model.eval()
>>> inputs = tokenizer(["A beautiful flower"], return_tensors="pt")
>>> inputs = {k: v.cuda() for k, v in inputs.items()}
>>> gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
>>> output = model.generate(**inputs, **gen_kwargs)
>>> output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
>>> print(output)
<s> A beautiful flower box made of white rose wood. It is a perfect gift for weddings, birthdays and anniversaries.
All the roses are from our farm Roses Flanders. Therefor you know that these flowers last much longer than those in store or online!</s>
```
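The modeling code bundled with this repository also exposes a `chat()` helper (see `modeling_internlm.py`) that builds the InternLM conversation prompt and decodes the reply. A minimal sketch, reusing the `model` and `tokenizer` from the snippet above; note that internlm-7b is a base model, so chat-style prompting works best with the chat-tuned variants:
```python
>>> # chat() returns the decoded reply together with the updated conversation history
>>> response, history = model.chat(tokenizer, "Hello! Please introduce yourself.", history=[])
>>> print(response)
```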
## Open Source License
The InternLM weights are fully open for academic research and also allow commercial use with written permission from the official team. For inquiries about commercial licenses and collaborations, please contact internlm@pjlab.org.cn.
## Introduction
InternLM (书生·浦语) includes a 7-billion-parameter base model, InternLM-7B, tailored for practical scenarios. The model has the following characteristics:
- It was trained on trillions of high-quality tokens to build a powerful knowledge base;
- It provides general tool-calling capability, enabling users to flexibly build their own workflows.
## InternLM-7B
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM across five capability dimensions (disciplinary, language, knowledge, reasoning, and comprehension) using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). Selected results are shown below; visit the [OpenCompass leaderboard](https://opencompass.org.cn/rank) for more.
| Datasets\Models | **InternLM-Chat-7B** | **InternLM-7B** | LLaMA-7B | Baichuan-7B | ChatGLM2-6B | Alpaca-7B | Vicuna-7B |
| -------------------- | --------------------- | ---------------- | --------- | --------- | ------------ | --------- | ---------- |
| C-Eval(Val) | 53.2 | 53.4 | 24.2 | 42.7 | 50.9 | 28.9 | 31.2 |
| MMLU | 50.8 | 51.0 | 35.2* | 41.5 | 46.0 | 39.7 | 47.3 |
| AGIEval | 42.5 | 37.6 | 20.8 | 24.6 | 39.0 | 24.1 | 26.4 |
| CommonSenseQA | 75.2 | 59.5 | 65.0 | 58.8 | 60.0 | 68.7 | 66.7 |
| BUSTM | 74.3 | 50.6 | 48.5 | 51.3 | 55.0 | 48.8 | 62.5 |
| CLUEWSC | 78.6 | 59.1 | 50.3 | 52.8 | 59.8 | 50.3 | 52.2 |
| MATH | 6.4 | 7.1 | 2.8 | 3.0 | 6.6 | 2.2 | 2.8 |
| GSM8K | 34.5 | 31.2 | 10.1 | 9.7 | 29.2 | 6.0 | 15.3 |
| HumanEval | 14.0 | 10.4 | 14.0 | 9.2 | 9.2 | 9.2 | 11.0 |
| RACE(High) | 76.3 | 57.4 | 46.9* | 28.1 | 66.3 | 40.7 | 54.0 |
- The results above were obtained with [OpenCompass 20230706](https://github.com/internLM/OpenCompass/) (results marked with `*` are taken from the original papers); see the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/) for evaluation details.
- Scores may vary across versions of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the latest [OpenCompass](https://github.com/internLM/OpenCompass/) results.
**Limitations:** Although we paid close attention to model safety during training and tried to encourage the model to produce text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm; for example, responses may contain bias, discrimination, or other harmful content. Please do not propagate such content. This project is not responsible for any consequences arising from the dissemination of harmful information.
### Loading with Transformers
Load the InternLM-7B model with the following code:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-7b", trust_remote_code=True).cuda()
>>> model = model.eval()
>>> inputs = tokenizer(["来到美丽的大自然,我们发现"], return_tensors="pt")
>>> inputs = {k: v.cuda() for k, v in inputs.items()}
>>> gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
>>> output = model.generate(**inputs, **gen_kwargs)
>>> output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
>>> print(output)
来到美丽的大自然,我们发现各种各样的花千奇百怪。有的颜色鲜艳亮丽,使人感觉生机勃勃;有的是红色的花瓣儿粉嫩嫩的像少女害羞的脸庞一样让人爱不释手.有的小巧玲珑; 还有的花瓣粗大看似枯黄实则暗藏玄机!
不同的花卉有不同的“脾气”,它们都有着属于自己的故事和人生道理.这些鲜花都是大自然中最为原始的物种,每一朵都绽放出别样的美令人陶醉、着迷!
```
## Open Source License
The InternLM weights are fully open for academic research; commercial use is also permitted after obtaining written permission from the official team. For commercial licensing and collaboration inquiries, please contact internlm@pjlab.org.cn.

28
config.json Normal file

@ -0,0 +1,28 @@
{
"architectures": [
"InternLMForCausalLM"
],
"auto_map": {
"AutoConfig": "configuration_internlm.InternLMConfig",
"AutoModel": "modeling_internlm.InternLMForCausalLM",
"AutoModelForCausalLM": "modeling_internlm.InternLMForCausalLM"
},
"bias": true,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 2048,
"model_type": "internlm",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"pad_token_id": 0,
"rms_norm_eps": 1e-06,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.29.2",
"use_cache": true,
"vocab_size": 103168
}

120
configuration_internlm.py Normal file

@ -0,0 +1,120 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" InternLM model configuration"""
from transformers.utils import logging
from transformers.configuration_utils import PretrainedConfig
logger = logging.get_logger(__name__)
INTERNLM_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
class InternLMConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`InternLMModel`]. It is used to instantiate an InternLM
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the InternLM-7B.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 103168):
Vocabulary size of the InternLM model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`InternLMModel`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-6):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
Example:
```python
>>> from transformers import InternLMModel, InternLMConfig
>>> # Initializing a InternLM internlm-7b style configuration
>>> configuration = InternLMConfig()
>>> # Initializing a model from the internlm-7b style configuration
>>> model = InternLMModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "internlm"
_auto_class = "AutoConfig"
def __init__(
self,
vocab_size=103168,
hidden_size=4096,
intermediate_size=11008,
num_hidden_layers=32,
num_attention_heads=32,
hidden_act="silu",
max_position_embeddings=2048,
initializer_range=0.02,
rms_norm_eps=1e-6,
use_cache=True,
pad_token_id=0,
bos_token_id=1,
eos_token_id=2,
tie_word_embeddings=False,
bias=True,
**kwargs,
):
self.vocab_size = vocab_size
self.max_position_embeddings = max_position_embeddings
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.hidden_act = hidden_act
self.initializer_range = initializer_range
self.rms_norm_eps = rms_norm_eps
self.use_cache = use_cache
self.bias = bias
super().__init__(
pad_token_id=pad_token_id,
bos_token_id=bos_token_id,
eos_token_id=eos_token_id,
tie_word_embeddings=tie_word_embeddings,
**kwargs,
)

7
generation_config.json Normal file

@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"pad_token_id": 0,
"transformers_version": "4.29.2"
}

966
modeling_internlm.py Normal file

@ -0,0 +1,966 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" PyTorch InternLM model."""
import math
from typing import List, Optional, Tuple, Union
import torch
import torch.utils.checkpoint
from torch import nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
from transformers.activations import ACT2FN
from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast, SequenceClassifierOutputWithPast
from transformers.modeling_utils import PreTrainedModel
from transformers.generation.streamers import BaseStreamer
from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings
from .configuration_internlm import InternLMConfig
logger = logging.get_logger(__name__)
_CONFIG_FOR_DOC = "InternLMConfig"
# Copied from transformers.models.bart.modeling_bart._make_causal_mask
def _make_causal_mask(
input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
):
"""
Make causal mask used for uni-directional (causal) self-attention.
"""
bsz, tgt_len = input_ids_shape
mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
mask_cond = torch.arange(mask.size(-1), device=device)
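# Unmask positions j <= i (self and past); strictly future positions keep the dtype-min fill.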
mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
mask = mask.to(dtype)
if past_key_values_length > 0:
mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
# Copied from transformers.models.bart.modeling_bart._expand_mask
def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
"""
Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
"""
bsz, src_len = mask.size()
tgt_len = tgt_len if tgt_len is not None else src_len
expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
inverted_mask = 1.0 - expanded_mask
return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
class InternLMRMSNorm(nn.Module):
def __init__(self, hidden_size, eps=1e-6):
"""
InternLMRMSNorm is equivalent to T5LayerNorm
"""
super().__init__()
self.weight = nn.Parameter(torch.ones(hidden_size))
self.variance_epsilon = eps
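# RMSNorm: scale by the reciprocal root-mean-square of the features; unlike LayerNorm there is no mean subtraction or bias.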
def forward(self, hidden_states):
variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
# convert into half-precision if necessary
if self.weight.dtype in [torch.float16, torch.bfloat16]:
hidden_states = hidden_states.to(self.weight.dtype)
return self.weight * hidden_states
class InternLMRotaryEmbedding(torch.nn.Module):
def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
super().__init__()
inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
self.register_buffer("inv_freq", inv_freq)
# Build here to make `torch.jit.trace` work.
self.max_seq_len_cached = max_position_embeddings
t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
freqs = torch.einsum("i,j->ij", t, self.inv_freq)
# Different from paper, but it uses a different permutation in order to obtain the same calculation
emb = torch.cat((freqs, freqs), dim=-1)
self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
def forward(self, x, seq_len=None):
# x: [bs, num_attention_heads, seq_len, head_size]
# This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case.
if seq_len > self.max_seq_len_cached:
self.max_seq_len_cached = seq_len
t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype)
freqs = torch.einsum("i,j->ij", t, self.inv_freq)
# Different from paper, but it uses a different permutation in order to obtain the same calculation
emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
return (
self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
)
def rotate_half(x):
"""Rotates half the hidden dims of the input."""
x1 = x[..., : x.shape[-1] // 2]
x2 = x[..., x.shape[-1] // 2 :]
return torch.cat((-x2, x1), dim=-1)
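# Apply rotary position embeddings: rotate q and k by position-dependent cos/sin so attention scores depend on relative positions.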
def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
# The first two dimensions of cos and sin are always 1, so we can `squeeze` them.
cos = cos.squeeze(1).squeeze(0) # [seq_len, dim]
sin = sin.squeeze(1).squeeze(0) # [seq_len, dim]
cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]
sin = sin[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]
q_embed = (q * cos) + (rotate_half(q) * sin)
k_embed = (k * cos) + (rotate_half(k) * sin)
return q_embed, k_embed
class InternLMMLP(nn.Module):
def __init__(
self,
hidden_size: int,
intermediate_size: int,
hidden_act: str,
):
super().__init__()
self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
self.act_fn = ACT2FN[hidden_act]
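# SwiGLU-style gated MLP: down_proj(act(gate_proj(x)) * up_proj(x)).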
def forward(self, x):
return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
class InternLMAttention(nn.Module):
"""Multi-headed attention from 'Attention Is All You Need' paper"""
def __init__(self, config: InternLMConfig):
super().__init__()
self.config = config
self.hidden_size = config.hidden_size
self.num_heads = config.num_attention_heads
self.head_dim = self.hidden_size // self.num_heads
self.max_position_embeddings = config.max_position_embeddings
if (self.head_dim * self.num_heads) != self.hidden_size:
raise ValueError(
f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
f" and `num_heads`: {self.num_heads})."
)
self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.bias)
self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.bias)
self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.bias)
self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.bias)
self.rotary_emb = InternLMRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings)
def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: bool = False,
use_cache: bool = False,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
bsz, q_len, _ = hidden_states.size()
query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
kv_seq_len = key_states.shape[-2]
if past_key_value is not None:
kv_seq_len += past_key_value[0].shape[-2]
cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
# [bsz, nh, t, hd]
if past_key_value is not None:
# reuse k, v, self_attention
key_states = torch.cat([past_key_value[0], key_states], dim=2)
value_states = torch.cat([past_key_value[1], value_states], dim=2)
past_key_value = (key_states, value_states) if use_cache else None
attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
raise ValueError(
f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
f" {attn_weights.size()}"
)
if attention_mask is not None:
if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
raise ValueError(
f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
)
attn_weights = attn_weights + attention_mask
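# Clamp from below at the dtype minimum so masked scores do not overflow to -inf in half precision before the softmax.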
attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
# upcast attention to fp32
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
attn_output = torch.matmul(attn_weights, value_states)
if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
raise ValueError(
f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
f" {attn_output.size()}"
)
attn_output = attn_output.transpose(1, 2)
attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
attn_output = self.o_proj(attn_output)
if not output_attentions:
attn_weights = None
return attn_output, attn_weights, past_key_value
class InternLMDecoderLayer(nn.Module):
def __init__(self, config: InternLMConfig):
super().__init__()
self.hidden_size = config.hidden_size
self.self_attn = InternLMAttention(config=config)
self.mlp = InternLMMLP(
hidden_size=self.hidden_size,
intermediate_size=config.intermediate_size,
hidden_act=config.hidden_act,
)
self.input_layernorm = InternLMRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.post_attention_layernorm = InternLMRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: Optional[bool] = False,
use_cache: Optional[bool] = False,
) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
"""
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
output_attentions (`bool`, *optional*):
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
returned tensors for more detail.
use_cache (`bool`, *optional*):
If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
(see `past_key_values`).
past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
"""
residual = hidden_states
hidden_states = self.input_layernorm(hidden_states)
# Self Attention
hidden_states, self_attn_weights, present_key_value = self.self_attn(
hidden_states=hidden_states,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_value=past_key_value,
output_attentions=output_attentions,
use_cache=use_cache,
)
hidden_states = residual + hidden_states
# Fully Connected
residual = hidden_states
hidden_states = self.post_attention_layernorm(hidden_states)
hidden_states = self.mlp(hidden_states)
hidden_states = residual + hidden_states
outputs = (hidden_states,)
if output_attentions:
outputs += (self_attn_weights,)
if use_cache:
outputs += (present_key_value,)
return outputs
INTERNLM_START_DOCSTRING = r"""
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters:
config ([`InternLMConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
"""
@add_start_docstrings(
"The bare InternLM Model outputting raw hidden-states without any specific head on top.",
INTERNLM_START_DOCSTRING,
)
class InternLMPreTrainedModel(PreTrainedModel):
config_class = InternLMConfig
base_model_prefix = "model"
supports_gradient_checkpointing = True
_no_split_modules = ["InternLMDecoderLayer"]
_keys_to_ignore_on_load_unexpected = [r"decoder\.version"]
def _init_weights(self, module):
std = self.config.initializer_range
if isinstance(module, nn.Linear):
module.weight.data.normal_(mean=0.0, std=std)
if module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.Embedding):
module.weight.data.normal_(mean=0.0, std=std)
if module.padding_idx is not None:
module.weight.data[module.padding_idx].zero_()
def _set_gradient_checkpointing(self, module, value=False):
if isinstance(module, InternLMModel):
module.gradient_checkpointing = value
INTERNLM_INPUTS_DOCSTRING = r"""
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
[What are input IDs?](../glossary#input-ids)
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- 1 for tokens that are **not masked**,
- 0 for tokens that are **masked**.
[What are attention masks?](../glossary#attention-mask)
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
[`PreTrainedTokenizer.__call__`] for details.
If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
`past_key_values`).
If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
information on the default strategy.
- 1 indicates the head is **not masked**,
- 0 indicates the head is **masked**.
position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
config.n_positions - 1]`.
[What are position IDs?](../glossary#position-ids)
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
`decoder_input_ids` of shape `(batch_size, sequence_length)`.
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
model's internal embedding lookup matrix.
use_cache (`bool`, *optional*):
If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
`past_key_values`).
output_attentions (`bool`, *optional*):
Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
tensors for more detail.
output_hidden_states (`bool`, *optional*):
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
more detail.
return_dict (`bool`, *optional*):
Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
"""
@add_start_docstrings(
"The bare InternLM Model outputting raw hidden-states without any specific head on top.",
INTERNLM_START_DOCSTRING,
)
class InternLMModel(InternLMPreTrainedModel):
"""
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`InternLMDecoderLayer`]
Args:
config: InternLMConfig
"""
_auto_class = "AutoModel"
def __init__(self, config: InternLMConfig):
super().__init__(config)
self.padding_idx = config.pad_token_id
self.vocab_size = config.vocab_size
self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
self.layers = nn.ModuleList([InternLMDecoderLayer(config) for _ in range(config.num_hidden_layers)])
self.norm = InternLMRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.gradient_checkpointing = False
# Initialize weights and apply final processing
self.post_init()
def get_input_embeddings(self):
return self.embed_tokens
def set_input_embeddings(self, value):
self.embed_tokens = value
# Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
# create causal mask
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
combined_attention_mask = None
if input_shape[-1] > 1:
combined_attention_mask = _make_causal_mask(
input_shape,
inputs_embeds.dtype,
device=inputs_embeds.device,
past_key_values_length=past_key_values_length,
)
if attention_mask is not None:
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
inputs_embeds.device
)
combined_attention_mask = (
expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
)
return combined_attention_mask
@add_start_docstrings_to_model_forward(INTERNLM_INPUTS_DOCSTRING)
def forward(
self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutputWithPast]:
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
use_cache = use_cache if use_cache is not None else self.config.use_cache
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
# retrieve input_ids and inputs_embeds
if input_ids is not None and inputs_embeds is not None:
raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
elif input_ids is not None:
batch_size, seq_length = input_ids.shape
elif inputs_embeds is not None:
batch_size, seq_length, _ = inputs_embeds.shape
else:
raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
seq_length_with_past = seq_length
past_key_values_length = 0
if past_key_values is not None:
past_key_values_length = past_key_values[0][0].shape[2]
seq_length_with_past = seq_length_with_past + past_key_values_length
if position_ids is None:
device = input_ids.device if input_ids is not None else inputs_embeds.device
position_ids = torch.arange(
past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
)
position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
else:
position_ids = position_ids.view(-1, seq_length).long()
if inputs_embeds is None:
inputs_embeds = self.embed_tokens(input_ids)
# embed positions
if attention_mask is None:
attention_mask = torch.ones(
(batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
)
attention_mask = self._prepare_decoder_attention_mask(
attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
)
hidden_states = inputs_embeds
if self.gradient_checkpointing and self.training:
if use_cache:
logger.warning_once(
"`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
)
use_cache = False
# decoder layers
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
next_decoder_cache = () if use_cache else None
for idx, decoder_layer in enumerate(self.layers):
if output_hidden_states:
all_hidden_states += (hidden_states,)
past_key_value = past_key_values[idx] if past_key_values is not None else None
if self.gradient_checkpointing and self.training:
def create_custom_forward(module):
def custom_forward(*inputs):
# None for past_key_value
return module(*inputs, output_attentions, None)
return custom_forward
layer_outputs = torch.utils.checkpoint.checkpoint(
create_custom_forward(decoder_layer),
hidden_states,
attention_mask,
position_ids,
None,
)
else:
layer_outputs = decoder_layer(
hidden_states,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_value=past_key_value,
output_attentions=output_attentions,
use_cache=use_cache,
)
hidden_states = layer_outputs[0]
if use_cache:
next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
if output_attentions:
all_self_attns += (layer_outputs[1],)
hidden_states = self.norm(hidden_states)
# add hidden states from the last decoder layer
if output_hidden_states:
all_hidden_states += (hidden_states,)
next_cache = next_decoder_cache if use_cache else None
if not return_dict:
return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
return BaseModelOutputWithPast(
last_hidden_state=hidden_states,
past_key_values=next_cache,
hidden_states=all_hidden_states,
attentions=all_self_attns,
)
class InternLMForCausalLM(InternLMPreTrainedModel):
_auto_class = "AutoModelForCausalLM"
def __init__(self, config):
super().__init__(config)
self.model = InternLMModel(config)
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
# Initialize weights and apply final processing
self.post_init()
def get_input_embeddings(self):
return self.model.embed_tokens
def set_input_embeddings(self, value):
self.model.embed_tokens = value
def get_output_embeddings(self):
return self.lm_head
def set_output_embeddings(self, new_embeddings):
self.lm_head = new_embeddings
def set_decoder(self, decoder):
self.model = decoder
def get_decoder(self):
return self.model
@add_start_docstrings_to_model_forward(INTERNLM_INPUTS_DOCSTRING)
@replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
def forward(
self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, CausalLMOutputWithPast]:
r"""
Args:
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
(masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
Returns:
Example:
```python
>>> from transformers import AutoTokenizer, InternLMForCausalLM
>>> model = InternLMForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
>>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
>>> prompt = "Hey, are you consciours? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you."
```"""
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
# decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
outputs = self.model(
input_ids=input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_values=past_key_values,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = outputs[0]
logits = self.lm_head(hidden_states)
loss = None
if labels is not None:
# Shift so that tokens < n predict n
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
# Flatten the tokens
loss_fct = CrossEntropyLoss()
shift_logits = shift_logits.view(-1, self.config.vocab_size)
shift_labels = shift_labels.view(-1)
# Enable model parallelism
shift_labels = shift_labels.to(shift_logits.device)
loss = loss_fct(shift_logits, shift_labels)
if not return_dict:
output = (logits,) + outputs[1:]
return (loss,) + output if loss is not None else output
return CausalLMOutputWithPast(
loss=loss,
logits=logits,
past_key_values=outputs.past_key_values,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
def prepare_inputs_for_generation(
self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
):
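# With a KV cache present, only the newest token needs to be fed; earlier positions are already encoded in the cache.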
if past_key_values:
input_ids = input_ids[:, -1:]
position_ids = kwargs.get("position_ids", None)
if attention_mask is not None and position_ids is None:
# create position_ids on the fly for batch generation
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
if past_key_values:
position_ids = position_ids[:, -1].unsqueeze(-1)
# if `inputs_embeds` are passed, we only want to use them in the 1st generation step
if inputs_embeds is not None and past_key_values is None:
model_inputs = {"inputs_embeds": inputs_embeds}
else:
model_inputs = {"input_ids": input_ids}
model_inputs.update(
{
"position_ids": position_ids,
"past_key_values": past_key_values,
"use_cache": kwargs.get("use_cache"),
"attention_mask": attention_mask,
}
)
return model_inputs
@staticmethod
def _reorder_cache(past_key_values, beam_idx):
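# Reorder each layer's cached key/value states to follow the selected beam indices during beam search.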
reordered_past = ()
for layer_past in past_key_values:
reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),)
return reordered_past
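# Build the InternLM chat prompt: prior turns rendered as <s><|User|>:...<eoh>\n<|Bot|>:...<eoa>, then the new query.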
def build_inputs(self, tokenizer, query: str, history: List[Tuple[str, str]] = []):
prompt = ""
for record in history:
prompt += f"""<s><|User|>:{record[0]}<eoh>\n<|Bot|>:{record[1]}<eoa>\n"""
if len(prompt) == 0:
prompt += "<s>"
prompt += f"""<|User|>:{query}<eoh>\n<|Bot|>:"""
return tokenizer([prompt], return_tensors="pt")
@torch.no_grad()
def chat(self,
tokenizer,
query: str,
history: List[Tuple[str, str]] = [],
streamer: Optional[BaseStreamer] = None,
max_new_tokens: int = 1024,
do_sample: bool = True,
temperature: float = 0.8,
top_p: float = 0.8,
eos_token_id = (2, 103028),
**kwargs):
inputs = self.build_inputs(tokenizer, query, history)
inputs = {k: v.to(self.device) for k, v in inputs.items() if torch.is_tensor(v)}
outputs = self.generate(**inputs,
streamer=streamer,
max_new_tokens=max_new_tokens,
do_sample=do_sample,
temperature=temperature,
top_p=top_p,
eos_token_id=list(eos_token_id),
**kwargs)
outputs = outputs[0].cpu().tolist()[len(inputs["input_ids"][0]):]
response = tokenizer.decode(outputs, skip_special_tokens=True)
response = response.split("<eoa>")[0]
history = history + [(query, response)]
return response, history
@torch.no_grad()
def stream_chat(self,
tokenizer,
query: str,
history: List[Tuple[str, str]] = [],
max_new_tokens: int = 1024,
do_sample: bool = True,
temperature: float = 0.8,
top_p: float = 0.8,
eos_token_id = (2, 103028),
**kwargs):
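# Minimal streamer that prints decoded tokens to stdout as they are generated; batch size 1 only.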
class ChatStreamer(BaseStreamer):
def __init__(self, tokenizer) -> None:
super().__init__()
self.tokenizer = tokenizer
def put(self, value):
if len(value.shape) > 1 and value.shape[0] > 1:
raise ValueError("ChatStreamer only supports batch size 1")
elif len(value.shape) > 1:
value = value[0]
token = self.tokenizer.decode([value[-1]], skip_special_tokens=True)
if token.strip() != "<eoa>":
print(token, end="")
def end(self):
print("")
return self.chat(
tokenizer=tokenizer,
query=query,
streamer=ChatStreamer(tokenizer=tokenizer),
history=history,
max_new_tokens=max_new_tokens,
do_sample=do_sample,
temperature=temperature,
top_p=top_p,
eos_token_id=eos_token_id,
**kwargs
)
@add_start_docstrings(
"""
The InternLM Model transformer with a sequence classification head on top (linear layer).
[`InternLMForSequenceClassification`] uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it requires to know the position of the last token. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
""",
INTERNLM_START_DOCSTRING,
)
class InternLMForSequenceClassification(InternLMPreTrainedModel):
_keys_to_ignore_on_load_missing = [r"lm_head.weight"]
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.model = InternLMModel(config)
self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
# Initialize weights and apply final processing
self.post_init()
def get_input_embeddings(self):
return self.model.embed_tokens
def set_input_embeddings(self, value):
self.model.embed_tokens = value
@add_start_docstrings_to_model_forward(INTERNLM_INPUTS_DOCSTRING)
def forward(
self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, SequenceClassifierOutputWithPast]:
r"""
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
transformer_outputs = self.model(
input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_values=past_key_values,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = transformer_outputs[0]
logits = self.score(hidden_states)
if input_ids is not None:
batch_size = input_ids.shape[0]
else:
batch_size = inputs_embeds.shape[0]
if self.config.pad_token_id is None and batch_size != 1:
raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
if self.config.pad_token_id is None:
sequence_lengths = -1
else:
if input_ids is not None:
sequence_lengths = (torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1).to(logits.device)
else:
sequence_lengths = -1
pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
loss = None
if labels is not None:
labels = labels.to(logits.device)
if self.config.problem_type is None:
if self.num_labels == 1:
self.config.problem_type = "regression"
elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
self.config.problem_type = "single_label_classification"
else:
self.config.problem_type = "multi_label_classification"
if self.config.problem_type == "regression":
loss_fct = MSELoss()
if self.num_labels == 1:
loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
else:
loss = loss_fct(pooled_logits, labels)
elif self.config.problem_type == "single_label_classification":
loss_fct = CrossEntropyLoss()
loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
elif self.config.problem_type == "multi_label_classification":
loss_fct = BCEWithLogitsLoss()
loss = loss_fct(pooled_logits, labels)
if not return_dict:
output = (pooled_logits,) + transformer_outputs[1:]
return ((loss,) + output) if loss is not None else output
return SequenceClassifierOutputWithPast(
loss=loss,
logits=pooled_logits,
past_key_values=transformer_outputs.past_key_values,
hidden_states=transformer_outputs.hidden_states,
attentions=transformer_outputs.attentions,
)

BIN
pytorch_model-00001-of-00008.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00002-of-00008.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00003-of-00008.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00004-of-00008.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00005-of-00008.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00006-of-00008.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00007-of-00008.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00008-of-00008.bin (Stored with Git LFS) Normal file

Binary file not shown.

458
pytorch_model.bin.index.json Normal file

@ -0,0 +1,458 @@
{
"metadata": {
"total_size": 14643904512
},
"weight_map": {
"lm_head.weight": "pytorch_model-00008-of-00008.bin",
"model.embed_tokens.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.0.self_attn.k_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.0.self_attn.o_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.0.self_attn.q_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00008.bin",
"model.layers.0.self_attn.v_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.1.self_attn.k_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.1.self_attn.o_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.1.self_attn.q_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00008.bin",
"model.layers.1.self_attn.v_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.10.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.10.self_attn.k_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.10.self_attn.o_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.10.self_attn.q_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
"model.layers.10.self_attn.v_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.11.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.11.self_attn.k_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.11.self_attn.o_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.11.self_attn.q_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
"model.layers.11.self_attn.v_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.12.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.12.self_attn.k_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.12.self_attn.o_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.12.self_attn.q_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
"model.layers.12.self_attn.v_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.13.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.13.self_attn.k_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.13.self_attn.o_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.13.self_attn.q_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
"model.layers.13.self_attn.v_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.14.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.14.self_attn.k_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.14.self_attn.o_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.14.self_attn.q_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
"model.layers.14.self_attn.v_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.15.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.15.self_attn.k_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.15.self_attn.o_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.15.self_attn.q_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
"model.layers.15.self_attn.v_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.16.input_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.16.self_attn.k_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.16.self_attn.o_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.16.self_attn.q_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00008.bin",
"model.layers.16.self_attn.v_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.17.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.17.self_attn.k_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.17.self_attn.o_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.17.self_attn.q_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
"model.layers.17.self_attn.v_proj.bias": "pytorch_model-00004-of-00008.bin",
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00004-of-00008.bin",
"model.layers.18.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.18.self_attn.k_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.18.self_attn.o_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.18.self_attn.q_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
"model.layers.18.self_attn.v_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.19.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.19.self_attn.k_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.19.self_attn.o_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.19.self_attn.q_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
"model.layers.19.self_attn.v_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.2.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.2.self_attn.k_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.2.self_attn.o_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.2.self_attn.q_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00008.bin",
"model.layers.2.self_attn.v_proj.bias": "pytorch_model-00001-of-00008.bin",
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00008.bin",
"model.layers.20.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.20.self_attn.k_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.20.self_attn.o_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.20.self_attn.q_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
"model.layers.20.self_attn.v_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.21.input_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.21.self_attn.k_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.21.self_attn.o_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.21.self_attn.q_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00008.bin",
"model.layers.21.self_attn.v_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.22.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.22.self_attn.k_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.22.self_attn.o_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.22.self_attn.q_proj.bias": "pytorch_model-00005-of-00008.bin",
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00005-of-00008.bin",
"model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
"model.layers.22.self_attn.v_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.23.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.23.self_attn.k_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.23.self_attn.o_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.23.self_attn.q_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
"model.layers.23.self_attn.v_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.24.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.24.self_attn.k_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.24.self_attn.o_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.24.self_attn.q_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
"model.layers.24.self_attn.v_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.25.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.25.self_attn.k_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.25.self_attn.o_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.25.self_attn.q_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
"model.layers.25.self_attn.v_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.26.input_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.26.self_attn.k_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.26.self_attn.o_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.26.self_attn.q_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00008.bin",
"model.layers.26.self_attn.v_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.27.input_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.27.self_attn.k_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.27.self_attn.o_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.27.self_attn.q_proj.bias": "pytorch_model-00006-of-00008.bin",
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00006-of-00008.bin",
"model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00008.bin",
"model.layers.27.self_attn.v_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.28.input_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.28.self_attn.k_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.28.self_attn.o_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.28.self_attn.q_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00008.bin",
"model.layers.28.self_attn.v_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.29.input_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.29.self_attn.k_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.29.self_attn.o_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.29.self_attn.q_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00008.bin",
"model.layers.29.self_attn.v_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.3.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.3.self_attn.k_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.3.self_attn.o_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.3.self_attn.q_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
"model.layers.3.self_attn.v_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.30.input_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.30.self_attn.k_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.30.self_attn.o_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.30.self_attn.q_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00008.bin",
"model.layers.30.self_attn.v_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.31.input_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.31.self_attn.k_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.31.self_attn.o_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.31.self_attn.q_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00008.bin",
"model.layers.31.self_attn.v_proj.bias": "pytorch_model-00007-of-00008.bin",
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00007-of-00008.bin",
"model.layers.4.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.4.self_attn.k_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.4.self_attn.o_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.4.self_attn.q_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
"model.layers.4.self_attn.v_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.5.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.5.self_attn.k_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.5.self_attn.o_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.5.self_attn.q_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
"model.layers.5.self_attn.v_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.6.input_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.6.self_attn.k_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.6.self_attn.o_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.6.self_attn.q_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
"model.layers.6.self_attn.v_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.7.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.7.self_attn.k_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.7.self_attn.o_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.7.self_attn.q_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00008.bin",
"model.layers.7.self_attn.v_proj.bias": "pytorch_model-00002-of-00008.bin",
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00002-of-00008.bin",
"model.layers.8.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.8.self_attn.k_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.8.self_attn.o_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.8.self_attn.q_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
"model.layers.8.self_attn.v_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.9.input_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.9.self_attn.k_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.9.self_attn.o_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.9.self_attn.q_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00008.bin",
"model.layers.9.self_attn.v_proj.bias": "pytorch_model-00003-of-00008.bin",
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00003-of-00008.bin",
"model.norm.weight": "pytorch_model-00007-of-00008.bin"
}
}
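
The weight map above is how a loader finds which of the eight shards holds a given parameter, so a single tensor can be read without touching the other files. A minimal sketch of that lookup, assuming the index file and the shards sit together in one local directory (torch and the directory path are the only assumptions here):

import json
import os

import torch

def load_tensor(model_dir: str, param_name: str):
    # Resolve the parameter name to its shard file via the index.
    with open(os.path.join(model_dir, "pytorch_model.bin.index.json")) as f:
        index = json.load(f)
    shard_file = index["weight_map"][param_name]
    # Each shard is a plain state dict; load only that file and pull out the tensor.
    shard = torch.load(os.path.join(model_dir, shard_file), map_location="cpu")
    return shard[param_name]

# e.g. load_tensor("./internlm-7b", "model.layers.12.mlp.down_proj.weight")
# reads only pytorch_model-00004-of-00008.bin.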

6
special_tokens_map.json Normal file
View File

@ -0,0 +1,6 @@
{
"bos_token": "<s>",
"eos_token": "</s>",
"pad_token": "</s>",
"unk_token": "<unk>"
}
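
Note that pad_token reuses the eos_token string, so padding and end-of-sequence share one token id. A quick sanity check, assuming the file above is saved under ./internlm-7b:

import json

with open("./internlm-7b/special_tokens_map.json") as f:
    special = json.load(f)
# Both resolve to "</s>", so padded positions should be excluded via the
# attention mask rather than by comparing against a distinct pad id.
assert special["pad_token"] == special["eos_token"] == "</s>"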

242
tokenization_internlm.py Normal file
View File

@ -0,0 +1,242 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tokenization classes for IntermLM."""
import os
from shutil import copyfile
from typing import Any, Dict, List, Optional, Tuple
import sentencepiece as spm
from transformers.tokenization_utils import PreTrainedTokenizer
from transformers.utils import logging
logger = logging.get_logger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}
PRETRAINED_VOCAB_FILES_MAP = {}
class InternLMTokenizer(PreTrainedTokenizer):
"""
    Construct an InternLM tokenizer, backed by the SentencePiece model in `vocab_file`.
Args:
vocab_file (`str`):
Path to the vocabulary file.
"""
vocab_files_names = VOCAB_FILES_NAMES
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
model_input_names = ["input_ids", "attention_mask"]
_auto_class = "AutoTokenizer"
def __init__(
self,
vocab_file,
unk_token="<unk>",
bos_token="<s>",
eos_token="</s>",
pad_token="</s>",
sp_model_kwargs: Optional[Dict[str, Any]] = None,
add_bos_token=True,
add_eos_token=False,
decode_with_prefix_space=False,
clean_up_tokenization_spaces=False,
**kwargs,
):
self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
super().__init__(
bos_token=bos_token,
eos_token=eos_token,
unk_token=unk_token,
pad_token=pad_token,
clean_up_tokenization_spaces=clean_up_tokenization_spaces,
**kwargs,
)
self.vocab_file = vocab_file
self.add_bos_token = add_bos_token
self.add_eos_token = add_eos_token
self.decode_with_prefix_space = decode_with_prefix_space
self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
self.sp_model.Load(vocab_file)
self._no_prefix_space_tokens = None
""" Initialisation"""
@property
def no_prefix_space_tokens(self):
if self._no_prefix_space_tokens is None:
vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
            # Tokens that do not begin with the SentencePiece word-boundary marker "▁"
            # should not have a leading space restored when decoding.
            self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
return self._no_prefix_space_tokens
@property
def vocab_size(self):
"""Returns vocab size"""
return self.sp_model.get_piece_size()
@property
def bos_token_id(self) -> Optional[int]:
return self.sp_model.bos_id()
@property
def eos_token_id(self) -> Optional[int]:
return self.sp_model.eos_id()
def get_vocab(self):
"""Returns vocab as a dict"""
vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
vocab.update(self.added_tokens_encoder)
return vocab
def _tokenize(self, text):
"""Returns a tokenized string."""
return self.sp_model.encode(text, out_type=str)
def _convert_token_to_id(self, token):
"""Converts a token (str) in an id using the vocab."""
return self.sp_model.piece_to_id(token)
def _convert_id_to_token(self, index):
"""Converts an index (integer) in a token (str) using the vocab."""
token = self.sp_model.IdToPiece(index)
return token
def _maybe_add_prefix_space(self, tokens, decoded):
if tokens and tokens[0] not in self.no_prefix_space_tokens:
return " " + decoded
else:
return decoded
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
current_sub_tokens = []
out_string = ""
prev_is_special = False
for token in tokens:
# make sure that special tokens are not decoded using sentencepiece model
if token in self.all_special_tokens:
if not prev_is_special:
out_string += " "
out_string += self.sp_model.decode(current_sub_tokens) + token
prev_is_special = True
current_sub_tokens = []
else:
current_sub_tokens.append(token)
prev_is_special = False
out_string += self.sp_model.decode(current_sub_tokens)
out_string = self.clean_up_tokenization(out_string)
out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
return out_string[1:]
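
    # Illustrative decode (assuming a SentencePiece vocab whose pieces carry the
    # "▁" word-boundary marker): ["▁Hello", "▁world"] -> "Hello world". The
    # final slice above strips the prefix space that _maybe_add_prefix_space restores.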
def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
"""
Save the vocabulary and special tokens file to a directory.
Args:
save_directory (`str`):
The directory in which to save the vocabulary.
Returns:
`Tuple(str)`: Paths to the files saved.
"""
if not os.path.isdir(save_directory):
logger.error(f"Vocabulary path ({save_directory}) should be a directory")
return
out_vocab_file = os.path.join(
save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
)
if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
copyfile(self.vocab_file, out_vocab_file)
elif not os.path.isfile(self.vocab_file):
with open(out_vocab_file, "wb") as fi:
content_spiece_model = self.sp_model.serialized_model_proto()
fi.write(content_spiece_model)
return (out_vocab_file,)
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
if self.add_bos_token:
bos_token_ids = [self.bos_token_id]
else:
bos_token_ids = []
output = bos_token_ids + token_ids_0
if token_ids_1 is not None:
output = output + token_ids_1
if self.add_eos_token:
output = output + [self.eos_token_id]
return output
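
    # Example (hypothetical ids): with the defaults add_bos_token=True and
    # add_eos_token=False, token_ids_0=[100, 200] builds
    # [self.bos_token_id, 100, 200]; no EOS is appended.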
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*):
Optional second list of IDs for sequence pairs.
already_has_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the token list is already formatted with special tokens for the model.
Returns:
`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
"""
if already_has_special_tokens:
return super().get_special_tokens_mask(
token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
)
if token_ids_1 is None:
return [1] + ([0] * len(token_ids_0)) + [1]
return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
        Create a mask from the two sequences passed to be used in a sequence-pair classification task. InternLM does
        not make use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*):
Optional second list of IDs for sequence pairs.
Returns:
`List[int]`: List of zeros.
"""
eos = [self.eos_token_id]
if token_ids_1 is None:
return len(token_ids_0 + eos) * [0]
return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
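
A minimal usage sketch for the class above, assuming ./internlm-7b contains the tokenizer.model file shipped in this commit (the round trip holds for simple text like this):

from tokenization_internlm import InternLMTokenizer

tok = InternLMTokenizer(vocab_file="./internlm-7b/tokenizer.model")
ids = tok("Hello world")["input_ids"]        # BOS is prepended by default
text = tok.decode(ids, skip_special_tokens=True)
assert text == "Hello world"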

BIN
tokenizer.model Normal file

Binary file not shown.

15
tokenizer_config.json Normal file
View File

@ -0,0 +1,15 @@
{
"auto_map": {
"AutoTokenizer": [
"tokenization_internlm.InternLMTokenizer",
null
]
},
"bos_token": "<s>",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"model_max_length": 1000000000000000019884624838656,
"pad_token": "</s>",
"tokenizer_class": "InternLMTokenizer",
"unk_token": "<unk>"
}
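
The auto_map entry is what lets AutoTokenizer locate the custom class above without it being part of transformers, which is why loading requires trust_remote_code. A short sketch (the repo id is the expected Hugging Face path and is an assumption here):

from transformers import AutoTokenizer

# trust_remote_code=True allows transformers to import
# tokenization_internlm.InternLMTokenizer from the repository.
tok = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
print(tok("Hello")["input_ids"])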