# CodeBERT-base

Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155).

## Training Data

The model is trained on the bimodal data (documentation and code pairs) of [CodeSearchNet](https://github.com/github/CodeSearchNet).

## Training Objective

This model is initialized with RoBERTa-base and trained with a combined objective of masked language modeling (MLM) and replaced token detection (RTD); cf. the paper for details.
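
To make the RTD part concrete, here is a toy sketch of the idea (ELECTRA-style): a generator proposes replacements at a few masked positions, and a discriminator classifies every token as original or replaced. All shapes, the corruption rate, and the random "generator" below are illustrative stand-ins, not CodeBERT's actual training code.

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 100, 8
tokens = torch.randint(0, vocab_size, (1, seq_len))  # original token ids
mask = torch.rand(1, seq_len) < 0.15                 # positions to corrupt

# Stand-in "generator": random logits over the vocabulary at each position.
gen_logits = torch.randn(1, seq_len, vocab_size)
sampled = gen_logits.argmax(-1)

# Replace the masked positions with the generator's proposals.
corrupted = torch.where(mask, sampled, tokens)

# RTD labels: 1 where the token actually changed, 0 where it is original.
labels = (corrupted != tokens).float()

# Stand-in "discriminator": one logit per token, trained with binary
# cross-entropy to detect the replaced tokens.
disc_logits = torch.randn(1, seq_len)
rtd_loss = F.binary_cross_entropy_with_logits(disc_logits, labels)
print(rtd_loss)
```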

## Usage

Please see [the official repository](https://github.com/microsoft/CodeBERT) for scripts that support "code search" and "code-to-documentation generation".
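
As a minimal sketch, the pretrained weights can be loaded directly with 🤗 Transformers; the docstring and code strings below are illustrative inputs, not part of this card.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the pretrained CodeBERT weights and tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

# Bimodal input: natural-language docstring paired with code.
nl = "return the maximum of two values"
code = "def max(a, b): return a if a > b else b"
inputs = tokenizer(nl, code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings per token: (batch, seq_len, hidden_size).
print(outputs.last_hidden_state.shape)
```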

## Reference

1. [CodeBERT trained with the Masked LM objective](https://huggingface.co/microsoft/codebert-base-mlm) (suitable for code completion; see the sketch after this list)
2. 🤗 Hugging Face's [CodeBERTa](https://huggingface.co/huggingface/CodeBERTa-small-v1) (small size, 6 layers)
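
For the MLM-only variant, a hedged example of code completion via fill-mask, assuming the checkpoint name `microsoft/codebert-base-mlm` from the reference above:

```python
from transformers import pipeline

# Assumption: the MLM-only checkpoint referenced above is published on the
# Hub as "microsoft/codebert-base-mlm". <mask> is RoBERTa's mask token.
fill_mask = pipeline("fill-mask", model="microsoft/codebert-base-mlm")

for pred in fill_mask("if x is not <mask>:"):
    print(pred["token_str"], round(pred["score"], 3))
```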

## Citation

```bibtex
@misc{feng2020codebert,
    title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
    author={Zhangyin Feng and Daya Guo and Duyu Tang and Nan Duan and Xiaocheng Feng and Ming Gong and Linjun Shou and Bing Qin and Ting Liu and Daxin Jiang and Ming Zhou},
    year={2020},
    eprint={2002.08155},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```