Compare commits


No commits in common. "81916d20eef75766aeae71b9487fd615017b0413" and "4e134ddf6f3b0065014c45e6c864cc7cf4bf34ad" have entirely different histories.

3 changed files with 61 additions and 85 deletions

README.md

@@ -2,10 +2,10 @@
 language: en
 tags:
 - tapas
-- table-question-answering
+- question-answering
 license: apache-2.0
 datasets:
-- msr_sqa
+- sqa
 ---

 # TAPAS base model fine-tuned on Sequential Question Answering (SQA)
@@ -19,23 +19,6 @@ The other (non-default) version which can be used is:
 Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
 the Hugging Face team and contributors.

-## Results on SQA - Dev Accuracy
-
-Size | Reset | Dev Accuracy | Link
--------- | -------- | -------- | ----
-LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset)
-LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main)
-**BASE** | **noreset** | **0.6737** | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset)
-**BASE** | **reset** | **0.6874** | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main)
-MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset)
-MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main)
-SMALL | noreset | 0.5876 | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset)
-SMALL | reset | 0.6155 | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main)
-MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset)
-MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)
-TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset)
-TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main)
-
 ## Model description

 TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
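The `table-question-answering` tag edited in this diff matches the Transformers pipeline of the same name, which is how this checkpoint is normally run. A minimal sketch, assuming a recent `transformers` install with TAPAS support (some versions also need `torch-scatter`); the checkpoint name is taken from the results table above, and the example table and query are illustrative only:

```python
from transformers import pipeline

# Load the SQA-fine-tuned TAPAS checkpoint referenced in this model card.
tqa = pipeline(
    task="table-question-answering",
    model="google/tapas-base-finetuned-sqa",
)

# Tables are passed as a mapping of column name -> list of cell strings.
table = {
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}

result = tqa(table=table, query="How many movies does Leonardo Di Caprio have?")
print(result["answer"])  # a cell value selected from the table, e.g. "53"
```

Because this SQA checkpoint has `num_aggregation_labels: 0` (see the config.json diff below), answers are cell selections only, with no aggregation operator.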

config.json

@@ -1,6 +1,4 @@
 {
-  "_name_or_path": "google/tapas-base-finetuned-sqa",
-  "aggregation_labels": null,
   "aggregation_loss_weight": 1.0,
   "aggregation_temperature": 1.0,
   "allow_empty_column_selection": false,
@@ -27,7 +25,6 @@
   "max_num_rows": 64,
   "max_position_embeddings": 1024,
   "model_type": "tapas",
-  "no_aggregation_label_index": null,
   "num_aggregation_labels": 0,
   "num_attention_heads": 12,
   "num_hidden_layers": 12,
@@ -36,7 +33,6 @@
   "reset_position_index_per_cell": true,
   "select_one_column": true,
   "softmax_temperature": 1.0,
-  "transformers_version": "4.13.0.dev0",
   "type_vocab_size": [
     3,
     256,
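The keys removed across these hunks (`_name_or_path`, `aggregation_labels`, `no_aggregation_label_index`, `transformers_version`) are save-time metadata or null-valued defaults; the architectural fields stay the same. A minimal sketch, assuming the `transformers` library is installed, of reading those fields back from the Hub rather than from the raw JSON:

```python
from transformers import TapasConfig

# Fetch config.json for this checkpoint from the Hugging Face Hub.
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-sqa")

# Values shown in the diff above.
print(config.num_hidden_layers)              # 12
print(config.num_attention_heads)            # 12
print(config.max_position_embeddings)        # 1024
print(config.reset_position_index_per_cell)  # True -> the "reset" SQA variant
print(config.num_aggregation_labels)         # 0 -> no aggregation head for SQA
```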

tf_model.h5 (Stored with Git LFS)

Binary file not shown.
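`tf_model.h5` holds the TensorFlow weights for this checkpoint and is stored through Git LFS, so the compare view only records that the binary changed. A minimal sketch, assuming `transformers` with TensorFlow installed, of loading those weights (illustrative, not part of this diff):

```python
from transformers import TapasTokenizer, TFTapasForQuestionAnswering

# from_pretrained downloads and loads the tf_model.h5 weights listed above.
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-sqa")
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-sqa")
```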