In this work, we introduce TableLlama and TableInstruct, the first large open-source generalist model and instruction-tuning dataset for tables. Everything is open-source now!

TableInstruct is a large-scale instruction-tuning dataset with diverse, realistic tasks based on real-world tables. It comprises 14 datasets spanning 11 tasks, curated from 1.24M tables and containing 2.6M instances:

  • All data items are unified into a single instruction-tuning format for LLM training (a sketch of one such instance follows this list);
  • All data items in TableInstruct are collected from real tables and real tasks;
  • TableInstruct provides in-domain training tasks that equip the model with fundamental table understanding abilities, plus in-domain and out-of-domain evaluation tasks that test the model's generalization and higher-level reasoning ability.
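To make the unified format concrete, below is a hypothetical instance in the spirit of TableInstruct. The field names and table serialization here are illustrative assumptions, not the dataset's exact schema; see the released data for the authoritative format.

```python
# Hypothetical TableInstruct-style instance (field names and delimiters are
# illustrative assumptions, not the dataset's exact schema). Every task is
# cast as: natural-language instruction + serialized table -> target text.
example = {
    "instruction": (
        "This is a hierarchical table QA task. Answer the question based "
        "on the given table."
    ),
    "input": (
        "[TAB] col: | year | revenue | profit | "
        "row 1: | 2020 | 1.2M | 0.3M | "
        "row 2: | 2021 | 1.8M | 0.5M |"
    ),
    "question": "In which year was the profit higher?",
    "output": "2021",
}
```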

TableLlama is a large generalist model for tables, built by fine-tuning Llama 2 (7B) with LongLoRA (a minimal fine-tuning sketch follows the list below). It can:

  • Support long input contexts (up to 8K tokens);
  • Achieve comparable or even better performance than the SOTA on almost all of the in-domain tasks;
  • Achieve 5-44 absolute point gains on 6 out-of-domain datasets compared with the base model, demonstrating that TableInstruct can substantially enhance model generalizability.
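For readers curious about the training recipe, here is a minimal sketch of LoRA-style fine-tuning with Hugging Face transformers and peft. This is only an approximation: TableLlama itself is trained with the LongLoRA codebase (shifted sparse attention, with embeddings and norms also trainable) to extend Llama 2 (7B) to an 8K context, and all hyperparameters below are placeholders.

```python
# Minimal LoRA fine-tuning sketch (an approximation; TableLlama is trained
# with the LongLoRA codebase, which uses shifted sparse attention and also
# trains embeddings and norms to reach an 8K-token context window).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; requires access
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# LoRA adapters on the attention projections (rank/alpha are placeholders).
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
# ...then run a standard causal-LM training loop over TableInstruct examples
# formatted with the prompt template shown later on this page.
```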

Figure 1: An overview of TableInstruct and TableLlama. TableInstruct includes a wide variety of realistic tables and tasks with instructions. We make the first step towards developing open-source generalist models for tables with TableInstruct and TableLlama.

Abstract

Semi-structured tables are ubiquitous, and a variety of tasks aim to automatically interpret, augment, and query them. Current methods often require pretraining on tables or special model architecture design, are restricted to specific table types, or make simplifying assumptions about tables and tasks. This paper makes the first step towards developing open-source large language models (LLMs) as generalists for a diversity of table-based tasks. Towards that end, we construct TableInstruct, a new dataset with a variety of realistic tables and tasks, for instruction tuning and evaluating LLMs. We further develop the first open-source generalist model for tables, TableLlama, by fine-tuning Llama 2 (7B) with LongLoRA to address the long-context challenge. We experiment under both in-domain and out-of-domain settings. On 7 out of 8 in-domain tasks, TableLlama achieves comparable or better performance than the SOTA for each task, despite the latter often having task-specific designs. On 6 out-of-domain datasets, it achieves 5-44 absolute point gains compared with the base model, showing that training on TableInstruct enhances the model's generalizability. We open-source our dataset and trained model to boost future work on developing open generalist models for tables.

Updates

  • 2024/03/21: We refined the prompts for 4 out-of-domain evaluation datasets in TableInstruct (FEVEROUS, HybridQA, WikiSQL, and WikiTQ) and updated the results. Check the new results!
  • 2024/03/21: We added the results of closed-source LLMs: GPT-3.5 and GPT-4.
  • 2024/03/13: Our paper has been accepted by NAACL 2024 as a long paper!
  • 2023/11/20: We released the dataset, model, and codebase for the paper. Check it out!

Note:

  • Evaluation results can vary substantially with the prompt. Check the detailed guidelines on how to construct a suitable prompt for TableLlama and run inference! A prompt-construction sketch follows below.
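As a starting point, here is a sketch of an Alpaca-style prompt builder in the spirit of what TableLlama expects. Treat the exact template wording as an assumption and consult the released codebase for the authoritative version.

```python
# Alpaca-style prompt builder in the spirit of TableLlama's format (the
# exact template wording is an assumption; see the released codebase for
# the authoritative version, since results are sensitive to the prompt).
def build_prompt(instruction: str, table_input: str, question: str) -> str:
    return (
        "Below is an instruction that describes a task, paired with an "
        "input that provides further context. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{table_input}\n\n"
        f"### Question:\n{question}\n\n"
        "### Response:"
    )

# Usage with the fine-tuned model (tokenizer/model loaded as sketched above):
# inputs = tokenizer(build_prompt(...), return_tensors="pt")
# output = model.generate(**inputs, max_new_tokens=64)
```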

Our Dataset: TableInstruct

We construct TableInstruct, a comprehensive table-based instruction-tuning dataset that covers a variety of real-world tables and realistic tasks. It includes 14 datasets spanning 11 tasks, with both in-domain and out-of-domain evaluation settings. Some examples are shown in the figure below:


Figure 2: Illustration of three exemplary tasks: (a) Column type annotation, where the task is to annotate the selected column with its correct semantic types. (b) Row population, where the task is to populate rows given table metadata and partial row entities. (c) Hierarchical table QA. For subfigures (a) and (b), candidates are marked in red in the "task instruction" part. Candidate sets in TableInstruct can contain hundreds to thousands of entries.
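To illustrate task (a), here is a hypothetical column type annotation instance. The format and candidate types are illustrative assumptions, and real candidate sets in TableInstruct can be far larger.

```python
# Hypothetical column type annotation instance (format and candidate types
# are illustrative assumptions). In TableInstruct, the candidate set embedded
# in the instruction can contain hundreds to thousands of semantic types.
cta_example = {
    "instruction": (
        "This is a column type annotation task. Choose the correct semantic "
        "types for the marked column from the candidates: <people.person>, "
        "<sports.pro_athlete>, <location.location>, ..."
    ),
    "input": "[TAB] col: | player | team | goals | row 1: | ... |",
    "output": "<people.person>, <sports.pro_athlete>",
}
```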

Data Statistics

TableInstruct includes a wide variety of table tasks that comprehensively represent real-world table-related applications. These tasks fall into several categories: table interpretation, table augmentation, question answering, fact verification, dialogue generation, and data-to-text generation. The table below shows the statistics of TableInstruct:

In-domain Evaluation

We first evaluate TableLlama on 8 in-domain test sets. Because of the semi-structured nature of tables, existing work on most table-based tasks achieves SOTA results through pretraining on large-scale tables and/or model architectures specially designed for tables. Surprisingly, with a unified instruction-tuning format and no extra special design, TableLlama achieves comparable or even better performance on almost all the tasks. The table below shows the results:


Specifically, we observed the following takeaways:
  1. By simply fine-tuning a large language model on TableInstruct, TableLlama can achieve comparable or even better performance on almost all the tasks without any table pretraining or special table model architecture design;
  2. TableLlama displays clear advantages on table QA tasks: it surpasses the SOTA by 5.61 points on highlighted-cell-based table QA (i.e., FeTaQA) and 17.71 points on hierarchical table QA (i.e., HiTab), which requires extensive numerical reasoning over tables. Since LLMs have been shown to be superior at interacting with humans and answering questions, their strong underlying language understanding ability likely benefits such table QA tasks, even with semi-structured tables;
  3. For the entity linking task, which requires the model to link a mention in a table cell to the correct referent entity in Wikidata, TableLlama also shows superior performance, with an 8-point gain over the SOTA. Since each candidate consists of its referent entity name and description, we hypothesize that LLMs have some ability to understand the descriptions, which helps identify the correct entities;
  4. Row population is the only task where TableLlama has a large performance gap compared to the SOTA. We observe that, to correctly populate entities from the large number of given candidates, the model must fully understand the inherent relation between the queried entity and each candidate, which remains challenging for the current model. Detailed analysis and a case study can be found in Section 4.1 of our paper and Table 5 in Appendix D.
  5. TableLlama achieves better performance on in-domain tasks than closed-source LLMs. This shows that even though closed-source LLMs demonstrate strong performance in general, fine-tuning open-source LLMs on task-specific table-based data can still yield better performance.

Out-of-domain Evaluation

To show the model's generalizability to unseen data and unseen tasks, we evaluate TableLlama on several out-of-domain datasets. Overall, TableLlama shows remarkable generalizability across different out-of-domain tasks, outperforming the baselines by 6 to 48 absolute points. The table below shows the results:

Specifically, we observed the following takeaways:
  1. By learning from the table-based training tasks, the model has acquired essential underlying table understanding ability, which transfers to other table-based tasks and datasets and improves performance on them;
  2. FEVEROUS exhibits the largest gain among the 6 datasets. This is likely because fact verification is an in-domain training task, even though the FEVEROUS dataset itself is unseen during training. Compared with cross-task generalization, generalizing to a different dataset of the same task may be easier;
  3. Although there is a gap between TableLlama's results and the SOTA, those SOTA results were achieved with full-dataset training, whereas TableLlama is evaluated zero-shot. Nevertheless, we hope our work can inspire future efforts to further improve zero-shot performance.
  4. TableLlama shows a smaller gap to, or even better zero-shot performance than, closed-source LLMs on 4 out of 6 out-of-domain datasets (i.e., FEVEROUS, KVRET, ToTTo, and WikiSQL), which shows that TableLlama has gained generalization ability. However, closed-source LLMs remain stronger on table-based QA tasks that require more complex reasoning.

Reference

Please cite our paper if you use our code, data, models, or results:

@misc{zhang2023tablellama,
  title={TableLlama: Towards Open Large Generalist Models for Tables}, 
  author={Tianshu Zhang and Xiang Yue and Yifei Li and Huan Sun},
  year={2023},
  eprint={2311.09206},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}