# 🧑‍🔬 Tabby Registry

## Completion models (`--model`)

We recommend the following GPUs, depending on model size:

* For 1B to 3B models, it's advisable to have at least an NVIDIA T4, 10 Series, or 20 Series GPU.
* For 7B to 13B models, we recommend at least an NVIDIA V100, A100, 30 Series, or 40 Series GPU.

We have published benchmarks for these models at https://leaderboard.tabbyml.com to help users weigh trade-offs between quality, licensing, and model size.

| Model ID                   | License            |
| -------------------------- | ------------------ |
| TabbyML/StarCoder-1B       | BigCode-OpenRAIL-M |
| TabbyML/StarCoder-3B       | BigCode-OpenRAIL-M |
| TabbyML/StarCoder-7B       | BigCode-OpenRAIL-M |
| TabbyML/CodeLlama-7B       | Llama 2            |
| TabbyML/CodeLlama-13B      | Llama 2            |
| TabbyML/DeepseekCoder-1.3B | Deepseek License   |
| TabbyML/DeepseekCoder-6.7B | Deepseek License   |
| TabbyML/DeepseekCoder-33B  | Deepseek License   |
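A completion model from the table above is selected by passing its ID to `tabby serve` via `--model`. A minimal sketch, assuming Tabby is installed and a CUDA-capable GPU is available (swap `--device cuda` for your backend):

```shell
# Serve a 1B completion model on a CUDA GPU; Tabby downloads
# the model on first run and listens on localhost:8080 by default.
tabby serve --device cuda --model TabbyML/StarCoder-1B
```

Smaller models trade some completion quality for lower latency and VRAM usage, per the GPU recommendations above.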

## Chat models (`--chat-model`)

To ensure optimal response quality, and given that latency requirements are less stringent in this scenario, we recommend using a model with at least 3B parameters.

| Model ID               | License            |
| ---------------------- | ------------------ |
| TabbyML/WizardCoder-3B | BigCode-OpenRAIL-M |
| TabbyML/Mistral-7B     | Apache 2.0         |
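Completion and chat models can be served together by combining the two flags. A sketch, again assuming a local CUDA install:

```shell
# Run code completion and chat from one server instance;
# both models must fit in GPU memory simultaneously.
tabby serve --device cuda \
  --model TabbyML/StarCoder-1B \
  --chat-model TabbyML/Mistral-7B
```

When VRAM is tight, pairing a small completion model with a larger chat model is a common compromise, since chat tolerates higher latency.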