Fine-tuning

📘

Note:

Twelve Labs currently provides fine-tuning for selected customers only. To enroll in a fine-tuning project, contact us at [email protected].

Fine-tuning is the process of adapting a base model to a specific task or domain by training it on a smaller, domain-focused dataset. It efficiently tailors a base model to your unique requirements, improving its accuracy and performance.

Fine-tuning a base model provides the following key benefits:

  • Increased ROI: Fine-tuning eliminates the need to train dedicated models from scratch, saving time and resources.
  • Faster time to value: Twelve Labs' automated fine-tuning pipeline allows you to train and deploy a fine-tuned model within days instead of weeks or months.
  • Improved accuracy: You can focus on areas where a base model falls short or incorporate your specific taxonomies, producing more accurate results.

Note the following about fine-tuning a base model:

  • Fine-tuning is currently only available with the Marengo 2.7 model. See the Video understanding models page for details about its capabilities.
  • Fine-tuning might not perform well on small or background objects and actions.
  • Long-term or time-based actions, such as identifying a 5-minute run in a video, may not be suitable for fine-tuning.
  • Generalization is not guaranteed, as a model fine-tuned on specific objects or actions might not distinguish similar objects or actions in different contexts. For example, a model fine-tuned to identify "writing" and "drawing" actions on paper might not accurately distinguish between these actions when performed on a digital tablet, as the input method and the appearance of the strokes may differ from the training examples.

As a best practice, test the desired taxonomies on the base model before starting a fine-tuning project. This testing helps identify performance gaps and define the project scope.
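One lightweight way to run such a test is to probe the base model with each taxonomy label as a search query and note which labels return only low-scoring matches. The sketch below illustrates this idea; the endpoint URL, header name, and payload fields are assumptions for illustration, so check the API reference for the exact request contract before using it.

```python
# Sketch: probe base-model coverage of a taxonomy before fine-tuning.
# The endpoint, header, and payload field names below are assumptions;
# verify them against the current Twelve Labs API reference.
import json
from urllib import request

SEARCH_URL = "https://api.twelvelabs.io/v1.3/search"  # assumed endpoint


def build_search_payload(index_id: str, label: str) -> dict:
    """Compose one search request body for a single taxonomy label."""
    return {
        "index_id": index_id,
        "query_text": label,           # the taxonomy term to probe
        "search_options": ["visual"],  # assumed option name
        "page_limit": 5,
    }


def probe_taxonomy(api_key: str, index_id: str, labels: list) -> dict:
    """Search each label against an indexed video set and record the
    best match score. Labels with consistently low scores are the
    performance gaps worth scoping into a fine-tuning project."""
    results = {}
    for label in labels:
        payload = build_search_payload(index_id, label)
        req = request.Request(
            SEARCH_URL,
            data=json.dumps(payload).encode(),
            headers={"x-api-key": api_key, "Content-Type": "application/json"},
            method="POST",
        )
        with request.urlopen(req) as resp:
            body = json.load(resp)
        scores = [clip.get("score", 0) for clip in body.get("data", [])]
        results[label] = max(scores, default=0)
    return results
```

Running `probe_taxonomy` over your full label set gives a per-label score map you can sort to see where the base model already performs well and where it falls short, which in turn defines the fine-tuning scope.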