Create embeddings
This quickstart guide provides a simplified introduction to creating embeddings using the TwelveLabs Video Understanding Platform. It includes the following:
- A basic working example
- Minimal implementation details
- Core parameters for common use cases
For a comprehensive guide, see the Create embeddings section.
Key concepts
This section explains the key concepts and terminology used in this guide:
- Asset: Your uploaded content (a video, audio, or image file) registered with the platform
- Embedding: A vector representation of your content that captures its meaning for downstream tasks
Workflow
The platform generates embeddings for video, audio, image, text, and combined text and image content. For video, audio, and image embeddings, upload your files and create assets first. For text embeddings, provide your text directly. The platform processes your content and returns vector representations you can use for anomaly detection, diversity sorting, sentiment analysis, recommendations, building Retrieval-Augmented Generation (RAG) systems, or other tasks.
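Because the platform returns embeddings as plain vectors of floats, tasks like recommendations and diversity sorting reduce to vector math. The following is a minimal, SDK-independent sketch of the most common operation, cosine similarity; the vectors here are made-up toy values standing in for real embeddings.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings returned by the platform.
query = [0.1, 0.9, 0.0]
clips = {
    "clip_a": [0.1, 0.8, 0.1],
    "clip_b": [0.9, 0.1, 0.0],
}

# Rank clips by similarity to the query -- the core step of a
# similarity-based recommendation or retrieval system.
ranked = sorted(
    clips,
    key=lambda name: cosine_similarity(query, clips[name]),
    reverse=True,
)
print(ranked[0])  # clip_a is closest to the query
```

The same ranking step applies whether the vectors come from video, audio, image, or text embeddings, since the platform produces them in a shared embedding space.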
Prerequisites
- To use the platform, you need an API key.
- Install the TwelveLabs SDK for the programming language you are using.
- Your media files must meet the following requirements:
- For this guide: Video and audio files up to 10 minutes, images up to 5 MB
- Model capabilities: See the complete requirements for video files, audio files, and image files
For other upload methods with different limits, see the Upload methods page.
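The SDK installation step above can be sketched as follows. The package names are assumptions based on the published TwelveLabs SDKs (`twelvelabs` on PyPI, `twelvelabs-js` on npm); confirm them against the official documentation before installing.

```shell
# Python SDK (package name assumed: twelvelabs)
pip install twelvelabs

# Node.js SDK (package name assumed: twelvelabs-js)
npm install twelvelabs-js
```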
Starter code
Copy and paste the code below, replacing the placeholders surrounded by <> with your values.