Create embeddings
This quickstart guide provides a simplified introduction to creating embeddings using the TwelveLabs Video Understanding Platform. It includes the following:
- A basic working example
- Minimal implementation details
- Core parameters for common use cases
For a comprehensive guide, see the Create embeddings section.
Key concepts
This section explains the key concepts and terminology used in this guide:
- Asset: Your uploaded content. Once created, you can reference the same asset across multiple operations without uploading the file again.
- Embedding: A vector (a list of numbers) that represents your content and captures its meaning, allowing you to compare items by similarity.
Workflow
The platform generates embeddings for video, audio, image, text, and combined content. The input method depends on the content type:
- File-based content (video, audio, single image): Upload your file to create an asset, then pass the asset ID to generate embeddings.
- Text content: Provide your text directly.
- Text and image: Upload images as assets and pass their IDs along with your text.
Use these embeddings for similarity search, content classification, clustering, recommendations, or Retrieval-Augmented Generation (RAG).
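As a sketch of one downstream use, similarity search ranks items by the cosine similarity of their embedding vectors. The short vectors and the `cosine_similarity` helper below are illustrative stand-ins, not part of the TwelveLabs SDK; real embeddings returned by the platform have many more dimensions:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings returned by the platform.
query = [0.1, 0.9, 0.0]
clips = {
    "clip_a": [0.1, 0.8, 0.1],
    "clip_b": [0.9, 0.1, 0.0],
}

# Pick the clip whose embedding is most similar to the query embedding.
best = max(clips, key=lambda name: cosine_similarity(query, clips[name]))
print(best)
```

The same ranking idea underlies content classification (compare against label embeddings) and RAG (retrieve the most similar chunks before generation).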
Prerequisites
- To use the platform, you need an API key.
- Install the TwelveLabs SDK for the programming language you use.
- Your media files must meet the following requirements:
  - For this guide: Video and audio files up to 10 minutes, and images up to 5 MB. This guide uses the synchronous approach, which returns results immediately. For longer files (up to 4 hours), use the asynchronous approach described in the complete guide.
  - Model capabilities: See the complete requirements for video files, audio files, and image files.
For upload size limits and processing modes, see the Upload and processing methods page.
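For example, the SDK can be installed with your language's package manager. The package names below are the ones commonly published for the TwelveLabs SDKs; verify them against the official documentation:

```shell
# Python SDK (from PyPI)
pip install twelvelabs

# Node.js SDK (package name assumed from the official repository)
npm install twelvelabs-js
```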
Starter code
Copy and paste the code below, replacing the placeholders surrounded by <> with your values.
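A minimal sketch of what the starter code can look like with the Python SDK, creating a text embedding synchronously. The model name and the response fields used here are assumptions based on the SDK's documented patterns; check the complete guide for the exact, current API:

```python
from twelvelabs import TwelveLabs

# Replace the placeholder with your API key.
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Create a text embedding synchronously. The model name below is an
# assumption; use the model listed in the official documentation.
res = client.embed.create(
    model_name="Marengo-retrieval-2.7",
    text="A man walking a dog in the park",
)

# The response is assumed to expose the embedding as segments of floats.
if res.text_embedding is not None:
    for segment in res.text_embedding.segments:
        print(segment.embeddings_float[:5])  # first few vector values
```

File-based content follows the same pattern, except you first upload the file as an asset and pass the asset ID instead of `text`.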