Create embeddings

This quickstart guide provides a simplified introduction to creating embeddings using the TwelveLabs Video Understanding Platform. It includes the following:

  • A basic working example
  • Minimal implementation details
  • Core parameters for common use cases

For a comprehensive guide, see the Create embeddings section.

Key concepts

This section explains the key concepts and terminology used in this guide:

  • Asset: Your uploaded content. Once created, you can reference the same asset across multiple operations without uploading the file again.
  • Embedding: A vector of numbers that represents your content. Items with similar meaning have similar vectors, which is what makes similarity search possible.

Workflow

The platform generates embeddings for video, audio, image, text, and combined content. The input method depends on the content type:

  • File-based content (video, audio, single image): Upload your file to create an asset, then pass the asset ID to generate embeddings.
  • Text content: Provide your text directly.
  • Text and image: Upload images as assets and pass their IDs along with your text.

Use these embeddings for similarity search, content classification, clustering, recommendations, or Retrieval-Augmented Generation (RAG).
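For example, similarity search compares embeddings with a distance metric such as cosine similarity. The sketch below is plain Python with no SDK required; the vectors are invented, low-dimensional stand-ins for real embeddings, which have many more dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (invented for illustration)
query = [0.1, 0.3, 0.5, 0.1]
clip_a = [0.1, 0.29, 0.52, 0.09]  # nearly the same direction as the query -> high similarity
clip_b = [0.9, -0.2, 0.0, 0.4]    # different direction -> lower similarity

# The clip whose embedding points in the most similar direction is the best match
print(cosine_similarity(query, clip_a) > cosine_similarity(query, clip_b))  # True
```

In practice you would store the embeddings returned by the platform in a vector database and let it perform this comparison at scale.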

Prerequisites

  • To use the platform, you need an API key:

    1. If you don’t have an account, sign up for a free account.
    2. Go to the API Keys page.
    3. If you need to create a new key, select the Create API Key button. Enter a name and set the expiration period. The default is 12 months.
    4. Select the Copy icon next to your key to copy it to your clipboard.

  • Depending on the programming language you are using, install the TwelveLabs SDK by entering one of the following commands:

    $ pip install twelvelabs
  • Your media files must meet the following requirements:

    • For this guide: Video and audio files up to 10 minutes, images up to 5 MB. This guide uses the synchronous approach, which returns results immediately. For longer files (up to 4 hours), use the asynchronous approach in the complete guide.
    • Model capabilities: See the complete requirements for video files, audio files, and image files.

    For upload size limits and processing modes, see the Upload and processing methods page.

Starter code

Copy and paste the code below, replacing the placeholders surrounded by <> with your values.

from twelvelabs import TwelveLabs, VideoInputRequest, MediaSource

# 1. Initialize the client
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# 2. Upload a video
asset = client.assets.create(
    method="url",
    url="<YOUR_VIDEO_URL>",  # Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported
    # Or use method="direct" and file=open("<PATH_TO_VIDEO_FILE>", "rb") to upload a file from the local file system
)
print(f"Created asset: id={asset.id}")

# 3. Create video embeddings
response = client.embed.v_2.create(
    input_type="video",
    model_name="marengo3.0",
    video=VideoInputRequest(
        media_source=MediaSource(
            asset_id=asset.id,
        ),
    ),
)

# 4. Process the results
print(f"\n{'='*80}")
print(f"EMBEDDINGS SUMMARY: {len(response.data)} total embeddings")
print(f"{'='*80}\n")

for idx, embedding_data in enumerate(response.data, 1):
    print(f"[{idx}/{len(response.data)}] {embedding_data.embedding_option.upper()} | {embedding_data.embedding_scope.upper()}")
    print(f"├─ Time range: {embedding_data.start_sec}s - {embedding_data.end_sec}s")
    print(f"├─ Dimensions: {len(embedding_data.embedding)}")
    print(f"└─ First 10 values: {embedding_data.embedding[:10]}")
    print()
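The workflow also supports text content, which you provide directly instead of uploading an asset. The sketch below is an assumption extrapolated from the video call above, not a confirmed signature: the `input_type="text"` value and the `text` parameter are guesses based on the pattern of the video request, so check the complete Create embeddings guide for the authoritative shape of a text request.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Hypothetical text request, mirroring the structure of the video call above.
# The "text" parameter name is an assumption; verify it against the complete guide.
response = client.embed.v_2.create(
    input_type="text",
    model_name="marengo3.0",
    text="A person riding a bicycle along a beach at sunset",
)

for embedding_data in response.data:
    print(f"Dimensions: {len(embedding_data.embedding)}")
```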

Code explanation

1. Import the SDK and initialize the client

   Create a client instance to interact with the TwelveLabs Video Understanding Platform.

2. Upload a video

   Upload a video to create an asset.

3. Create embeddings

   Create video embeddings using the asset ID.

4. Process the results

   Process and display the embeddings. This example prints the embedding dimensions and first 10 values to the standard output.
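To experiment with step 4 without calling the API, you can run the same kind of loop over mocked records. In the sketch below, the field names mirror those used in the starter code, while the class and all values are invented for illustration; grouping by scope is one way you might organize results before storing them in a vector database.

```python
from dataclasses import dataclass

@dataclass
class EmbeddingData:
    # Field names mirror the starter code; this class and its values are mocked
    embedding_option: str
    embedding_scope: str
    start_sec: float
    end_sec: float
    embedding: list[float]

mock_data = [
    EmbeddingData("visual", "clip", 0.0, 6.0, [0.01 * i for i in range(16)]),
    EmbeddingData("visual", "clip", 6.0, 12.0, [0.02 * i for i in range(16)]),
    EmbeddingData("visual", "video", 0.0, 12.0, [0.015 * i for i in range(16)]),
]

# Group embeddings by scope (e.g., per-clip vs. whole-video)
by_scope: dict[str, list[EmbeddingData]] = {}
for item in mock_data:
    by_scope.setdefault(item.embedding_scope, []).append(item)

for scope, items in by_scope.items():
    print(f"{scope}: {len(items)} embedding(s), {len(items[0].embedding)} dimensions each")
```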