Embeddings for new videos

This guide shows how you can create video embeddings using the Marengo video understanding model. For a list of available versions, complete specifications, and input requirements for each version, see the Marengo page.

The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.

For details on how your usage is measured and billed, see the Pricing page.

Key concepts

This section explains the key concepts and terminology used in this guide:

  • Asset: Your uploaded content.
  • Embedding: A vector representation of your content.
  • Embedding task: An asynchronous operation for processing your content and creating embeddings. Contains a status and the resulting embeddings when complete.

Workflow

To create video embeddings, provide your video content to the platform. You can upload video files as assets, provide a publicly accessible URL, or use base64-encoded data. The platform processes your video and returns vector representations of your content. Use these embeddings for similarity search, content classification, clustering, recommendations, or building Retrieval-Augmented Generation (RAG) systems.

For videos shorter than 10 minutes, you can provide a publicly accessible URL or base64-encoded video data inline. This method skips the upload step but limits reusability for subsequent operations. See the Short videos (synchronous) section for an example implementation.

This guide demonstrates how to create embeddings by uploading your video file as an asset. This approach is the most flexible because you can reuse assets across multiple operations.

Customize your embeddings

You can customize your embeddings in the following ways (the sketch after this list shows how these choices map to request parameters):

  • Specify the types of embeddings you wish to generate:
    • Visual: Based on visual content
    • Audio: Based on audio content, excluding spoken words
    • Transcription: Based on spoken words extracted from the audio track
  • Choose the embedding scope: clip (per segment) or asset (entire video)
  • Define how the platform divides your video into segments: dynamic (scene-based) or fixed (time-based)
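
For example, a request that generates visual embeddings only, at the clip scope, with fixed 5-second segments could look like the sketch below. It reuses the SDK names shown in the complete example later in this guide; the asset ID and other values are illustrative placeholders.

from twelvelabs import (
    TwelveLabs,
    VideoInputRequest,
    MediaSource,
    VideoSegmentation_Fixed,
    VideoSegmentationFixedFixed,
)

client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Sketch: visual embeddings only, one embedding per fixed 5-second segment
task = client.embed.v_2.tasks.create(
    input_type="video",
    model_name="marengo3.0",
    video=VideoInputRequest(
        media_source=MediaSource(asset_id="<YOUR_ASSET_ID>"),
        embedding_option=["visual"],
        embedding_scope=["clip"],
        segmentation=VideoSegmentation_Fixed(
            fixed=VideoSegmentationFixedFixed(duration_sec=5)
        ),
    ),
)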

Prerequisites

  • To use the platform, you need an API key:

    1. If you don’t have an account, sign up for a free account.
    2. Go to the API Keys page.
    3. Select the Copy icon next to your key.

  • Install the TwelveLabs SDK for the programming language you are using. For Python, enter the following command:

    $ pip install twelvelabs

  • Your video files must meet the following requirements:

    • For this guide: Files up to 4 hours in duration
    • Model capabilities: See the complete requirements for resolution, aspect ratio, and supported formats.

    For other upload methods with different limits, see the Upload methods page.

Complete example

Copy and paste the code below, replacing the placeholders surrounded by <> with your values.

import time
from twelvelabs import (
    TwelveLabs,
    VideoInputRequest,
    MediaSource,
    # For dynamic segmentation, uncomment the next two lines:
    # VideoSegmentation_Dynamic,
    # VideoSegmentationDynamicDynamic,
    # For fixed segmentation, uncomment the next two lines:
    # VideoSegmentation_Fixed,
    # VideoSegmentationFixedFixed,
)

# 1. Initialize the client
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# 2. Upload a video
asset = client.assets.create(
    method="url",
    url="<YOUR_VIDEO_URL>",  # Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported
    # Or use method="direct" and file=open("<PATH_TO_VIDEO_FILE>", "rb") to upload a file from the local file system
)
print(f"Created asset: id={asset.id}")

# 3. Process your video
task = client.embed.v_2.tasks.create(
    input_type="video",
    model_name="marengo3.0",
    video=VideoInputRequest(
        media_source=MediaSource(
            asset_id=asset.id,
            # url="<YOUR_VIDEO_URL>",  # Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported
            # base_64_string="<BASE_64_ENCODED_DATA>",
        ),
        # start_sec=0,
        # end_sec=10,
        # embedding_option=["visual", "audio", "transcription"],
        # embedding_scope=["clip", "asset"],
        # For dynamic segmentation:
        # segmentation=VideoSegmentation_Dynamic(
        #     dynamic=VideoSegmentationDynamicDynamic(
        #         min_duration_sec=3  # Minimum segment duration in seconds
        #     )
        # ),
        # For fixed segmentation:
        # segmentation=VideoSegmentation_Fixed(
        #     fixed=VideoSegmentationFixedFixed(
        #         duration_sec=5  # Exact segment duration in seconds
        #     )
        # ),
    ),
)
print(f"Task ID: {task.id}")

# 4. Monitor the status
while True:
    task = client.embed.v_2.tasks.retrieve(task_id=task.id)

    if task.status == "ready":
        print("Task completed")
        break
    elif task.status == "failed":
        print("Task failed")
        break
    else:
        print("Task still processing...")
        time.sleep(5)

# 5. Process the results
print(f"\n{'='*80}")
print(f"EMBEDDINGS SUMMARY: {len(task.data)} total embeddings")
print(f"{'='*80}\n")

for idx, embedding_data in enumerate(task.data, 1):
    print(f"[{idx}/{len(task.data)}] {embedding_data.embedding_option.upper()} | {embedding_data.embedding_scope.upper()}")
    print(f"├─ Time range: {embedding_data.start_sec}s - {embedding_data.end_sec}s")
    print(f"├─ Dimensions: {len(embedding_data.embedding)}")
    print(f"└─ First 10 values: {embedding_data.embedding[:10]}")
    print()

Code explanation

1. Import the SDK and initialize the client

Create a client instance to interact with the TwelveLabs Video Understanding Platform.
Function call: You call the constructor of the TwelveLabs class.
Parameters:

  • api_key: The API key to authenticate your requests to the platform.

Return value: An object of type TwelveLabs configured for making API calls.

2. Upload a video

Upload a video to create an asset. For details about the available upload methods and the corresponding limits, see the Upload methods page.
Function call: You call the assets.create function.
Parameters:

  • method: The upload method for your asset. Use url for a publicly accessible URL or direct to upload a local file. This example uses url.
  • url or file: The publicly accessible URL of your video or an opened file object in binary read mode. This example uses url.

Return value: An object of type Asset. This object contains, among other information, a field named id representing the unique identifier of your asset.
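
This guide uploads from a URL. If your video file is on your local file system, the call could instead look like the sketch below, based on the method="direct" option mentioned in the code comments of the complete example.

# Sketch: upload a local video file instead of providing a publicly accessible URL
with open("<PATH_TO_VIDEO_FILE>", "rb") as video_file:
    asset = client.assets.create(
        method="direct",
        file=video_file,
    )
print(f"Created asset: id={asset.id}")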

3. Process your video

Create an embedding task to start processing your video. This operation is asynchronous.
Function call: You call the embed.v_2.tasks.create function.
Parameters:

  • input_type: The type of content. Set this parameter to video.
  • model_name: The model you want to use. This example uses marengo3.0.
  • video: An object containing the following properties:
    • media_source: An object specifying the source of the video file. You can specify one of the following:

      • asset_id: The unique identifier of an asset from a previous upload.

      • url: The publicly accessible URL of the video file.

      • base_64_string: The base64-encoded video data.

        This example uses the asset ID from the previous step.

    • (Optional) start_sec: The start time in seconds for processing the video file. By default, the platform processes videos from the beginning.

    • (Optional) end_sec: The end time in seconds for processing the video file. By default, the platform processes videos to the end of the video file.

    • (Optional) embedding_option: The types of embeddings to generate. Valid values are visual, audio, and transcription. You can specify multiple options to generate different types of embeddings. The default value is ["visual", "audio", "transcription"].

    • (Optional) embedding_scope: The scope for which to generate embeddings. Valid values are the following:

      • clip: Generates one embedding for each segment.
      • asset: Generates one embedding for the entire video file. Use this scope for videos up to 10-30 seconds to maintain optimal performance.

      You can specify multiple scopes to generate embeddings at different levels. The default value is ["clip", "asset"].

    • (Optional) segmentation: An object that specifies how the platform divides the video into segments. You can use one of the following strategies:

      • VideoSegmentation_Dynamic: Divides the video into segments that adapt to scene changes. Requires a property named dynamic with a min_duration_sec field specifying the minimum duration in seconds for each segment.
      • VideoSegmentation_Fixed: Divides the video into segments of a fixed length. Requires a property named fixed with a duration_sec field specifying the exact duration in seconds for each segment.

Return value: An object of type TasksCreateResponse containing, among other information, a field named id, which represents the unique identifier of your embedding task. You can use this identifier to track the status of your embedding task.

4. Monitor the status

The platform requires some time to process videos. Poll the status of the embedding task until processing completes. This example uses a loop to check the status every 5 seconds.
Function call: You repeatedly call the embed.v_2.tasks.retrieve function until the task completes.

Parameters:

  • task_id: The unique identifier of your embedding task.

Return value: An object of type EmbeddingTaskResponse containing, among other information, the following fields:

  • status: The current status of the task. The possible values are:
    • processing: The platform is creating the embeddings.
    • ready: Processing is complete. Embeddings are available in the data field.
    • failed: The task failed.
  • data: When the status is ready, this field contains a list of embedding objects (see the sketch after this list). Each embedding object includes:
    • embedding: The embedding vector (a list of floats).
    • embedding_option: The type of embedding (visual, audio, or transcription).
    • embedding_scope: The scope of the embedding (clip or asset).
    • start_sec: The start time of the segment in seconds.
    • end_sec: The end time of the segment in seconds.
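
Continuing from the complete example, the sketch below counts the returned embeddings per type and scope; it relies only on the fields listed above.

from collections import Counter

# Count embeddings per (type, scope) pair, for example ("visual", "clip")
counts = Counter((e.embedding_option, e.embedding_scope) for e in task.data)
for (option, scope), total in sorted(counts.items()):
    print(f"{option}/{scope}: {total} embedding(s)")
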
5. Process the results

This example iterates through the embeddings in the data field and prints the embedding type, scope, time range, dimensions, and the first 10 vector values for each segment.
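
Because all embeddings share the same latent space, you can also compare them directly. The sketch below, continuing from the complete example, ranks clip-level visual embeddings by cosine similarity against the asset-level visual embedding of the same video; the cosine_similarity helper is an illustrative function, not part of the SDK.

import math

def cosine_similarity(a, b):
    # Illustrative helper; not part of the TwelveLabs SDK
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Continuing from the complete example: task.data holds the finished embeddings
visual = [e for e in task.data if e.embedding_option == "visual"]
asset_level = next(e for e in visual if e.embedding_scope == "asset")
clips = [e for e in visual if e.embedding_scope == "clip"]

# Rank segments by how closely they match the whole-video embedding
ranked = sorted(
    clips,
    key=lambda e: cosine_similarity(e.embedding, asset_level.embedding),
    reverse=True,
)
for e in ranked[:3]:
    score = cosine_similarity(e.embedding, asset_level.embedding)
    print(f"{e.start_sec}s - {e.end_sec}s: similarity {score:.3f}")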

Short videos (synchronous)

For videos shorter than 10 minutes, you can use a synchronous approach that returns embeddings immediately without requiring polling.

from twelvelabs import (
    TwelveLabs,
    VideoInputRequest,
    MediaSource,
    # For dynamic segmentation, uncomment the next two lines:
    # VideoSegmentation_Dynamic,
    # VideoSegmentationDynamicDynamic,
    # For fixed segmentation, uncomment the next two lines:
    # VideoSegmentation_Fixed,
    # VideoSegmentationFixedFixed,
)

# 1. Initialize the client
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# 2. Upload a file
asset = client.assets.create(
    method="url",
    url="<YOUR_VIDEO_URL>",  # Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported
    # Or use method="direct" and file=open("<PATH_TO_VIDEO_FILE>", "rb") to upload a file from the local file system
)
print(f"Created asset: id={asset.id}")

# 3. Create video embeddings
response = client.embed.v_2.create(
    input_type="video",
    model_name="marengo3.0",
    video=VideoInputRequest(
        media_source=MediaSource(
            asset_id=asset.id,
            # url="<YOUR_VIDEO_URL>",  # Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported
            # base_64_string="<BASE_64_ENCODED_DATA>",
        ),
        # start_sec=0,
        # end_sec=10,
        # embedding_option=["visual", "audio", "transcription"],
        # embedding_scope=["clip", "asset"],
        # For dynamic segmentation:
        # segmentation=VideoSegmentation_Dynamic(
        #     dynamic=VideoSegmentationDynamicDynamic(
        #         min_duration_sec=3  # Minimum segment duration in seconds
        #     )
        # ),
        # For fixed segmentation:
        # segmentation=VideoSegmentation_Fixed(
        #     fixed=VideoSegmentationFixedFixed(
        #         duration_sec=5  # Exact segment duration in seconds
        #     )
        # ),
    ),
)

# 4. Process the results
print(f"\n{'='*80}")
print(f"EMBEDDINGS SUMMARY: {len(response.data)} total embeddings")
print(f"{'='*80}\n")

for idx, embedding_data in enumerate(response.data, 1):
    print(f"[{idx}/{len(response.data)}] {embedding_data.embedding_option.upper()} | {embedding_data.embedding_scope.upper()}")
    print(f"├─ Time range: {embedding_data.start_sec}s - {embedding_data.end_sec}s")
    print(f"├─ Dimensions: {len(embedding_data.embedding)}")
    print(f"└─ First 10 values: {embedding_data.embedding[:10]}")
    print()

All the fields of the video object work the same way as in the asynchronous approach described above.