Text and image embeddings

This guide shows how to create text and image embeddings using the Marengo video understanding model. For a list of available versions, along with the complete specifications and input requirements for each, see the Marengo page.

The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
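For example, because text and image embeddings live in one latent space, you can compare them directly. The following is a minimal sketch using NumPy and hypothetical vectors (not SDK functionality); real Marengo embeddings would take the place of the stand-in arrays:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical stand-ins for real Marengo embeddings.
text_embedding = np.array([0.12, -0.48, 0.33, 0.07])
image_embedding = np.array([0.10, -0.51, 0.29, 0.11])

# A higher score means the text and the image sit closer together in the shared space.
print(f"Text-to-image similarity: {cosine_similarity(text_embedding, image_embedding):.4f}")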

For details on how your usage is measured and billed, see the Pricing page.

Key concepts

This section explains the key concepts and terminology used in this guide:

  • Asset: Your uploaded content.
  • Embedding: A vector representation of your content.

Workflow

To create text and image embeddings, provide your image and text content to the platform. You can upload your image files as assets, provide a publicly accessible URL, or use base64-encoded data. The platform combines both the visual content from your image and the semantic meaning from your text into a single vector representation. Use these embeddings for similarity search, content classification, clustering, recommendations, or building Retrieval-Augmented Generation (RAG) systems.

This guide demonstrates how to create embeddings by uploading your image file as an asset. This approach is the most flexible because you can reuse assets across multiple operations. Alternatively, you can provide a publicly accessible URL or base64-encoded image data inline to skip the upload step.
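If you choose the inline route, a minimal sketch of preparing base64-encoded image data with Python's standard base64 module might look like this (the file path is a placeholder):

import base64

# Read a local image and encode it as a base64 string for inline submission.
with open("<PATH_TO_IMAGE_FILE>", "rb") as image_file:
    base_64_string = base64.b64encode(image_file.read()).decode("utf-8")

# Pass this value as base_64_string in the media source, as shown in the
# complete example later in this guide.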

Prerequisites

  • To use the platform, you need an API key (a sketch of loading it from an environment variable follows this list):

    1. If you don’t have an account, sign up for a free account.
    2. Go to the API Keys page.
    3. Select the Copy icon next to your key.

  • Install the TwelveLabs SDK for your programming language. For Python, enter the following command:

    $ pip install twelvelabs
  • Your image files must meet the input requirements described on the Marengo page.
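As noted above, you can keep the API key out of your source code by loading it from an environment variable. A minimal sketch (the variable name TWELVELABS_API_KEY is our choice for illustration, not a platform requirement):

import os

from twelvelabs import TwelveLabs

# Read the key from the environment; raises KeyError if the variable is unset.
api_key = os.environ["TWELVELABS_API_KEY"]
client = TwelveLabs(api_key=api_key)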

Complete example

Copy and paste the code below, replacing the placeholders surrounded by <> with your values.

from twelvelabs import TwelveLabs, TextImageInputRequest, MediaSource

# 1. Initialize the client
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# 2. Upload an image
asset = client.assets.create(
    method="url",
    url="<YOUR_IMAGE_URL>"  # Use direct links to raw media files
    # Or use method="direct" and file=open("<PATH_TO_IMAGE_FILE>", "rb") to upload a file from the local file system
)
print(f"Created asset: id={asset.id}")

# 3. Create text and image embeddings
response = client.embed.v_2.create(
    input_type="text_image",
    model_name="marengo3.0",
    text_image=TextImageInputRequest(
        media_source=MediaSource(
            asset_id=asset.id,
            # url="<YOUR_IMAGE_URL>",  # Use direct links to raw media files
            # base_64_string="<BASE_64_ENCODED_IMAGE_DATA>",
        ),
        input_text="<YOUR_TEXT>"
    ),
)

# 4. Process the results
print(f"Number of embeddings: {len(response.data)}")
for embedding_data in response.data:
    print(f"Embedding dimensions: {len(embedding_data.embedding)}")
    print(f"First 10 values: {embedding_data.embedding[:10]}")

Code explanation

1. Import the SDK and initialize the client

Create a client instance to interact with the TwelveLabs Video Understanding Platform.
Function call: You call the constructor of the TwelveLabs class.
Parameters:

  • api_key: The API key to authenticate your requests to the platform.

Return value: An object of type TwelveLabs configured for making API calls.

2. Upload an image

Upload an image file to create an asset. For details about the available upload methods and the corresponding limits, see the Upload methods page.
Function call: You call the assets.create function.
Parameters:

  • method: The upload method for your asset. Use url for a publicly accessible URL or direct to upload a local file. This example uses url.
  • url or file: The publicly accessible URL of your image file or an opened file object in binary read mode. This example uses url.

Return value: An object of type Asset. This object contains, among other information, a field named id representing the unique identifier of your asset.
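For reference, a minimal sketch of the local-file alternative mentioned in the code comments, using method="direct" with an opened file object:

# Upload a local image file instead of providing a publicly accessible URL.
with open("<PATH_TO_IMAGE_FILE>", "rb") as image_file:
    asset = client.assets.create(
        method="direct",
        file=image_file,
    )
print(f"Created asset: id={asset.id}")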

3. Create text and image embeddings

Function call: You call the embed.v_2.create function.
Parameters:

  • input_type: The type of content. Set this parameter to text_image.
  • model_name: The model you want to use. This example uses marengo3.0.
  • text_image: A TextImageInputRequest object containing the following properties:
    • media_source: An object specifying the source of the image file. Specify one of the following:

      • asset_id: The unique identifier of an asset from a previous upload.

      • url: The publicly accessible URL of the image file.

      • base_64_string: The base64-encoded image data.

This example uses the asset ID from the previous step; a sketch using the url option follows this list.

    • input_text: The text for which you wish to create an embedding.

Return value: An object of type EmbeddingSuccessResponse containing a field named data, which is a list of embedding objects. Each embedding object includes the following fields:

  • embedding: An array of floats representing the embedding vector.
  • embedding_option: The type of embedding generated.
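As referenced above, here is a minimal sketch of the URL alternative from the commented-out lines in the complete example, which skips the asset upload entirely. It reuses the client and imports from the complete example:

# Create embeddings directly from a publicly accessible image URL.
response = client.embed.v_2.create(
    input_type="text_image",
    model_name="marengo3.0",
    text_image=TextImageInputRequest(
        media_source=MediaSource(url="<YOUR_IMAGE_URL>"),  # direct link to a raw media file
        input_text="<YOUR_TEXT>",
    ),
)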
4. Process the results

This example prints the number of embeddings, their dimensions, and the first 10 values of each embedding.
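To go one step further, here is a minimal sketch of the similarity-search use case mentioned earlier: rank a set of stored embeddings against a query embedding by cosine similarity. The stored vectors below are random stand-ins for embeddings you would have created with the platform; this is not SDK functionality.

import numpy as np

# Query embedding from the example above; stored embeddings are hypothetical.
query = np.asarray(response.data[0].embedding)
stored = np.random.rand(5, len(query))  # one row per previously embedded item

# Normalize the rows and the query, then rank by cosine similarity in one pass.
stored_norm = stored / np.linalg.norm(stored, axis=1, keepdims=True)
query_norm = query / np.linalg.norm(query)
scores = stored_norm @ query_norm

for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"Rank {rank}: stored item {idx}, similarity {scores[idx]:.4f}")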