This guide shows how you can create image embeddings.
The following table lists the available models for generating image embeddings and their key characteristics:
The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
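Because every modality lands in the same latent space, an image embedding can be compared directly against, say, a text embedding with cosine similarity. Here is a minimal, self-contained illustration in plain Python; the vectors are made up for demonstration, and real embeddings returned by the platform are much longer:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors from the same latent space."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Made-up stand-ins for an image embedding and a text embedding.
image_vec = [0.12, -0.34, 0.56, 0.08]
text_vec = [0.10, -0.30, 0.60, 0.05]
print(round(cosine_similarity(image_vec, text_vec), 4))
```

A higher score means the two pieces of content are closer in the shared space, which is what makes any-to-any search possible.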
To use the platform, you need an API key:
Ensure the TwelveLabs SDK is installed on your computer:
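If the SDK is not yet installed, it can typically be installed with pip; the package name `twelvelabs` is assumed here:

```shell
# Install or upgrade the TwelveLabs Python SDK (package name assumed to be `twelvelabs`).
pip install --upgrade twelvelabs
```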
The images you wish to use must meet the following requirements:
This complete example shows how you can create image embeddings. Ensure you replace the placeholders surrounded by <>
with your values.
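Based on the step-by-step description below, the complete example looks roughly like the following sketch. It assumes the TwelveLabs Python SDK and requires a valid API key to actually run; the placeholder values surrounded by `<>` are yours to fill in:

```python
from twelvelabs import TwelveLabs

# Replace the placeholders surrounded by <> with your values.
client = TwelveLabs(api_key="<YOUR_API_KEY>")

res = client.embed.create(
    model_name="Marengo-retrieval-2.7",
    image_url="<YOUR_IMAGE_URL>",  # or image_file="<YOUR_FILE_PATH>" for a local file
)

if res.image_embedding is not None and res.image_embedding.segments:
    for segment in res.image_embedding.segments:
        # Each segment carries the embedding vector in its `float_` field.
        print(f"Embedding dimensions: {len(segment.float_)}")
        print(f"First values: {segment.float_[:5]}")
```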
Create a client instance to interact with the TwelveLabs Video Understanding Platform.

Function call: You call the constructor of the `TwelveLabs` class.

Parameters:
- `api_key`: The API key to authenticate your requests to the platform.

Return value: An object of type `TwelveLabs` configured for making API calls.
Function call: You call the `embed.create` function.

Parameters:
- `model_name`: The name of the model you want to use ("Marengo-retrieval-2.7").
- `image_url` or `image_file`: The publicly accessible URL or the local path of your image file.

Return value: The response contains the following fields:
- `image_embedding`: An object that contains the embedding data for your image file. It includes the following fields:
  - `segments`: An array of objects, each of which contains the following:
    - `float_`: An array of floats representing the embedding.
  - `metadata`: An object that contains metadata about the embedding.
- `model_name`: The name of the video understanding model the platform has used to create this embedding.
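The fields above map onto a response payload shaped roughly like the following; the values here are fabricated for illustration, and real vectors are much longer:

```python
# Hypothetical response payload mirroring the fields described above.
response = {
    "image_embedding": {
        "segments": [
            {"float_": [0.011, -0.027, 0.043]},
        ],
        "metadata": {},
    },
    "model_name": "Marengo-retrieval-2.7",
}

# Pull out the embedding vector of the first segment.
vector = response["image_embedding"]["segments"][0]["float_"]
print(len(vector), response["model_name"])
```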