This guide shows how you can create text embeddings.
The following table lists the available model for generating text embeddings and its key characteristics:

| Model | Supported modalities | Notes |
|---|---|---|
| Marengo-retrieval-2.7 | Text, video, audio, image | All modalities are embedded in the same latent space. |

The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
To use the platform, you need an API key. If you don't have one, sign up for a TwelveLabs account and retrieve your key from the Playground.
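A common pattern, sketched below, is to store the key in an environment variable so it never appears in your source code. The variable name `TL_API_KEY` is an assumption of this sketch, not a platform requirement:

```sh
export TL_API_KEY=<YOUR_API_KEY>
```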
Ensure the TwelveLabs SDK is installed on your computer:
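For example, with pip (the SDK is published on PyPI as `twelvelabs`):

```sh
pip install twelvelabs
```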
This complete example shows how you can create text embeddings. Before running it, replace the placeholders surrounded by <> with your values.
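A minimal sketch of such an example is shown below. It assumes the Python SDK and uses the parameter and response field names documented in the rest of this section; verify them against your SDK version:

```python
from twelvelabs import TwelveLabs

# Create a client instance. Replace <YOUR_API_KEY> with your actual key.
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Create a text embedding using the Marengo-retrieval-2.7 model.
res = client.embed.create(
    model_name="Marengo-retrieval-2.7",
    text="<YOUR_TEXT>",
    text_truncate="start",  # Truncate the start of text longer than 77 tokens.
)

print(f"Created a text embedding using {res.model_name}")

# Each segment holds an array of floats representing the embedding.
if res.text_embedding is not None and res.text_embedding.segments is not None:
    for segment in res.text_embedding.segments:
        print(f"First values: {segment.float_[:5]}")
```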
Create a client instance to interact with the TwelveLabs Video Understanding Platform.

- Function call: You call the constructor of the `TwelveLabs` class.
- Parameters:
  - `api_key`: The API key to authenticate your requests to the platform.
- Return value: An object of type `TwelveLabs` configured for making API calls.
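For example, reading the key from the environment variable suggested earlier (an assumption of this sketch):

```python
import os
from twelvelabs import TwelveLabs

# Authenticate with the API key stored in the TL_API_KEY environment variable.
client = TwelveLabs(api_key=os.environ["TL_API_KEY"])
```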
Create the text embeddings; see the sketch after the response field list below.

- Function call: You call the `embed.create` method.
- Parameters:
  - `model_name`: The name of the model you want to use (`"Marengo-retrieval-2.7"`).
  - `text`: The text for which you wish to create an embedding.
  - `text_truncate`: A string that specifies how the platform truncates text that exceeds 77 tokens, the maximum length allowed for an embedding. This parameter can take one of the following values:
    - `start`: The platform truncates the start of the provided text.
    - `end`: The platform truncates the end of the provided text. This is the default value.
    - `none`: The platform returns an error if the text is longer than the maximum token limit.
- Return value: The response contains the following fields:
  - `text_embedding`: An object that contains the embedding data for your text. It includes the following fields:
    - `segments`: An array of objects, each of which contains the following field:
      - `float_`: An array of floats representing the embedding.
    - `metadata`: An object that contains metadata about the embedding.
  - `model_name`: The name of the video understanding model the platform has used to create this embedding.
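Putting the call and the response together, a minimal sketch of this step looks like the following. It uses the field names documented above; verify them against your SDK version:

```python
# Request an embedding; text exceeding 77 tokens is truncated at the end by default.
res = client.embed.create(
    model_name="Marengo-retrieval-2.7",
    text="<YOUR_TEXT>",
    text_truncate="end",
)

# Inspect the response fields described above.
print(res.model_name)  # Model the platform used to create the embedding.
if res.text_embedding is not None and res.text_embedding.segments is not None:
    for segment in res.text_embedding.segments:
        print(len(segment.float_))  # Dimensionality of the embedding vector.
```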