Create text embeddings
This guide shows how you can create text embeddings.
The following table lists the available models for generating text embeddings and their key characteristics:
The “Marengo-retrieval-2.7” video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
To create text embeddings, invoke the create method of the embed class, specifying the following parameters:
- `model_name`: The name of the video understanding model you want to use.
- `text`: The text for which you want to create an embedding.
- (Optional) `text_truncate`: Specifies the behavior for text that exceeds 77 tokens. It can take one of the following values:
  - `start`: Truncate the beginning of the text.
  - `end`: Truncate the end of the text (default).
  - `none`: Return an error if the text exceeds the token limit.
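For instance, a minimal request that sets the optional `text_truncate` parameter might look like the following sketch. It assumes the TwelveLabs Python SDK and uses a placeholder API key:

```python
from twelvelabs import TwelveLabs

# Initialize the client. Replace the placeholder with your API key.
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Request a text embedding. text_truncate="end" mirrors the default
# behavior of truncating the end of text longer than 77 tokens.
res = client.embed.create(
    model_name="Marengo-retrieval-2.7",
    text="A man walking a dog on a beach at sunset",
    text_truncate="end",
)
```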
The response is an object containing the following fields:
- `model_name`: The name of the model the platform has used to create this embedding.
- `text_embedding`: An object that contains the embedding.
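Continuing the sketch above, you can read these fields from the response object. Note that the attribute names used to access the vector itself (`segments` and `embeddings_float` below) are assumptions and may differ between SDK versions:

```python
# Inspect the response fields described above.
print(f"model_name={res.model_name}")

# The attributes below are assumptions about how the SDK exposes the
# embedding vector; adjust them to match your SDK version.
if res.text_embedding is not None and res.text_embedding.segments:
    vector = res.text_embedding.segments[0].embeddings_float
    print(f"dimensions={len(vector)}")
    print(f"first values={vector[:5]}")
```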
For a description of each field in the request and response, see the Create embeddings for text, image, and audio page.
Prerequisites
- You’re familiar with the concepts that are described on the Platform overview page.
- You have an API key. To retrieve your API key, navigate to the API Key page and log in with your credentials. Then, select the Copy icon to the right of your API key to copy it to your clipboard.
Example
The example code below creates a text embedding using the default behavior for handling text that is too long. Ensure you replace the placeholders surrounded by `<>` with your values.
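Below is a minimal sketch using the TwelveLabs Python SDK; the attribute names used to read the embedding vector (`segments` and `embeddings_float`) are assumptions and may vary between SDK versions:

```python
from twelvelabs import TwelveLabs

# Initialize the client. Replace the placeholder with your API key.
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Create a text embedding. Omitting text_truncate uses the default
# behavior, which truncates the end of text longer than 77 tokens.
res = client.embed.create(
    model_name="Marengo-retrieval-2.7",
    text="<YOUR_TEXT>",
)

print(f"Model: {res.model_name}")

# The attributes below (`segments`, `embeddings_float`) are assumptions
# about how the SDK exposes the vector; adjust to match your SDK version.
if res.text_embedding is not None and res.text_embedding.segments:
    vector = res.text_embedding.segments[0].embeddings_float
    print(f"Embedding dimensions: {len(vector)}")
    print(f"First values: {vector[:5]}")
```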
The output should look similar to the following one:
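The values shown here are illustrative and depend on your input text:

```
Model: Marengo-retrieval-2.7
Embedding dimensions: 1024
First values: [0.0123, -0.0456, 0.0789, -0.0012, 0.0345]
```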