Create text embeddings
This guide shows how you can create text embeddings.
The following table lists the available models for generating text embeddings and their key characteristics:
The “Marengo-retrieval-2.6” video understanding engine generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
To create text embeddings, invoke the `create` method of the `embed` class, specifying the following parameters:
- `engine_name`: The name of the video understanding engine you want to use.
- `text`: The text for which you want to create an embedding.
- (Optional) `text_truncate`: Specifies the behavior for text that exceeds 77 tokens. It can take one of the following values:
  - `start`: Truncate the beginning of the text.
  - `end`: Truncate the end of the text (default).
  - `none`: Return an error if the text exceeds the token limit.
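For example, the sketch below passes `text_truncate="none"` so that overly long text is rejected instead of truncated. It assumes the Python SDK (the `twelvelabs` package and its `TwelveLabs` client); because the specific exception type is not covered in this guide, a generic handler is used:

```python
from twelvelabs import TwelveLabs

# Assumes the Python SDK; replace the placeholder with your API key.
client = TwelveLabs(api_key="<YOUR_API_KEY>")

try:
    # With text_truncate="none", text longer than 77 tokens causes an
    # error instead of being truncated at the end (the default).
    res = client.embed.create(
        engine_name="Marengo-retrieval-2.6",
        text="<YOUR_TEXT>",
        text_truncate="none",
    )
except Exception as err:
    print(f"Could not create the embedding: {err}")
```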
The response is an object containing the following fields:
- `engine_name`: The name of the engine the platform has used to create this embedding.
- `text_embedding`: An object that contains the embedding.
For a description of each field in the request and response, see the Create embeddings for text, image, and audio page.
Prerequisites
- You’re familiar with the concepts that are described on the Platform overview page.
- To use the platform, you need an API key.
Example
The example code below creates a text embedding using the default behavior for handling text that is too long. Ensure you replace the placeholders surrounded by `<>` with your values.
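The following is a minimal sketch assuming the Python SDK (the `twelvelabs` package and its `TwelveLabs` client); the response attributes used for printing, such as `text_embedding.float`, follow the response fields described above and may differ in your SDK version:

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="<YOUR_API_KEY>")

# text_truncate is omitted, so text longer than 77 tokens is truncated
# at the end (the default behavior).
res = client.embed.create(
    engine_name="Marengo-retrieval-2.6",
    text="<YOUR_TEXT>",
)

# Print the fields of the response object described above.
print(f"Created a text embedding: engine_name={res.engine_name}")
if res.text_embedding is not None:
    print(f"  text_embedding: {res.text_embedding.float}")
```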
The output should look similar to the following one:
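The values shown here are illustrative and truncated rather than taken from an actual response:

```
Created a text embedding: engine_name=Marengo-retrieval-2.6
  text_embedding: [0.043043762, -0.016232355, 0.009377052, ...]
```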