This method creates embeddings for text, image, and audio content.
Before you create an embedding, ensure that the prerequisites are met; at minimum, you need a valid API key.
Parameters for embeddings:

model_name
: The video understanding model you want to use. Example: “Marengo-retrieval-2.7”.

text
: Text for which to create an embedding.

image_url
: Publicly accessible URL of your image file.

image_file
: Local image file.

audio_url
: Publicly accessible URL of your audio file.

audio_file
: Local audio file.

The sections below describe each parameter in detail.

model_name
: The name of the model you want to use. The following models are available:
Marengo-retrieval-2.7

text
: The text for which you wish to create an embedding. Text embeddings are limited to 77 tokens. If the text exceeds this limit, the platform truncates it according to the value of the text_truncate parameter described below.
Example: “Man with a dog crossing the street”

text_truncate
: Specifies how the platform truncates text that exceeds 77 tokens to fit the maximum length allowed for an embedding. This parameter can take one of the following values:
start
: The platform will truncate the start of the provided text.
end
: The platform will truncate the end of the provided text.
none
: The platform will return an error if the text is longer than the maximum token limit.
Default: end.
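
To illustrate how these parameters fit together, below is a minimal sketch of a text-embedding request. The endpoint URL and the x-api-key header are assumptions made for this example; substitute the values from the platform's API reference.

```python
import requests

EMBED_URL = "https://api.twelvelabs.io/v1.3/embed"  # assumed endpoint
API_KEY = "YOUR_API_KEY"                            # assumed auth scheme

# Create a text embedding, truncating the end of the text if it
# exceeds the 77-token limit (the default behavior).
response = requests.post(
    EMBED_URL,
    headers={"x-api-key": API_KEY},
    data={
        "model_name": "Marengo-retrieval-2.7",
        "text": "Man with a dog crossing the street",
        "text_truncate": "end",
    },
)
response.raise_for_status()
print(response.json())
```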

image_url
: The publicly accessible URL of the image for which you wish to create an embedding. This parameter is required for image embeddings if image_file is not provided.
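
A sketch of an image-embedding request follows, reusing the assumed endpoint and header from the example above. The second variant uploads a local file via image_file; the multipart upload mechanics are an assumption as well.

```python
import requests

EMBED_URL = "https://api.twelvelabs.io/v1.3/embed"  # assumed endpoint
API_KEY = "YOUR_API_KEY"

# Variant 1: embed an image that is reachable by URL.
response = requests.post(
    EMBED_URL,
    headers={"x-api-key": API_KEY},
    data={
        "model_name": "Marengo-retrieval-2.7",
        "image_url": "https://example.com/photos/street.jpg",
    },
)
response.raise_for_status()

# Variant 2: embed a local image file instead (image_file),
# assuming the endpoint accepts multipart form uploads.
with open("street.jpg", "rb") as f:
    response = requests.post(
        EMBED_URL,
        headers={"x-api-key": API_KEY},
        data={"model_name": "Marengo-retrieval-2.7"},
        files={"image_file": f},
    )
response.raise_for_status()
```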

audio_url
: The publicly accessible URL of the audio file for which you wish to create an embedding. This parameter is required for audio embeddings if audio_file is not provided.

audio_start_offset_sec
: Specifies the start time, in seconds, from which the platform generates the audio embeddings. This parameter allows you to skip the initial portion of the audio during processing.
Default: 0.
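
Below is a sketch of an audio-embedding request that skips the first 30 seconds of the file. The endpoint and header are the same assumptions as in the earlier examples.

```python
import requests

EMBED_URL = "https://api.twelvelabs.io/v1.3/embed"  # assumed endpoint
API_KEY = "YOUR_API_KEY"

# Create an audio embedding, starting 30 seconds into the file.
response = requests.post(
    EMBED_URL,
    headers={"x-api-key": API_KEY},
    data={
        "model_name": "Marengo-retrieval-2.7",
        "audio_url": "https://example.com/clips/interview.mp3",
        "audio_start_offset_sec": 30,  # skip the first 30 seconds
    },
)
response.raise_for_status()
print(response.json())
```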
A successful response indicates that the embedding has been created.
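
Because text_truncate: none makes the platform return an error for over-long text, callers should check the response status before using the result. The sketch below again assumes the endpoint and header used above; since the response schema is not documented here, it only inspects the HTTP status and the raw JSON body.

```python
import requests

response = requests.post(
    "https://api.twelvelabs.io/v1.3/embed",  # assumed endpoint
    headers={"x-api-key": "YOUR_API_KEY"},
    data={
        "model_name": "Marengo-retrieval-2.7",
        "text": "Man with a dog crossing the street",
        "text_truncate": "none",  # return an error instead of truncating
    },
)
if response.ok:
    print("Embedding created:", response.json())
else:
    print("Request failed:", response.status_code, response.text)
```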