Create sync embeddings

The EmbedClient.V2Client class provides methods to create embeddings synchronously for multimodal content, returning the results immediately in the response.

Note

This class only supports Marengo version 3.0 or newer.

When to use this class:

  • Create embeddings for text, images, audio, or video content
  • Retrieve immediate results without waiting for background processing
  • Process audio or video content up to 10 minutes in duration

Do not use this class for:

  • Audio or video content longer than 10 minutes

Methods

Create sync embeddings

Description: This method synchronously creates embeddings for multimodal content and returns the results immediately in the response.

Text:

  • Maximum length: 500 tokens

Images:

  • Formats: JPEG, PNG
  • Minimum size: 128x128 pixels
  • Maximum file size: 5 MB

Audio and video:

  • Maximum duration: 10 minutes
  • Maximum file size for base64 encoded strings: 36 MB
  • Audio formats: WAV (uncompressed), MP3 (lossy), FLAC (lossless)
  • Video formats: FFmpeg supported formats
  • Video resolution: 360x360 to 5184x2160 pixels
  • Aspect ratio: Between 1:2.4 and 2.4:1
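
The resolution and aspect-ratio limits above can be checked client-side before uploading. A minimal sketch (the bounds come from the list above, assuming the minimum and maximum apply per dimension; this helper is not part of the SDK):

```python
def check_video_dimensions(width: int, height: int) -> list[str]:
    """Validate a video against the documented limits: resolution from
    360x360 up to 5184x2160, aspect ratio between 1:2.4 and 2.4:1.
    Returns a list of violations; an empty list means the video passes."""
    problems = []
    if not (360 <= width <= 5184 and 360 <= height <= 2160):
        problems.append(f"resolution {width}x{height} outside 360x360-5184x2160")
    ratio = width / height
    if not (1 / 2.4 <= ratio <= 2.4):
        problems.append(f"aspect ratio {width}:{height} outside 1:2.4-2.4:1")
    return problems

print(check_video_dimensions(1920, 1080))  # → []
```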

Function signature and example

def create(
    self,
    *,
    input_type: CreateEmbeddingsRequestInputType,
    model_name: CreateEmbeddingsRequestModelName,
    text: typing.Optional[TextInputRequest] = OMIT,
    image: typing.Optional[ImageInputRequest] = OMIT,
    text_image: typing.Optional[TextImageInputRequest] = OMIT,
    audio: typing.Optional[AudioInputRequest] = OMIT,
    video: typing.Optional[VideoInputRequest] = OMIT,
    multi_input: typing.Optional[MultiInputRequest] = OMIT,
    request_options: typing.Optional[RequestOptions] = None,
) -> EmbeddingSuccessResponse

Parameters

  • input_type (CreateEmbeddingsRequestInputType, required): The type of content for the embeddings. Values: text, image, text_image, audio, video, multi_input.
  • model_name (CreateEmbeddingsRequestModelName, required): The video understanding model you wish to use. Value: marengo3.0.
  • text (TextInputRequest, optional): Text input configuration. Required when input_type is text. See TextInputRequest for details.
  • image (ImageInputRequest, optional): Image input configuration. Required when input_type is image. See ImageInputRequest for details.
  • text_image (TextImageInputRequest, optional): Combined text and image input configuration. Required when input_type is text_image. See TextImageInputRequest for details.
  • audio (AudioInputRequest, optional): Audio input configuration. Required when input_type is audio. See AudioInputRequest for details.
  • video (VideoInputRequest, optional): Video input configuration. Required when input_type is video. See VideoInputRequest for details.
  • multi_input (MultiInputRequest, optional): Multiple images and optional text configuration. Required when input_type is multi_input. See MultiInputRequest for details.
  • request_options (RequestOptions, optional): Request-specific configuration.
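
Each input_type must be paired with its matching payload parameter. A small pre-flight check of that rule might look like the sketch below (the mapping comes from the parameter descriptions above; the helper itself is not part of the SDK):

```python
# Map each input_type to the keyword argument that must accompany it.
# Client-side sketch only; the platform enforces the same rule server-side.
REQUIRED_PAYLOAD = {
    "text": "text",
    "image": "image",
    "text_image": "text_image",
    "audio": "audio",
    "video": "video",
    "multi_input": "multi_input",
}

def validate_payload(input_type: str, **kwargs) -> None:
    field = REQUIRED_PAYLOAD.get(input_type)
    if field is None:
        raise ValueError(f"unknown input_type: {input_type!r}")
    if kwargs.get(field) is None:
        raise ValueError(f"input_type={input_type!r} requires the {field!r} parameter")

validate_payload("text", text={"input_text": "a person surfing"})  # passes
```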

TextInputRequest

The TextInputRequest class specifies the configuration for processing text content. Required when input_type is text.

  • input_text (str, required): The text for which you wish to create an embedding. The maximum length is 500 tokens.

ImageInputRequest

The ImageInputRequest class specifies the configuration for processing image content. Required when input_type is image.

  • media_source (MediaSource, required): Specifies the source of the image file. See MediaSource for details.

TextImageInputRequest

The TextImageInputRequest class specifies the configuration for processing combined text and image content. Required when input_type is text_image.

  • media_source (MediaSource, required): Specifies the source of the image file. See MediaSource for details.
  • input_text (str, required): The text for which you wish to create an embedding. The maximum length is 500 tokens.

AudioInputRequest

The AudioInputRequest class specifies the configuration for processing audio content. Required when input_type is audio.

  • media_source (MediaSource, required): Specifies the source of the audio file. See MediaSource for details.
  • start_sec (float, optional): The start time in seconds for processing the audio file. Use this parameter to process a portion of the audio file starting from a specific time. Default: 0 (start from the beginning).
  • end_sec (float, optional): The end time in seconds for processing the audio file. Use this parameter to process a portion of the audio file ending at a specific time. The end time must be greater than the start time. Default: the end of the audio file.
  • segmentation (AudioSegmentation, optional): Specifies how the platform divides the audio into segments. When combined with embedding_scope=["clip"], it creates separate embeddings for each segment. Use this to generate embeddings for specific portions of your audio. See AudioSegmentation for details.
  • embedding_option (List[str], optional): The types of embeddings you wish to generate. You can specify multiple options to generate different types of embeddings for the same audio. Values:
    - audio: Generates embeddings based on audio content (sounds, music, effects).
    - transcription: Generates embeddings based on transcribed speech.
  • embedding_scope (List[str], optional): The scope for which you wish to generate embeddings. You can specify multiple scopes to generate embeddings at different levels. Values:
    - clip: Generates one embedding for each segment.
    - asset: Generates one embedding for the entire audio file.
  • embedding_type (List[str], optional): Specifies how to structure the embedding. Use this parameter only when embedding_option specifies two or more values. Specify both values to receive separate and fused embeddings in the same response. Default: separate_embedding. Values:
    - separate_embedding: Returns separate embeddings per modality specified in embedding_option.
    - fused_embedding: Returns a single embedding that combines all modalities into one vector.

VideoInputRequest

The VideoInputRequest class specifies the configuration for processing video content. Required when input_type is video.

  • media_source (MediaSource, required): Specifies the source of the video file. See MediaSource for details.
  • start_sec (float, optional): The start time in seconds for processing the video file. Use this parameter to process a portion of the video file starting from a specific time. Default: 0 (start from the beginning).
  • end_sec (float, optional): The end time in seconds for processing the video file. Use this parameter to process a portion of the video file ending at a specific time. The end time must be greater than the start time. Default: the end of the video file.
  • segmentation (VideoSegmentation, optional): Specifies how the platform divides the video into segments. When combined with embedding_scope=["clip"], it creates separate embeddings for each segment. Supports fixed-duration segments or dynamic segmentation that adapts to scene changes. See VideoSegmentation for details.
  • embedding_option (List[str], optional): The types of embeddings to generate for the video. You can specify multiple options to generate different types of embeddings for the same video. Default: ["visual", "audio", "transcription"]. Values:
    - visual: Generates embeddings based on visual content (scenes, objects, actions).
    - audio: Generates embeddings based on audio content (sounds, music, effects).
    - transcription: Generates embeddings based on transcribed speech.
  • embedding_scope (List[str], optional): The scope for which you wish to generate embeddings. You can specify multiple scopes to generate embeddings at different levels. Default: ["clip", "asset"]. Values:
    - clip: Generates one embedding for each segment.
    - asset: Generates one embedding for the entire video file. Use this scope for videos of up to 10-30 seconds to maintain optimal performance.
  • embedding_type (List[str], optional): Specifies how to structure the embedding. Include this parameter only when embedding_option contains at least two values. Specify both values to receive separate and fused embeddings in the same response. Default: separate_embedding. Values:
    - separate_embedding: Returns separate embeddings per modality specified in embedding_option.
    - fused_embedding: Returns a single embedding that combines all modalities into one vector.
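
Two easy-to-miss rules above — end_sec must be greater than start_sec, and embedding_type is only meaningful when embedding_option lists two or more modalities — can be pre-checked locally. A sketch (not SDK code):

```python
def precheck(start_sec=None, end_sec=None, embedding_option=None, embedding_type=None):
    """Client-side sanity checks mirroring the documented parameter rules
    for audio and video requests."""
    if start_sec is not None and end_sec is not None and end_sec <= start_sec:
        raise ValueError("end_sec must be greater than start_sec")
    if embedding_type and len(embedding_option or []) < 2:
        raise ValueError("embedding_type requires two or more embedding_option values")

# Passes: valid time range, two modalities fused into one vector.
precheck(start_sec=0, end_sec=30, embedding_option=["visual", "audio"],
         embedding_type=["fused_embedding"])
```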

MultiInputRequest

The MultiInputRequest class specifies the configuration for processing multiple images and optional text. Required when input_type is multi_input.

  • input_text (str, optional): Text to combine with the images when generating the embedding. Usage options:
    - Omit this field to create an embedding from images only.
    - Provide plain text to add context. Example: “A person cooking.”
    - Use image references to describe relationships between specific images. The format is <@name>, where name matches the name field of a media source. Example: “A person wearing <@outfit> and holding <@accessory>.”
  • media_sources (List[MultiInputMediaSource], required): An array of up to 10 images to include in the embedding. The platform processes images in the order they appear in the array. If you use image references in the input_text parameter, each must have a corresponding image with a matching name field. If an image reference has no match, the request fails.
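
Because a request with an unmatched image reference fails, it can be worth resolving the <@name> references before sending. A sketch using only the standard library (not part of the SDK; the media sources are represented here as plain dicts):

```python
import re

def unresolved_refs(input_text: str, media_sources: list[dict]) -> set[str]:
    """Return the image references in input_text (<@name> format) that have
    no media source with a matching "name" field. A non-empty result means
    the request would fail."""
    refs = set(re.findall(r"<@([^>]+)>", input_text))
    names = {source.get("name") for source in media_sources}
    return refs - names

missing = unresolved_refs(
    "A person wearing <@outfit> and holding <@accessory>.",
    [{"name": "outfit", "url": "https://example.com/outfit.jpg"}],
)
print(missing)  # → {'accessory'}
```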

MediaSource

The MediaSource class specifies the source of the media file. Provide exactly one of the following:

  • base64_string (str): The base64-encoded media data.
  • url (str): The publicly accessible URL of the media file. Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported.
  • asset_id (str): The unique identifier of an asset from a direct or multipart upload.
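
When you pass audio or video as a base64_string, the payload is capped at 36 MB (see the limits above). A quick local check, assuming the limit applies to the encoded string rather than the raw file:

```python
import base64

MAX_B64_BYTES = 36 * 1024 * 1024  # 36 MB cap for base64-encoded audio/video

def encode_media(raw: bytes) -> str:
    """Base64-encode raw media bytes and reject payloads over the cap.
    Base64 expands data by roughly 4/3, so ~27 MB of raw media is the
    practical input limit."""
    encoded = base64.b64encode(raw).decode("ascii")
    if len(encoded) > MAX_B64_BYTES:
        raise ValueError(f"encoded payload is {len(encoded)} bytes; limit is {MAX_B64_BYTES}")
    return encoded

s = encode_media(b"\x00" * 300)  # 300 raw bytes encode to 400 base64 characters
print(len(s))  # → 400
```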

MultiInputMediaSource

A class specifying an image source for multi-input embeddings. You must provide exactly one of url, base_64_string, or asset_id.

  • name (str, optional): The unique identifier for this media source. This field is required when input_text references this image.
  • media_type (MultiInputMediaSourceMediaType, optional): The type of media. Value: image.
  • url (str): The publicly accessible URL of the image file. Use direct links to raw image files. Image hosting platforms and cloud storage sharing links are not supported.
  • base_64_string (str): The base64-encoded image data.
  • asset_id (str): The unique identifier of an asset from a direct or multipart upload.

AudioSegmentation

The AudioSegmentation class specifies how the platform divides the audio into segments using fixed-length intervals.

  • strategy (Literal["fixed"], required): The segmentation strategy. Value: fixed.
  • fixed (AudioSegmentationFixed, required): Configuration for fixed segmentation. See AudioSegmentationFixed for details.

AudioSegmentationFixed

The AudioSegmentationFixed class configures fixed-length segmentation for audio.

  • duration_sec (int, required): The duration in seconds for each segment. The platform divides the audio into segments of this exact length. The final segment may be shorter if the audio duration is not evenly divisible.

Example: With duration_sec: 5, a 12-second audio file produces segments: [0-5s], [5-10s], [10-12s].
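
The segment arithmetic from the example above can be reproduced locally (an illustration only; the platform performs this split server-side):

```python
def fixed_segments(total_sec: float, duration_sec: int) -> list[tuple[float, float]]:
    """Split a media timeline into fixed-length segments. The final segment
    is shorter when total_sec is not evenly divisible by duration_sec."""
    bounds = []
    start = 0.0
    while start < total_sec:
        end = min(start + duration_sec, total_sec)
        bounds.append((start, end))
        start = end
    return bounds

print(fixed_segments(12, 5))  # → [(0.0, 5.0), (5.0, 10.0), (10.0, 12.0)]
```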

VideoSegmentation

The VideoSegmentation type specifies how the platform divides the video into segments. Use one of the following:

Fixed segmentation: Divides the video into equal-length segments:

  • strategy (Literal["fixed"], required): The segmentation strategy. Value: fixed.
  • fixed (VideoSegmentationFixedFixed, required): Configuration for fixed segmentation. See VideoSegmentationFixedFixed for details.

Dynamic segmentation: Divides the video into adaptive segments based on scene changes:

  • strategy (Literal["dynamic"], required): The segmentation strategy. Value: dynamic.
  • dynamic (VideoSegmentationDynamicDynamic, required): Configuration for dynamic segmentation. See VideoSegmentationDynamicDynamic for details.

VideoSegmentationFixedFixed

The VideoSegmentationFixedFixed class configures fixed-length segmentation for video.

  • duration_sec (int, required): The duration in seconds for each segment. The platform divides the video into segments of this exact length. The final segment may be shorter if the video duration is not evenly divisible.

Example: With duration_sec: 5, a 12-second video produces segments: [0-5s], [5-10s], [10-12s].

VideoSegmentationDynamicDynamic

The VideoSegmentationDynamicDynamic class configures dynamic segmentation for video based on scene changes.

  • min_duration_sec (int, required): The minimum duration in seconds for each segment. The platform divides the video into segments that are at least this long. Segments adapt to scene changes and content boundaries and may be longer than the minimum.

Example: With min_duration_sec: 3, segments might be: [0-3.2s], [3.2-7.8s], [7.8-12.1s].

Return value

Returns an EmbeddingSuccessResponse object containing the embedding results.

The EmbeddingSuccessResponse class contains the following properties:

  • data (List[EmbeddingData]): An array of embedding results.
  • metadata (Optional[EmbeddingMediaMetadata]): Metadata about the media content.

The EmbeddingData class contains the following properties:

  • embedding (List[float]): The embedding vector for the content.
  • embedding_option (Optional[EmbeddingDataEmbeddingOption]): The type of embedding. Values:
    - visual: Embedding based on visual content (video only).
    - audio: Embedding based on audio content.
    - transcription: Embedding based on transcribed speech.
    - fused: Embedding based on a combination of the modalities specified in the request. The platform returns this embedding only for video and audio content, and only when the embedding_type parameter in the request includes fused_embedding.
    - null: For text and image embeddings.
  • embedding_scope (Optional[EmbeddingDataEmbeddingScope]): The scope of the embedding. Values: clip, asset.
  • start_sec (Optional[float]): The start time in seconds for this embedding segment.
  • end_sec (Optional[float]): The end time in seconds for this embedding segment.
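
Embedding vectors from the response are typically compared with cosine similarity. For example, with plain Python and no SDK dependency:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length:
    the dot product divided by the product of the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
```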

API Reference

Create sync embeddings