Async analysis

The AnalyzeAsyncClient.TasksClient class provides methods to analyze videos asynchronously, generating custom text or extracting structured, timestamped segments. The platform supports two analysis modes: general analysis (prompt-based text generation), available with both Pegasus 1.2 and Pegasus 1.5, and video segmentation, available with Pegasus 1.5 only.

When to use this class:

  • Generate custom text from your video using a prompt (Pegasus 1.2 or Pegasus 1.5)
  • Extract timestamped metadata with custom fields from your video (Pegasus 1.5 only)
  • Analyze videos longer than 1 hour (up to 2 hours)
  • Avoid blocking your application during video processing
  • Avoid blocking your application during video processing

Do not use this class for:

  • Videos under 1 hour for which you need immediate results or real-time streaming. Use the analyze method instead.

Video requirements:

  • Minimum duration: 4 seconds
  • Maximum duration: 2 hours
  • Formats: FFmpeg-supported formats
  • Resolution: 360x360 to 5184x2160 pixels
  • Aspect ratio: Between 1:2.4 and 2.4:1

On the Free plan, analysis hours count toward a shared limit that also covers indexing; the number of segment definitions does not affect this limit. On paid plans, you pay based on how much video you process and how many segment definitions you include. For examples, see the Frequently asked questions page.

Analyzing videos asynchronously requires three steps:

  1. Create an analysis task using the create method. The platform returns a task ID.
  2. Poll the task's status using the retrieve method until the status is ready or failed.
  3. When the status is ready, read the results from the same retrieve response.
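The steps above can be sketched as a small polling helper. The helper is generic: it only assumes the retrieve call returns an object with a status attribute, so you can pass in the SDK's retrieve method directly. How you construct the client object is not covered by this page, so the usage note below is illustrative.

```python
import time

def wait_for_ready(retrieve, task_id, interval=5.0, timeout=7200.0):
    """Poll `retrieve(task_id)` until the task is ready or failed.

    `retrieve` is any callable returning an object with a `status`
    attribute ("queued", "pending", "processing", "ready", or "failed").
    Raises TimeoutError if the task does not finish within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = retrieve(task_id)
        if task.status in ("ready", "failed"):
            return task
        time.sleep(interval)  # avoid hammering the API between polls
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

A hypothetical end-to-end flow: `task = client.analyze_async.tasks.create(video=..., prompt="Summarize this video.")`, then `done = wait_for_ready(client.analyze_async.tasks.retrieve, task.task_id)`, and finally read `done.result.data` if `done.status` is ready.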

Methods

List analysis tasks

Description: This method returns a list of the analysis tasks in your account. The platform returns your analysis tasks sorted by creation date, with the newest at the top of the list.

Function signature:

def list(
    self,
    *,
    page: typing.Optional[int] = None,
    page_limit: typing.Optional[int] = None,
    status: typing.Optional[AnalyzeTaskStatus] = None,
    video_url: typing.Optional[str] = None,
    asset_id: typing.Optional[str] = None,
    video_id: typing.Optional[str] = None,
    analysis_mode: typing.Optional[TasksListRequestAnalysisMode] = None,
    request_options: typing.Optional[RequestOptions] = None,
) -> TasksListResponse:

Parameters:

Name | Type | Required | Description
page | int | No | A number that identifies the page to retrieve. Default: 1.
page_limit | int | No | The number of items to return on each page. Default: 10. Max: 50.
status | AnalyzeTaskStatus | No | Filter analysis tasks by status. Values: queued, pending, processing, ready, failed.
video_url | str | No | Filter tasks by exact video source URL.
asset_id | str | No | Filter tasks by asset ID.
video_id | str | No | Filter tasks by video ID for pre-indexed videos (Pegasus 1.2 only). Deprecated: use asset_id instead.
analysis_mode | TasksListRequestAnalysisMode | No | Filter tasks by the analysis mode used when creating the task. Values: "general", "time_based_metadata".
request_options | RequestOptions | No | Request-specific configuration.

Return value: Returns a TasksListResponse object.

The TasksListResponse class contains the following properties:

Name | Type | Description
data | List[AnalyzeTaskResponse] | An array that contains up to page_limit analysis tasks.
page_info | PageInfo | An object that provides information about pagination.

The PageInfo class contains the following properties:

Name | Type | Description
limit_per_page | Optional[int] | The number of items returned per page.
page | Optional[int] | The current page number.
total_page | Optional[int] | The total number of pages.
total_results | Optional[int] | The total number of analysis tasks in your account.

For details about AnalyzeTaskResponse, see Retrieve task status and results.
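Because the response is paginated, collecting every task requires walking page_info.total_page. A minimal sketch, assuming only the data and page_info attributes documented above; pass the SDK's list method (or a bound wrapper) as list_tasks:

```python
def iter_all_tasks(list_tasks, page_limit=50, **filters):
    """Yield every analysis task across all pages.

    `list_tasks` is a callable with the signature of the `list` method
    above, returning an object with `data` and `page_info` attributes.
    Extra keyword arguments (status, video_url, ...) are passed through.
    """
    page = 1
    while True:
        response = list_tasks(page=page, page_limit=page_limit, **filters)
        yield from response.data
        total = response.page_info.total_page or page  # treat None as the last page
        if page >= total:
            break
        page += 1
```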

API Reference: List async analysis tasks

Create an async analysis task

Description: This method asynchronously analyzes your videos. It supports two modes: general analysis (prompt-based text generation) with both Pegasus 1.2 and Pegasus 1.5, and video segmentation with Pegasus 1.5.

Note

This method is rate-limited. For details, see the Rate limits page.

Function signature:

def create(
    self,
    *,
    video: VideoContext,
    model_name: typing.Optional[CreateAsyncAnalyzeRequestModelName] = OMIT,
    custom_id: typing.Optional[str] = OMIT,
    prompt: typing.Optional[AnalyzeTextPrompt] = OMIT,
    prompt_v_2: typing.Optional[AnalyzePromptV2] = OMIT,
    analysis_mode: typing.Optional[CreateAsyncAnalyzeRequestAnalysisMode] = OMIT,
    temperature: typing.Optional[AnalyzeTemperature] = OMIT,
    max_tokens: typing.Optional[int] = OMIT,
    response_format: typing.Optional[AsyncResponseFormat] = OMIT,
    min_segment_duration: typing.Optional[float] = OMIT,
    max_segment_duration: typing.Optional[float] = OMIT,
    start_time: typing.Optional[float] = OMIT,
    end_time: typing.Optional[float] = OMIT,
    request_options: typing.Optional[RequestOptions] = None,
) -> CreateAnalyzeTaskResponse:

Parameters:

Name | Type | Required | Description
model_name | CreateAsyncAnalyzeRequestModelName | No | The video understanding model to use for analysis. Values: "pegasus1.2" (general analysis: prompt-based text generation); "pegasus1.5" (general analysis with video clipping, structured prompts with reference images, extended token limits, and video segmentation). Default: "pegasus1.2".
custom_id | str | No | An optional identifier that you set when you create the task. Use this field to correlate tasks across responses, for example, to distinguish tasks by type or environment. Must match the pattern ^[a-zA-Z0-9_-]{1,64}$.
video | VideoContext | Yes | An object that specifies the source of the video. See VideoContext for details.
prompt | str | No | A text prompt that guides the model on the desired format or content. Works with both Pegasus 1.2 and Pegasus 1.5. To include reference images in your prompt, use the prompt_v_2 parameter instead (Pegasus 1.5 only). Mutually exclusive with the prompt_v_2 parameter.
prompt_v_2 | AnalyzePromptV2 | No | A structured prompt with <@name> placeholders for referencing images. Requires the model_name parameter set to "pegasus1.5". Not supported when the analysis_mode parameter is "time_based_metadata". Mutually exclusive with the prompt parameter. See AnalyzePromptV2.
analysis_mode | CreateAsyncAnalyzeRequestAnalysisMode | No | The analysis pipeline to use. Value: "time_based_metadata" (segments the video into time-based intervals and extracts metadata for each segment). Requires model_name set to "pegasus1.5" and response_format.type set to "segment_definitions".
temperature | float | No | Controls the randomness of the text output. Default: 0.2. Min: 0. Max: 1.
max_tokens | int | No | The maximum number of tokens to generate. For "pegasus1.2": Min: 1, Max: 4,096. For "pegasus1.5" general mode: Min: 512, Max: 65,536, Default: 4,096. For "pegasus1.5" time_based_metadata mode: Min: 2,048, Max: 65,536, Default: 32,768.
response_format | AsyncResponseFormat | No | Controls the response format. When you omit this parameter, you receive unstructured text.
min_segment_duration | float | No | Minimum duration for each extracted segment, in seconds. Prevents the model from creating very short segments. Requires model_name set to "pegasus1.5" and analysis_mode set to "time_based_metadata". Min: 2.
max_segment_duration | float | No | Maximum duration for each extracted segment, in seconds. Breaks long continuous sections into shorter segments. Must be greater than or equal to min_segment_duration. Requires model_name set to "pegasus1.5" and analysis_mode set to "time_based_metadata". Min: 2.
start_time | float | No | Start of the analysis window, in seconds. Use with end_time to analyze only a portion of the video. Requires model_name set to "pegasus1.5". If omitted, defaults to 0. The clip (end_time - start_time) must be at least 4 seconds. Mutually exclusive with response_format.segment_definitions[].time_ranges.
end_time | float | No | End of the analysis window, in seconds. Use with start_time to analyze only a portion of the video. Requires model_name set to "pegasus1.5". If omitted, defaults to the video duration. The clip (end_time - start_time) must be at least 4 seconds. Mutually exclusive with response_format.segment_definitions[].time_ranges.
request_options | RequestOptions | No | Request-specific configuration.

VideoContext

The VideoContext type specifies the source of the video. Provide exactly one of the following:

Class | Field | Type | Description
VideoContext_Url | url | str | The publicly accessible URL of the video file. Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported.
VideoContext_AssetId | asset_id | str | The unique identifier of an asset from a direct or multipart upload. The asset status must be ready. Use assets.retrieve to check the status.
VideoContext_Base64String | base64_string | str | The base64-encoded video data. The maximum size is 30 MB.
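The exactly-one-of constraint is easy to get wrong when building requests dynamically. The sketch below models a video source as a plain dict and validates the constraint before calling the API; the key names mirror the fields above, but the dict representation itself is an assumption for illustration (the real SDK uses the typed variant classes VideoContext_Url, VideoContext_AssetId, and VideoContext_Base64String).

```python
def validate_video_context(ctx: dict) -> str:
    """Check that exactly one video source key is set and return its name.

    `ctx` is a plain-dict stand-in for VideoContext; keys with falsy
    values count as absent.
    """
    present = [k for k in ("url", "asset_id", "base64_string") if ctx.get(k)]
    if len(present) != 1:
        raise ValueError(f"provide exactly one video source, got {present or 'none'}")
    return present[0]
```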

The AsyncResponseFormat class contains the following properties:

Name | Type | Description
type | AsyncResponseFormatType | The response format to use. Values: "json_schema" (structured JSON conforming to a provided schema), "segment_definitions" (timestamped metadata with custom fields; requires model_name set to "pegasus1.5" and analysis_mode set to "time_based_metadata").
json_schema | Optional[Dict[str, Optional[Any]]] | Contains the JSON schema that defines the response structure. The schema must adhere to the JSON Schema Draft 2020-12 specification. For details, see the json_schema parameter in the API Reference section.
segment_definitions | Optional[List[SegmentDefinition]] | Defines the types of segments to extract from your video. Required when type is "segment_definitions". Minimum 1, maximum 10 definitions. See SegmentDefinition for details.
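As a hedged illustration, a response_format payload requesting structured JSON might look like the following. Only the outer shape (type plus json_schema) comes from the table above; the schema fields "summary" and "topics" are invented for the example.

```python
# A minimal response_format payload requesting structured JSON output.
# The schema follows JSON Schema Draft 2020-12; the "summary" and
# "topics" fields are example choices, not part of the API.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "type": "object",
        "properties": {
            "summary": {"type": "string"},
            "topics": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["summary"],
    },
}
```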

AnalyzePromptV2

The AnalyzePromptV2 class defines a structured prompt with image references.

Name | Type | Required | Description
input_text | str | Yes | The text of the prompt. Use <@name> placeholders to reference images declared in media_sources (example: "Is there a <@tiger-1> in the video?"). The maximum length is 2,000 tokens.
media_sources | Optional[List[SmeMediaSource]] | No | Reference images for the <@name> placeholders in the prompt. Maximum 4 sources. See SmeMediaSource.
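A common failure mode is a <@name> placeholder with no matching entry in media_sources. The check below catches that before the request is sent; the placeholder character set in the regular expression is an assumption (the documentation does not specify which characters a name may contain), and media sources are modeled as plain dicts with a "name" key.

```python
import re

def check_placeholders(input_text: str, media_sources: list) -> None:
    """Verify every <@name> placeholder in the prompt has a media source.

    `media_sources` is a list of dicts with at least a "name" key,
    mirroring the SmeMediaSource `name` field.
    """
    # Assumed name pattern: letters, digits, underscore, hyphen.
    names = set(re.findall(r"<@([A-Za-z0-9_-]+)>", input_text))
    declared = {source["name"] for source in media_sources}
    missing = names - declared
    if missing:
        raise ValueError(f"placeholders without media sources: {sorted(missing)}")
```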

SegmentDefinition

The SegmentDefinition class defines a type of segment to extract from the video.

Name | Type | Required | Description
id | str | Yes | A unique identifier for this segment definition.
description | str | Yes | Describes what this type of segment looks like in the video. The model uses this text to identify matching segments.
fields | Optional[List[SegmentField]] | No | Custom fields to extract for each segment instance. Maximum 20 fields. See SegmentField for details.
media_sources | Optional[List[SmeMediaSource]] | No | Reference images that help the model identify segments. Maximum 4 sources. See SmeMediaSource for details.

SegmentField

The SegmentField class defines a custom field to extract for each segment.

Name | Type | Required | Description
name | str | Yes | The name of the field.
type | SegmentFieldType | Yes | The data type of the field. Values: "string", "boolean", "number", "integer", "array".
description | Optional[str] | No | Instructions that guide the model on what this field should contain and how to extract it from the video.
enum | Optional[List[str]] | No | Allowed values for this field. Maximum 50 values.
items | Optional[SegmentFieldItems] | No | Required when type is "array". Specifies the type of array elements. The items object has a single property, type, with values: "string", "number", "boolean", "integer".

SmeMediaSource

The SmeMediaSource class defines a reference image that provides visual context for segment identification. Provide exactly one of url, asset_id, or base64_string.

Name | Type | Required | Description
name | str | Yes | A descriptive name for this media source.
media_type | SmeMediaSourceMediaType | Yes | The media type. Value: "image".
url | Optional[str] | No | A publicly accessible HTTPS URL of the image.
asset_id | Optional[str] | No | The unique identifier of an uploaded asset.
base64_string | Optional[str] | No | Base64-encoded image data. The maximum size is 30 MB.
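Putting SegmentDefinition, SegmentField, and the limits above together, the sketch below models a segment_definitions payload as plain dicts and validates the documented constraints client-side. The "goal" definition is an invented example, and the dict representation is an assumption; the real SDK uses typed classes.

```python
def validate_segment_definitions(definitions: list) -> None:
    """Enforce the documented limits on a segment_definitions payload.

    Checks: 1-10 definitions, id and description present, at most 20
    fields, array fields carry items, at most 50 enum values, and at
    most 4 media sources per definition.
    """
    if not 1 <= len(definitions) <= 10:
        raise ValueError("between 1 and 10 segment definitions are allowed")
    for definition in definitions:
        if not definition.get("id") or not definition.get("description"):
            raise ValueError("each definition needs an id and a description")
        fields = definition.get("fields", [])
        if len(fields) > 20:
            raise ValueError("at most 20 fields per definition")
        for field in fields:
            if field.get("type") == "array" and "items" not in field:
                raise ValueError(f"array field {field['name']!r} requires items")
            if len(field.get("enum", [])) > 50:
                raise ValueError("at most 50 enum values per field")
        if len(definition.get("media_sources", [])) > 4:
            raise ValueError("at most 4 media sources per definition")

# Example definition for a hypothetical sports highlight extractor:
highlight = {
    "id": "goal",
    "description": "A segment where a team scores a goal.",
    "fields": [
        {"name": "team", "type": "string"},
        {"name": "players", "type": "array", "items": {"type": "string"}},
    ],
}
```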

Return value: Returns a CreateAnalyzeTaskResponse object containing the task details.

The CreateAnalyzeTaskResponse class contains the following properties:

Name | Type | Description
task_id | str | The unique identifier of the analysis task.
status | AnalyzeTaskStatus | The initial status of the task. Value: queued.

API Reference: Create an async analysis task

Retrieve task status and results

Description: This method retrieves the status and results of an analysis task.

Task statuses:

  • queued: The platform has received the task and placed it in the queue.
  • pending: The task is waiting to start processing.
  • processing: The platform is analyzing the video.
  • ready: Processing is complete. Results are available in the response.
  • failed: The task failed. No results were generated.

Call this method repeatedly until the status is ready or failed. When the status is ready, read the results from the response.

Function signature:

def retrieve(
    self,
    task_id: str,
    *,
    request_options: typing.Optional[RequestOptions] = None,
) -> AnalyzeTaskResponse:

Parameters:

Name | Type | Required | Description
task_id | str | Yes | The unique identifier of the analysis task.
request_options | RequestOptions | No | Request-specific configuration.

Return value: Returns an AnalyzeTaskResponse object containing the task status and results.

The AnalyzeTaskResponse class contains the following properties:

Name | Type | Description
task_id | str | The unique identifier of the analysis task.
video_source | Optional[AnalyzeTaskResponseVideoSource] | The video source you provided. Only present for tasks that use direct video input (url, base64_string, or asset_id).
request_params | Optional[AnalyzeTaskResponseRequestParams] | The parameters you sent when creating this task. Only present for tasks created with model_name set to "pegasus1.5".
status | AnalyzeTaskStatus | The current status of the task. Values: queued, pending, processing, ready, failed.
created_at | datetime | The date and time when the task was created, in RFC 3339 format.
completed_at | Optional[datetime] | The date and time when the task completed or failed, in RFC 3339 format. The platform returns this field only when status is ready or failed.
result | Optional[AnalyzeTaskResult] | An object that contains the generated text and additional information. The platform returns this object only when status is ready.
error | Optional[AnalyzeTaskError] | Details about why the task failed. The platform returns this object only when status is failed.
webhooks | Optional[List[AnalyzeTaskWebhookInfo]] | The delivery status of each webhook endpoint. The platform omits this field when no webhooks are configured.

The AnalyzeTaskResponseVideoSource class contains the following properties:

Name | Type | Description
type | Optional[AnalyzeTaskResponseVideoSourceType] | The type of video source. Values: "url", "base64_string", "asset_id", "video_id".
url | Optional[str] | The video URL. Present when type is "url".
asset_id | Optional[str] | The asset ID. Present when type is "asset_id".
video_id | Optional[str] | The video ID. Present when type is "video_id". Deprecated: use asset_id instead.
system_metadata | Optional[AnalyzeTaskResponseVideoSourceSystemMetadata] | System-extracted video metadata. Present on a best-effort basis once the video has been processed.

The AnalyzeTaskResponseVideoSourceSystemMetadata class contains the following properties:

Name | Type | Description
duration | Optional[float] | The video duration in seconds.

The AnalyzeTaskResponseRequestParams class contains the following properties:

Name | Type | Description
analysis_mode | Optional[AnalyzeTaskResponseRequestParamsAnalysisMode] | The analysis pipeline used for this task. Values: "general", "time_based_metadata".
response_format | Optional[AnalyzeTaskResponseRequestParamsResponseFormat] | The response format you configured. Present only when you included it in the request.
temperature | Optional[float] | The temperature value used for the analysis.
max_tokens | Optional[int] | The maximum number of tokens for the response.
min_segment_duration | Optional[float] | The minimum segment duration you set, in seconds. Present when analysis_mode is "time_based_metadata".
max_segment_duration | Optional[float] | The maximum segment duration you set, in seconds. Present when analysis_mode is "time_based_metadata".

The AnalyzeTaskResult class contains the following properties:

Name | Type | Description
generation_id | str | The unique identifier for the generation session.
data | str | The generated text for this analysis task. When analysis_mode is not set, this is a plain-text string based on the prompt you provided. When analysis_mode is "time_based_metadata", this is a JSON-encoded string keyed by segment definition id; each key maps to an array of segment objects with start_time (number), end_time (number), and metadata (an object with your custom fields).
finish_reason | FinishReason | The reason the generation stopped. Values: null (generation has not finished), stop (the model reached the end of the response).
usage | AnalyzeTaskResultUsage | The number of tokens used in the generation.
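When analysis_mode is "time_based_metadata", result.data arrives as a JSON-encoded string rather than a parsed object. A small decoding sketch, assuming only the structure described above:

```python
import json

def parse_segments(data: str) -> dict:
    """Decode a time_based_metadata result into {definition_id: [segments]}.

    Each segment is a dict with start_time, end_time, and metadata,
    as described for the `data` property. Segments are returned in
    chronological order within each definition.
    """
    segments = json.loads(data)
    for items in segments.values():
        items.sort(key=lambda s: s["start_time"])  # order chronologically
    return segments
```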

The AnalyzeTaskResultUsage class contains the following properties:

Name | Type | Description
output_tokens | int | The number of tokens in the generated text.
input_tokens | Optional[int] | The number of tokens consumed by the input (prompt and video). Omitted for Pegasus 1.5.

The AnalyzeTaskError class contains the following properties:

Name | Type | Description
message | str | A message that describes why the task failed.

API Reference: Retrieve analysis task status and results

Delete an analysis task

Description: This method deletes an analysis task. You can only delete tasks that are not currently being processed.

Function signature:

def delete(
    self,
    task_id: str,
    *,
    request_options: typing.Optional[RequestOptions] = None,
) -> None:

Parameters:

Name | Type | Required | Description
task_id | str | Yes | The unique identifier of the analysis task.
request_options | RequestOptions | No | Request-specific configuration.

Return value: Returns None. If successful, the platform returns a 204 No Content response.
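Since only tasks that are not currently being processed can be deleted, a cleanup routine should filter by status first. A minimal sketch; `tasks` can come from the list method and `delete` is the method above (both are passed in as callables so the sketch stays SDK-agnostic):

```python
def delete_finished(tasks, delete):
    """Delete tasks whose status is ready or failed; return the deleted IDs.

    `tasks` is an iterable of objects with `task_id` and `status`
    attributes (for example, the `data` list from the list method);
    `delete` is a callable like the `delete` method above.
    """
    removed = []
    for task in tasks:
        if task.status in ("ready", "failed"):  # never delete in-flight tasks
            delete(task.task_id)
            removed.append(task.task_id)
    return removed
```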

API Reference: Delete an analysis task