Async analysis

The AnalyzeAsyncClient.TasksClient class provides methods to analyze videos asynchronously, generate text, and extract structured, timestamped segments. The platform supports two analysis modes: general analysis (prompt-based text generation) with Pegasus 1.2 and video segmentation with Pegasus 1.5.

When to use this class:

  • Generate custom text from your video using a prompt (Pegasus 1.2 only)
  • Extract timestamped metadata with custom fields from your video (Pegasus 1.5 only)
  • Analyze videos longer than 1 hour
  • Process videos up to 2 hours
  • Avoid blocking your application during video processing

Do not use this class for:

  • Videos under 1 hour for which you need immediate results or real-time streaming. Use the analyze method instead.

Video requirements:

  • Minimum duration: 4 seconds
  • Maximum duration: 2 hours
  • Formats: FFmpeg-supported formats
  • Resolution: 360x360 to 5184x2160 pixels
  • Aspect ratio: Between 1:1 and 1:2.4, or between 2.4:1 and 1:1

Analyzing videos asynchronously requires three steps:

  1. Create an analysis task using the create method. The platform returns a task ID.
  2. Poll the status of the task using the retrieve method until the status is ready or failed.
  3. When the status is ready, read the results from the retrieve response.
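The three steps above can be sketched as a small polling helper. This is a minimal sketch: the client object and its retrieve method are assumed to match the signatures documented in this section, and the helper name and timing values are illustrative, not part of the SDK.

```python
import time

def wait_for_task(tasks_client, task_id, interval_sec=5.0, timeout_sec=7200.0):
    """Poll an analysis task until its status is ready or failed."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        task = tasks_client.retrieve(task_id)
        if task.status in ("ready", "failed"):
            return task
        time.sleep(interval_sec)  # back off between polls
    raise TimeoutError(f"task {task_id} did not finish in {timeout_sec}s")
```

A caller would create the task first, then pass the returned task ID to this helper and inspect `task.result` when the status is ready.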

Methods

List analysis tasks

Description: This method returns a list of the analysis tasks in your account. The platform returns your analysis tasks sorted by creation date, with the newest at the top of the list.

Function signature and example:

def list(
    self,
    *,
    page: typing.Optional[int] = None,
    page_limit: typing.Optional[int] = None,
    status: typing.Optional[AnalyzeTaskStatus] = None,
    video_url: typing.Optional[str] = None,
    asset_id: typing.Optional[str] = None,
    analysis_mode: typing.Optional[TasksListRequestAnalysisMode] = None,
    request_options: typing.Optional[RequestOptions] = None,
) -> TasksListResponse:

Parameters:

  • page (int, optional): A number that identifies the page to retrieve. Default: 1.
  • page_limit (int, optional): The number of items to return on each page. Default: 10. Max: 50.
  • status (AnalyzeTaskStatus, optional): Filter analysis tasks by status. Values: queued, pending, processing, ready, failed.
  • video_url (str, optional): Filter tasks by exact video source URL.
  • asset_id (str, optional): Filter tasks by asset ID.
  • analysis_mode (TasksListRequestAnalysisMode, optional): Filter tasks by the analysis mode used when creating the task. Value: "time_based_metadata".
  • request_options (RequestOptions, optional): Request-specific configuration.

Return value: Returns a TasksListResponse object.

The TasksListResponse class contains the following properties:

  • data (List[AnalyzeTaskResponse]): An array that contains up to page_limit analysis tasks.
  • page_info (PageInfo): An object that provides information about pagination.

The PageInfo class contains the following properties:

  • limit_per_page (Optional[int]): The number of items returned per page.
  • page (Optional[int]): The current page number.
  • total_page (Optional[int]): The total number of pages.
  • total_results (Optional[int]): The total number of analysis tasks in your account.

For details about AnalyzeTaskResponse, see Retrieve task status and results.
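The pagination fields above can be combined to walk every task in an account. The sketch below assumes the list signature and the TasksListResponse/PageInfo shapes described in this section; the helper name is invented, and the client object stands in for an instance of this class.

```python
def iter_all_tasks(tasks_client, page_limit=50):
    """Yield every analysis task, page by page."""
    page = 1
    while True:
        resp = tasks_client.list(page=page, page_limit=page_limit)
        yield from resp.data
        total_pages = resp.page_info.total_page or 0
        if page >= total_pages:
            break
        page += 1
```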

API Reference: List async analysis tasks

Create an async analysis task

Description: This method asynchronously analyzes your videos. It supports two modes: general analysis (prompt-based text generation) with Pegasus 1.2 and video segmentation with Pegasus 1.5.

Note

This method is rate-limited. For details, see the Rate limits page.

Function signature and example:

def create(
    self,
    *,
    video: VideoContext,
    model_name: typing.Optional[CreateAsyncAnalyzeRequestModelName] = OMIT,
    prompt: typing.Optional[AnalyzeTextPrompt] = OMIT,
    analysis_mode: typing.Optional[CreateAsyncAnalyzeRequestAnalysisMode] = OMIT,
    temperature: typing.Optional[AnalyzeTemperature] = OMIT,
    max_tokens: typing.Optional[AnalyzeMaxTokens] = OMIT,
    response_format: typing.Optional[AsyncResponseFormat] = OMIT,
    min_segment_duration: typing.Optional[float] = OMIT,
    max_segment_duration: typing.Optional[float] = OMIT,
    request_options: typing.Optional[RequestOptions] = None,
) -> CreateAnalyzeTaskResponse:

Parameters:

  • model_name (CreateAsyncAnalyzeRequestModelName, optional): The video understanding model to use for analysis. Values: "pegasus1.2" (default; uses pre-indexed videos via video_id), "pegasus1.5" (uses url, asset_id, or base64_string for direct video analysis).
  • video (VideoContext, required): An object that specifies the source of the video. See VideoContext for details.
  • prompt (str, optional): A natural-language text that provides instructions for analyzing the video. Required for general-mode analysis. Not supported when analysis_mode is "time_based_metadata". The maximum length is 2,000 tokens.
  • analysis_mode (CreateAsyncAnalyzeRequestAnalysisMode, optional): The analysis pipeline to use. Value: "time_based_metadata" (segments the video into time-based intervals and extracts metadata for each segment). Requires model_name set to "pegasus1.5" and response_format.type set to "segment_definitions".
  • temperature (float, optional): Controls the randomness of the text output. A higher value generates more creative text, while a lower value produces more deterministic output. Default: 0.2. Min: 0. Max: 1.
  • max_tokens (int, optional): The maximum number of tokens to generate. For "pegasus1.2": Min: 1, Max: 4,096. For "pegasus1.5": Min: 2,048, Max: 32,768, Default: 32,768.
  • response_format (AsyncResponseFormat, optional): Controls the response format. When you omit this parameter, you receive unstructured text.
  • min_segment_duration (float, optional): The minimum duration for each extracted segment, in seconds. Prevents the model from creating very short segments. Requires model_name set to "pegasus1.5" and analysis_mode set to "time_based_metadata". Min: 2.
  • max_segment_duration (float, optional): The maximum duration for each extracted segment, in seconds. Breaks long continuous sections into shorter segments. Must be greater than or equal to min_segment_duration. Requires model_name set to "pegasus1.5" and analysis_mode set to "time_based_metadata". Min: 2.
  • request_options (RequestOptions, optional): Request-specific configuration.

VideoContext

The VideoContext type specifies the source of the video. Provide exactly one of the following:

  • VideoContext_Url (url: str): The publicly accessible URL of the video file. Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported.
  • VideoContext_AssetId (asset_id: str): The unique identifier of an asset from a direct or multipart upload. The asset status must be ready. Use assets.retrieve to check the status.
  • VideoContext_Base64String (base64_string: str): The base64-encoded video data. The maximum size is 30 MB.
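For the base64 variant, the payload can be prepared with the standard library. This is a sketch: the helper name is invented, and whether the 30 MB limit applies to the raw bytes or the encoded string is an assumption here, so verify against the API before relying on the check.

```python
import base64
from pathlib import Path

MAX_BYTES = 30 * 1024 * 1024  # assumed to apply to the raw file size

def video_to_base64(path):
    """Read a local video file and return its base64-encoded contents."""
    raw = Path(path).read_bytes()
    if len(raw) > MAX_BYTES:
        raise ValueError("video exceeds the 30 MB limit")
    return base64.b64encode(raw).decode("ascii")
```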

The AsyncResponseFormat class contains the following properties:

  • type (AsyncResponseFormatType): The response format to use. Values: "json_schema" (structured JSON conforming to a provided schema), "segment_definitions" (timestamped metadata with custom fields, requires model_name set to "pegasus1.5" and analysis_mode set to "time_based_metadata").
  • json_schema (Optional[Dict[str, Optional[Any]]]): Contains the JSON schema that defines the response structure. The schema must adhere to the JSON Schema Draft 2020-12 specification. For details, see the json_schema parameter in the API Reference section.
  • segment_definitions (Optional[List[SegmentDefinition]]): Define the types of segments to extract from your video. Required when type is "segment_definitions". Minimum 1, maximum 10 definitions. See SegmentDefinition for details.
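An illustrative response_format for the "json_schema" case, written as plain dicts. The outer shape follows the properties above; the schema itself (summary, topics) is invented for the example and must conform to JSON Schema Draft 2020-12.

```python
# Hypothetical schema asking the model for a summary plus a topic list.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "type": "object",
        "properties": {
            "summary": {"type": "string"},
            "topics": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["summary"],
    },
}
```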

SegmentDefinition

The SegmentDefinition class defines a type of segment to extract from the video.

  • id (str, required): A unique identifier for this segment definition.
  • description (str, required): Describe what this type of segment looks like in the video. The model uses this text to identify matching segments.
  • fields (List[SegmentField], optional): Custom fields to extract for each segment instance. Maximum 20 fields. See SegmentField for details.
  • media_sources (List[SmeMediaSource], optional): Reference images that help the model identify segments. Maximum 4 sources. See SmeMediaSource for details.

SegmentField

The SegmentField class defines a custom field to extract for each segment.

  • name (str, required): The name of the field.
  • type (SegmentFieldType, required): The data type of the field. Values: "string", "boolean", "number", "integer", "array".
  • description (str, optional): Instructions that guide the model on what this field should contain and how to extract it from the video.
  • enum (List[str], optional): Allowed values for this field. Maximum 50 values.
  • items (SegmentFieldItems, optional): Required when type is "array". Specifies the type of the array elements. The items object has a single property, type, with values: "string", "number", "boolean", "integer".

SmeMediaSource

The SmeMediaSource class defines a reference image that provides visual context for segment identification. Provide exactly one of url, asset_id, or base64_string.

  • name (str, required): A descriptive name for this media source.
  • media_type (SmeMediaSourceMediaType, required): The media type. Value: "image".
  • url (str, optional): A publicly accessible HTTPS URL of the image.
  • asset_id (str, optional): The unique identifier of an uploaded asset.
  • base64_string (str, optional): Base64-encoded image data. The maximum size is 30 MB.
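The SegmentDefinition and SegmentField shapes above can be expressed as plain dicts in a request body. The "rally" definition and its fields below are invented purely for illustration.

```python
# Hypothetical segment definition for a racket-sports video.
segment_definitions = [
    {
        "id": "rally",
        "description": "A continuous exchange of shots between two players",
        "fields": [
            {"name": "winner", "type": "string", "enum": ["player_a", "player_b"]},
            {"name": "shot_count", "type": "integer"},
            # `items` is required because this field's type is "array"
            {"name": "shot_types", "type": "array", "items": {"type": "string"}},
        ],
    }
]

response_format = {
    "type": "segment_definitions",
    "segment_definitions": segment_definitions,
}
```

This payload would be passed as the response_format argument to create, together with model_name "pegasus1.5" and analysis_mode "time_based_metadata".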

Return value: Returns a CreateAnalyzeTaskResponse object containing the task details.

The CreateAnalyzeTaskResponse class contains the following properties:

  • task_id (str): The unique identifier of the analysis task.
  • status (AnalyzeTaskStatus): The initial status of the task. Value: queued.

API Reference: Create an async analysis task

Retrieve task status and results

Description: This method retrieves the status and results of an analysis task.

Task statuses:

  • queued: The task is waiting to be processed.
  • pending: The task is queued and waiting to start.
  • processing: The platform is analyzing the video.
  • ready: Processing is complete. Results are available in the response.
  • failed: The task failed. No results were generated.

Call this method repeatedly until status is ready or failed. When status is ready, use the results from the response.

Function signature and example:

def retrieve(
    self,
    task_id: str,
    *,
    request_options: typing.Optional[RequestOptions] = None,
) -> AnalyzeTaskResponse:

Parameters:

  • task_id (str, required): The unique identifier of the analysis task.
  • request_options (RequestOptions, optional): Request-specific configuration.

Return value: Returns an AnalyzeTaskResponse object containing the task status and results.

The AnalyzeTaskResponse class contains the following properties:

  • task_id (str): The unique identifier of the analysis task.
  • video_source (Optional[AnalyzeTaskResponseVideoSource]): The video source you provided. Only present for tasks that use direct video input (url, base64_string, or asset_id).
  • request_params (Optional[AnalyzeTaskResponseRequestParams]): The parameters you sent when creating this task. Only present for tasks created with model_name set to "pegasus1.5".
  • status (AnalyzeTaskStatus): The current status of the task. Values: queued, pending, processing, ready, failed.
  • created_at (datetime): The date and time when the task was created, in RFC 3339 format.
  • completed_at (Optional[datetime]): The date and time when the task completed or failed, in RFC 3339 format. The platform returns this field only when status is ready or failed.
  • result (Optional[AnalyzeTaskResult]): An object that contains the generated text and additional information. The platform returns this object only when status is ready.
  • error (Optional[AnalyzeTaskError]): Details about why the task failed. The platform returns this object only when status is failed.
  • webhooks (Optional[List[AnalyzeTaskWebhookInfo]]): The delivery status of each webhook endpoint. The platform omits this field when there are no webhooks configured.

The AnalyzeTaskResponseVideoSource class contains the following properties:

  • type (Optional[AnalyzeTaskResponseVideoSourceType]): The type of video source. Values: "url", "base64_string", "asset_id", "video_id".
  • url (Optional[str]): The video URL. Present when type is "url".
  • asset_id (Optional[str]): The asset ID. Present when type is "asset_id".
  • video_id (Optional[str]): The video ID. Present when type is "video_id". Deprecated; use asset_id instead.
  • system_metadata (Optional[AnalyzeTaskResponseVideoSourceSystemMetadata]): System-extracted video metadata. Present on a best-effort basis once the video has been processed.

The AnalyzeTaskResponseVideoSourceSystemMetadata class contains the following properties:

  • duration (Optional[float]): The video duration in seconds.

The AnalyzeTaskResponseRequestParams class contains the following properties:

  • analysis_mode (Optional[AnalyzeTaskResponseRequestParamsAnalysisMode]): The analysis pipeline used for this task. Values: "general", "time_based_metadata".
  • response_format (Optional[AnalyzeTaskResponseRequestParamsResponseFormat]): The response format you configured. Present only when you included it in the request.
  • temperature (Optional[float]): The temperature value used for the analysis.
  • max_tokens (Optional[int]): The maximum number of tokens for the response.
  • min_segment_duration (Optional[float]): The minimum segment duration you set, in seconds. Present when analysis_mode is "time_based_metadata".
  • max_segment_duration (Optional[float]): The maximum segment duration you set, in seconds. Present when analysis_mode is "time_based_metadata".

The AnalyzeTaskResult class contains the following properties:

  • generation_id (str): The unique identifier for the generation session.
  • data (str): The generated text for this analysis task. When analysis_mode is not set, a plain-text string based on the prompt you provided. When analysis_mode is "time_based_metadata", a JSON-encoded string keyed by segment definition id. Each key maps to an array of segment objects with start_time (number), end_time (number), and metadata (an object with your custom fields).
  • finish_reason (FinishReason): The reason the generation stopped. Values: null (generation has not finished), stop (the model reached the end of the response).
  • usage (AnalyzeTaskResultUsage): The number of tokens used in the generation.
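Because data is a JSON-encoded string for "time_based_metadata" tasks, it must be decoded before use. A sketch of that decoding step follows; the sample payload is invented, but its shape (segment-definition ids mapping to arrays of segments) matches the description above.

```python
import json

# Invented sample of what `result.data` might contain for a task
# that used a segment definition with id "rally".
raw = (
    '{"rally": [{"start_time": 12.0, "end_time": 18.5,'
    ' "metadata": {"winner": "player_a"}}]}'
)

segments_by_definition = json.loads(raw)
for definition_id, segments in segments_by_definition.items():
    for seg in segments:
        duration = seg["end_time"] - seg["start_time"]
        print(definition_id, seg["start_time"], seg["end_time"], duration, seg["metadata"])
```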

The AnalyzeTaskResultUsage class contains the following properties:

  • output_tokens (int): The number of tokens in the generated text.
  • input_tokens (Optional[int]): The number of tokens in the input prompt.

The AnalyzeTaskError class contains the following properties:

  • message (str): A message that describes why the task failed.

API Reference: Retrieve analysis task status and results

Delete an analysis task

Description: This method deletes an analysis task. You can only delete tasks that are not currently being processed.

Function signature and example:

def delete(
    self,
    task_id: str,
    *,
    request_options: typing.Optional[RequestOptions] = None,
) -> None:

Parameters:

  • task_id (str, required): The unique identifier of the analysis task.
  • request_options (RequestOptions, optional): Request-specific configuration.

Return value: Returns None. If successful, the platform returns a 204 No Content response.

API Reference: Delete an analysis task
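The list and delete methods can be combined for cleanup, for example removing failed tasks. This sketch assumes the method shapes documented in this section; the helper name is invented, the client object stands in for an instance of this class, and only the first page of results is handled.

```python
def delete_failed_tasks(tasks_client):
    """Delete every failed task on the first page of list results."""
    resp = tasks_client.list(status="failed", page_limit=50)
    for task in resp.data:
        tasks_client.delete(task.task_id)
    return len(resp.data)
```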