This method analyzes your videos and returns the results directly in the response. It generates text based on your prompts and supports both Pegasus 1.2 and Pegasus 1.5 for general analysis (prompt-based text generation).
<Accordion title="Input requirements">
- Minimum duration: 4 seconds
- Maximum duration: 1 hour
- Formats: [FFmpeg supported formats](https://ffmpeg.org/ffmpeg-formats.html)
- Resolution: 360x360 to 5184x2160 pixels
- Aspect ratio: between 1:2.4 and 2.4:1 (1:1 falls within this range)
</Accordion>
**When to use this method**:
- Analyze videos up to 1 hour
- Retrieve immediate results without polling for task completion
- Stream text fragments in real time for immediate processing and feedback
**Do not use this method for**:
- Videos longer than 1 hour. Use the [`POST`](/v1.3/api-reference/analyze-videos/create-async-analysis-task) method of the `/analyze/tasks` endpoint instead.
- Video segmentation with custom segment definitions. Use the [`POST`](/v1.3/api-reference/analyze-videos/create-async-analysis-task) method of the `/analyze/tasks` endpoint with the `model_name` parameter set to `pegasus1.5` instead.
<Note title="Note">
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
</Note>
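As a minimal sketch, a synchronous request can be assembled as follows. The base URL and the `/analyze` endpoint path are assumptions inferred from the `/analyze/tasks` references on this page; verify them against your API version before use.

```python
import json

# Assumed base URL and endpoint path -- verify against your API version.
API_BASE = "https://api.twelvelabs.io/v1.3"

def build_analyze_request(api_key: str, video_id: str, prompt: str) -> dict:
    """Build a synchronous analysis request (a sketch, not an official client)."""
    return {
        "url": f"{API_BASE}/analyze",
        "headers": {
            "x-api-key": api_key,               # see the Authentication section
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model_name": "pegasus1.2",         # the default model
            "video_id": video_id,               # pegasus1.2 only
            "prompt": prompt,
            "stream": False,                    # set to True for NDJSON streaming
        }),
    }
```

Send the built request with any HTTP client, for example `requests.post(req["url"], headers=req["headers"], data=req["body"])`.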
Authentication
`x-api-key` (string)
Your API key.
<Note title="Note">
You can find your API key on the <a href="https://playground.twelvelabs.io/dashboard/api-keys" target="_blank">API Keys</a> page.
</Note>
Request
This endpoint expects an object.
`model_name` (enum, optional, defaults to `pegasus1.2`)
The video understanding model to use for analysis.
- `pegasus1.2`: General analysis (prompt-based text generation).
- `pegasus1.5`: General analysis (prompt-based text generation) with video clipping, structured prompts with reference images, extended token limits, and video segmentation (async only). Does not support `analysis_mode=time_based_metadata` or `response_format.type=segment_definitions` — use the [`POST`](/v1.3/api-reference/analyze-videos/create-async-analysis-task) method of the `/analyze/tasks` endpoint instead.
**Default:** `pegasus1.2`
**Allowed values:** `pegasus1.2`, `pegasus1.5`
`video_id` (string, optional)
The unique identifier of the video to analyze. Use this parameter when the `model_name` parameter is `pegasus1.2`. Not supported with `pegasus1.5`.
<Info> This parameter will be deprecated and removed in a future version. Use the [`video`](/v1.3/api-reference/analyze-videos/sync-analysis#request.body.video) parameter instead.</Info>
`video` (object, optional)
An object specifying the source of the video content. Include exactly one source.
`prompt` (string, optional)
A text prompt that guides the model on the desired format or content. Works with both Pegasus 1.2 and Pegasus 1.5. To include reference images in your prompt, use the `prompt_v2` parameter instead (Pegasus 1.5 only). Mutually exclusive with the `prompt_v2` parameter.
`prompt_v2` (object, optional)
A structured prompt with `<@name>` placeholders for referencing images. Requires the `model_name` parameter set to `pegasus1.5`. Mutually exclusive with the `prompt` parameter.
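The sketch below shows how `<@name>` placeholders pair with named reference images. The field names (`template`, `images`, `name`, `url`) are illustrative assumptions, not confirmed API fields; consult the request schema for the actual shape of the `prompt_v2` object.

```python
import re

# Hypothetical prompt_v2 payload -- field names are illustrative, not confirmed.
prompt_v2 = {
    "template": "Does the person in the video wear the same jacket as <@jacket>?",
    "images": [
        # Each image's name is what a <@name> placeholder refers to.
        {"name": "jacket", "url": "https://example.com/jacket.png"},
    ],
}

# Sanity check: every <@name> placeholder should match a named image.
placeholders = set(re.findall(r"<@(\w+)>", prompt_v2["template"]))
names = {img["name"] for img in prompt_v2["images"]}
assert placeholders <= names
```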
`temperature` (double, optional)
Controls the randomness of the text output.
**Default:** `0.2` **Min:** `0` **Max:** `1`
`stream` (boolean, optional, defaults to `true`)
Set this parameter to `true` to enable streaming responses in the <a href="https://github.com/ndjson/ndjson-spec" target="_blank">NDJSON</a> format.
**Default:** `true`
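NDJSON streams carry one JSON object per line, so fragments can be consumed as they arrive. In this sketch the event field names (`event_type`, `text`) are assumptions; verify them against the streaming response schema.

```python
import json

def read_ndjson_stream(lines):
    """Yield parsed events from an NDJSON stream, one JSON object per line."""
    for line in lines:
        line = line.strip()
        if line:                      # skip blank keep-alive lines
            yield json.loads(line)

# Simulated stream body; real events come from the HTTP response.
# The event_type/text field names are assumptions -- check the schema.
sample = [
    '{"event_type": "text_generation", "text": "A cyclist "}',
    '{"event_type": "text_generation", "text": "crosses the finish line."}',
]
full_text = "".join(
    e["text"]
    for e in read_ndjson_stream(sample)
    if e.get("event_type") == "text_generation"
)
```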
`response_format` (object, optional)
Specifies the format of the response. When you omit this parameter, the platform returns unstructured text. Only the `json_schema` type is supported for synchronous analysis.
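For structured output, a `json_schema` response format typically wraps a standard JSON Schema describing the desired shape. The wrapper keys below (`type`, `json_schema`) follow common API conventions and are assumptions here; confirm them in the request schema.

```python
# Hypothetical response_format payload -- wrapper keys are assumptions.
response_format = {
    "type": "json_schema",   # the only supported type for synchronous analysis
    "json_schema": {         # a standard JSON Schema the output must conform to
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "topics": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "topics"],
    },
}
```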
`max_tokens` (integer, optional, 1 to 65,536)
The maximum number of tokens to generate. The allowed range depends on the model:
| Model | Min | Max | Default |
|-------|-----|-----|---------|
| Pegasus 1.2 | 1 | 4,096 | 4,096 |
| Pegasus 1.5 | 512 | 65,536 | 4,096 |
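The per-model ranges in the table above can be encoded client-side to avoid rejected requests. This helper simply restates the documented values:

```python
# (min, max, default) max_tokens per model, per the table above.
TOKEN_LIMITS = {
    "pegasus1.2": (1, 4_096, 4_096),
    "pegasus1.5": (512, 65_536, 4_096),
}

def clamp_max_tokens(model_name: str, requested: int) -> int:
    """Clamp a requested max_tokens value into the model's allowed range."""
    lo, hi, _default = TOKEN_LIMITS[model_name]
    return max(lo, min(hi, requested))
```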
`start_time` (double, optional)
Start of the analysis window, in seconds. Use with `end_time` to analyze only a portion of the video. Requires `model_name` set to `pegasus1.5`.
<Note title="Notes">
- If omitted, defaults to `0`.
- Must be less than `end_time` and less than the video duration. The clip (`end_time - start_time`) must be at least `4` seconds.
</Note>
`end_time` (double, optional)
End of the analysis window, in seconds. Use with `start_time` to analyze only a portion of the video. Requires `model_name` set to `pegasus1.5`.
<Note title="Notes">
- If omitted, defaults to the video duration.
- Must be greater than `start_time` and less than or equal to the video duration. The clip (`end_time - start_time`) must be at least `4` seconds.
</Note>
Response
The specified video has successfully been analyzed.
<Note title="Note">
The maximum response length is 4,096 tokens for Pegasus 1.2 and up to 65,536 tokens for Pegasus 1.5. Set the `max_tokens` parameter to change this limit.
</Note>