Summaries, chapters, and highlights

Deprecation notice

The method described on this page will be deprecated on February 15, 2026. Use structured JSON responses instead. For migration instructions, see the Release notes page.

This guide shows how you can use the Analyze API to generate summaries, chapters, and highlights from videos using pre-defined formats and optional prompts for customization.

  • Summaries are concise overviews capturing the key points, adaptable into formats like paragraphs, emails, or bullet points based on your prompt.
  • Chapters offer a chronological breakdown of the video, with timestamps, headlines, and summaries for each section.
  • Highlights list the most significant events chronologically, including timestamps and brief descriptions.

Below are some examples of how to guide the platform in generating content tailored to your needs.

  • Specify the target audience: “Generate a summary suitable for a high school audience studying environmental science.”
  • Adjust the tone: “Generate a light-hearted and humorous chapter breakdown of this documentary.”
  • Indicate length constraints: “Provide a summary fit for a Twitter post under 280 characters.”
  • Customize the text format: “Generate a summary in no more than 5 bullet points.”
  • Specify the purpose: “Summarize this video from a marketer’s perspective, focusing on brand mentions and product placements.”
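
A minimal sketch of how you might keep these example prompts organized in code and select one to pass to the platform. The `example_prompts` dictionary and the `pick_prompt` helper are hypothetical conveniences, not part of the SDK; the prompt strings are the ones shown above.

```python
# Example prompts from the table above, keyed by customization goal (hypothetical helper).
example_prompts = {
    "audience": "Generate a summary suitable for a high school audience studying environmental science.",
    "tone": "Generate a light-hearted and humorous chapter breakdown of this documentary.",
    "length": "Provide a summary fit for a Twitter post under 280 characters.",
    "format": "Generate a summary in no more than 5 bullet points.",
    "purpose": "Summarize this video from a marketer's perspective, focusing on brand mentions and product placements.",
}

def pick_prompt(goal: str) -> str:
    """Return the example prompt for a given customization goal."""
    return example_prompts[goal]

# You would then pass the selected prompt to the summarize call, for example:
# client.summarize(video_id="<YOUR_VIDEO_ID>", type="summary", prompt=pick_prompt("length"))
```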

Prerequisites

  • To use the platform, you need an API key:

    1. If you don’t have an account, sign up for a free account.
    2. Go to the API Keys page.
    3. Select the Copy icon next to your key.

  • Depending on the programming language you are using, install the TwelveLabs SDK by entering the following command:

    $ pip install twelvelabs

  • Ensure that your video files meet the requirements.

Complete example

This complete example shows how to create an index, upload a video, and generate summaries, chapters, and highlights from it. Replace the placeholders surrounded by <> with your values.

from twelvelabs import TwelveLabs
from twelvelabs.indexes import IndexesCreateRequestModelsItem
from twelvelabs.tasks import TasksRetrieveResponse

# 1. Initialize the client
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# 2. Create an index
# An index is a container for organizing your video content
index = client.indexes.create(
    index_name="<YOUR_INDEX_NAME>",
    models=[
        IndexesCreateRequestModelsItem(
            model_name="pegasus1.2", model_options=["visual", "audio"]
        )
    ],
)
print(f"Created index: id={index.id}")

# 3. Upload a video
task = client.tasks.create(
    index_id=index.id,
    video_url="<YOUR_VIDEO_URL>",  # Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported.
    # Or, for a local file: video_file=open("<PATH_TO_VIDEO_FILE>", "rb")
)
print(f"Created task: id={task.id}")

# 4. Monitor the indexing process
def on_task_update(task: TasksRetrieveResponse):
    print(f"  Status={task.status}")

task = client.tasks.wait_for_done(sleep_interval=5, task_id=task.id, callback=on_task_update)
if task.status != "ready":
    raise RuntimeError(f"Indexing failed with status {task.status}")
print(f"Upload complete. The unique identifier of your video is {task.video_id}.")

# 5. Generate summaries, chapters, and highlights
res_summary = client.summarize(
    video_id=task.video_id,
    type="summary",
    # prompt="<YOUR_PROMPT>",
    # temperature=0.2
)
res_chapters = client.summarize(
    video_id=task.video_id,
    type="chapter",
    # prompt="<YOUR_PROMPT>",
    # temperature=0.2
)
res_highlights = client.summarize(
    video_id=task.video_id,
    type="highlight",
    # prompt="<YOUR_PROMPT>",
    # temperature=0.2
)

# 6. Process the results
print(f"Summary: {res_summary.summary}")
for chapter in res_chapters.chapters:
    print(
        f"Chapter {chapter.chapter_number}, "
        f"start={chapter.start_sec}, end={chapter.end_sec}\n"
        f"Title: {chapter.chapter_title}\n"
        f"Summary: {chapter.chapter_summary}\n"
    )
for highlight in res_highlights.highlights:
    print(f"Highlight: {highlight.highlight}, start: {highlight.start_sec}, end: {highlight.end_sec}")

Code explanation

1

Import the SDK and initialize the client

Create a client instance to interact with the TwelveLabs Video Understanding Platform.
Function call: You call the constructor of the TwelveLabs class.
Parameters:

  • api_key: The API key to authenticate your requests to the platform.

Return value: An object of type TwelveLabs configured for making API calls.

2

Create an index

Indexes store and organize your video data, allowing you to group related videos. This guide shows how to create one, but you can also use an existing index.
Function call: You call the indexes.create function.
Parameters:

  • index_name: The name of the index.
  • models: An array specifying your model configuration. This example enables the Pegasus video understanding model and specifies that it analyzes visual and audio modalities.

See the Indexes page for more details on creating an index and specifying the model configuration.

Return value: An object of type IndexesCreateResponse containing a field named id representing the unique identifier of the newly created index.

3

Upload a video

Upload a video to your index to make it available for analysis. For details about the available upload methods and the corresponding limits, see the Upload methods page.
Function call: You call the tasks.create function.
Parameters:

  • index_id: The unique identifier of the index to which the video is uploaded.
  • video_url or video_file: The publicly accessible URL of your video or an opened file object in binary read mode. This example uses video_url, which must be a direct link to a raw media file.

Return value: An object containing, among other information, a field named id representing the unique identifier of the video indexing task.

4

Monitor the indexing process

The platform requires some time to index videos. Monitor the status of the indexing process until it completes.
Function call: You call the tasks.wait_for_done function.
Parameters:

  • sleep_interval: The number of seconds to wait between successive status checks.
  • task_id: The unique identifier of your video indexing task.
  • (Optional) callback: A function invoked each time the platform checks the status.

Return value: An object of type TasksRetrieveResponse containing, among other information, a field named status representing the status of the indexing process. Wait until the value of this field is ready.
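
If you prefer to poll manually rather than using the SDK's blocking helper shown in the complete example, the loop below sketches one way to do it. The set of terminal status values other than "ready" is an assumption for illustration, as is the use of `client.tasks.retrieve`; check the API reference for the exact status values your version reports.

```python
import time

# Statuses that end the indexing process; "ready" is the only successful outcome.
# The "failed" value is an assumption for illustration.
TERMINAL_STATUSES = {"ready", "failed"}

def is_done(status: str) -> bool:
    """Return True when the indexing task has reached a terminal status."""
    return status in TERMINAL_STATUSES

# Manual polling loop (assumes `client` and `task` from the complete example):
# while True:
#     task = client.tasks.retrieve(task_id=task.id)
#     if is_done(task.status):
#         break
#     time.sleep(5)
```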

5

Generate summaries, chapters, and highlights

Function call: You call the summarize method.
Parameters:

  • video_id: The unique identifier of the video for which you want to generate text.
  • type: The type of text you want to generate. It can take one of the following values: “summary”, “chapter”, or “highlight”.
  • (Optional) prompt: A string you can use to provide context for the summarization task. The maximum length of a prompt is 2,000 tokens. Example: “Generate chapters using casual and conversational language to match the vlogging style of the video.”
  • (Optional) temperature: A number that controls the randomness of the text. A higher value generates more creative text, while a lower value produces more deterministic text.

Return value: An object containing the generated content. The response type varies based on the type parameter:

  • When type is summary: Returns a Summary object with an id, summary text, and usage information
  • When type is chapter: Returns a Chapter object with an id, array of chapters, and usage information
  • When type is highlight: Returns a Highlight object with an id, array of highlights, and usage information
6

Process the results

For summaries, you can print the result directly. For chapters and highlights, iterate over the list and print each item individually.
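
When displaying chapters and highlights, you may want to render the start_sec and end_sec values as readable timestamps. The helper below is a hypothetical convenience, not part of the SDK:

```python
def format_timestamp(seconds: float) -> str:
    """Format a start_sec/end_sec value as MM:SS (hypothetical helper, not part of the SDK)."""
    total = int(seconds)
    minutes, secs = divmod(total, 60)
    return f"{minutes:02d}:{secs:02d}"

# You could then print each chapter as, for example:
# for chapter in res_chapters.chapters:
#     print(f"[{format_timestamp(chapter.start_sec)}-{format_timestamp(chapter.end_sec)}] {chapter.chapter_title}")
```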