Summaries, chapters, and highlights

This guide shows how you can generate summaries, chapters, and highlights from videos using pre-defined formats and optional prompts for customization.

  • Summaries are concise overviews capturing the key points, adaptable into formats like paragraphs, emails, or bullet points based on your prompt.
  • Chapters offer a chronological breakdown of the video, with timestamps, headlines, and summaries for each section.
  • Highlights list the most significant events chronologically, including timestamps and brief descriptions.

Below are some examples of how to guide the platform in generating content tailored to your needs.

Content type                | Prompt example
Specify the target audience | Generate a summary suitable for a high school audience studying environmental science.
Adjust the tone             | Generate a light-hearted and humorous chapter breakdown of this documentary.
Indicate length constraints | Provide a summary fit for a Twitter post under 280 characters.
Customize text format       | Generate a summary in no more than 5 bullet points.
Specify the purpose         | Summarize this video from a marketer’s perspective, focusing on brand mentions and product placements.
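
For instance, assuming you already have a client and an indexed video (see the complete example below for how to obtain a video ID), a prompt like the ones above is passed through the prompt parameter of generate.summarize. The video ID shown here is a placeholder:

from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Pass one of the prompt examples above through the prompt parameter
res = client.generate.summarize(
    video_id="<YOUR_VIDEO_ID>",
    type="summary",
    prompt="Generate a summary in no more than 5 bullet points.",
)
print(res.summary)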

Prerequisites

  • To use the platform, you need an API key:

    1. If you don’t have an account, sign up for a free account.
    2. Go to the API Key page.
    3. Select the Copy icon next to your key.

  • Ensure the TwelveLabs SDK is installed on your computer:

    $ pip install twelvelabs
  • The videos you wish to use must meet the following requirements:

    • Video resolution: Must be at least 360x360 and must not exceed 3840x2160.
    • Aspect ratio: Must be one of 1:1, 4:3, 4:5, 5:4, 16:9, or 9:16.
    • Video and audio formats: Your video files must be encoded in the video and audio formats listed on the FFmpeg Formats Documentation page. For videos in other formats, contact us at support@twelvelabs.io.
    • Duration: Must be between 4 seconds and 60 minutes (3,600 seconds). In a future release, the maximum duration will be 2 hours (7,200 seconds).
    • File size: Must not exceed 2 GB.
      If you require different options, contact us at support@twelvelabs.io.
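
If you want a quick way to verify the resolution and duration requirements listed above before uploading, the sketch below uses ffprobe (part of FFmpeg) through Python's subprocess module. The thresholds mirror the list above; the resolution check is a simplified approximation:

import json
import subprocess

def check_video(path: str) -> None:
    # Query the first video stream and the container duration with ffprobe
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", "v:0",
         "-show_entries", "stream=width,height:format=duration",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    width = info["streams"][0]["width"]
    height = info["streams"][0]["height"]
    duration = float(info["format"]["duration"])

    # Simplified checks based on the requirements above
    if width < 360 or height < 360:
        raise ValueError("Resolution must be at least 360x360")
    if width * height > 3840 * 2160:
        raise ValueError("Resolution must not exceed 3840x2160")
    if not 4 <= duration <= 3600:
        raise ValueError("Duration must be between 4 seconds and 60 minutes")

check_video("<YOUR_VIDEO_FILE>")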

Complete example

This complete example shows how to create an index, upload a video, and generate summaries, chapters, and highlights. Ensure you replace the placeholders surrounded by <> with your values.

from twelvelabs import TwelveLabs
from twelvelabs.models.task import Task

# 1. Initialize the client
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# 2. Create an index
models = [
    {
        "name": "pegasus1.2",
        "options": ["visual", "audio"]
    }
]
index = client.index.create(name="<YOUR_INDEX_NAME>", models=models)
print(f"Index created: id={index.id}, name={index.name}")

# 3. Upload a video
task = client.task.create(index_id=index.id, file="<YOUR_VIDEO_FILE>")
print(f"Task id={task.id}, Video id={task.video_id}")

# 4. Monitor the indexing process
def on_task_update(task: Task):
    print(f"  Status={task.status}")

task.wait_for_done(sleep_interval=5, callback=on_task_update)
if task.status != "ready":
    raise RuntimeError(f"Indexing failed with status {task.status}")
print(f"The unique identifier of your video is {task.video_id}.")

# 5. Generate summaries, chapters, and highlights
res_summary = client.generate.summarize(
    video_id=task.video_id, type="summary", prompt="<YOUR_PROMPT>")
res_chapters = client.generate.summarize(
    video_id=task.video_id, type="chapter", prompt="<YOUR_PROMPT>")
res_highlights = client.generate.summarize(
    video_id=task.video_id, type="highlight", prompt="<YOUR_PROMPT>")

# 6. Process the results
print(f"Summary= {res_summary.summary}")

for chapter in res_chapters.chapters:
    print(
        f"""Chapter {chapter.chapter_number},
start={chapter.start},
end={chapter.end}
Title: {chapter.chapter_title}
Summary: {chapter.chapter_summary}
"""
    )

for highlight in res_highlights.highlights:
    print(
        f"Highlight: {highlight.highlight}, start: {highlight.start}, end: {highlight.end}")

Step-by-step guide

1. Import the SDK and initialize the client

Create a client instance to interact with the TwelveLabs Video Understanding platform.
Function call: You call the constructor of the TwelveLabs class.
Parameters:

  • api_key: The API key to authenticate your requests to the platform.

Return value: An object of type TwelveLabs configured for making API calls.
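
As a small variation on the hard-coded key used in the complete example, you can read the key from an environment variable. The variable name TL_API_KEY below is an arbitrary choice, not something the SDK requires:

import os

from twelvelabs import TwelveLabs

# Read the API key from an environment variable instead of hard-coding it
client = TwelveLabs(api_key=os.environ["TL_API_KEY"])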

2. Specify the index containing your videos

Indexes help you organize and search through related videos efficiently. This example creates a new index, but you can also use an existing index by specifying its unique identifier. See the Indexes page for more details on creating an index.
Function call: You call the index.create function.
Parameters:

  • name: The name of the index.
  • models: A list of objects specifying your model configuration. This example enables the Pegasus video understanding model and the visual and audio model options.

Return value: An object containing, among other information, a field named id representing the unique identifier of the newly created index.
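
If you prefer to reuse an existing index rather than create a new one, you can skip the index.create call and pass the index's unique identifier directly in later calls. The identifier below is a placeholder:

# Reuse an existing index by its unique identifier instead of creating one
existing_index_id = "<YOUR_INDEX_ID>"
task = client.task.create(index_id=existing_index_id, file="<YOUR_VIDEO_FILE>")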

3. Upload videos

To perform any downstream tasks, you must first upload your videos, and the platform must finish processing them.
Function call: You call the task.create function. This starts a video indexing task, which is an object of type Task that tracks the status of your video upload and indexing process.
Parameters:

  • index_id: The unique identifier of your index.
  • file or url: The path or the publicly accessible URL of your video file.

Return value: An object of type Task containing, among other information, the following fields:

  • video_id: The unique identifier of your video.
  • status: The status of your video indexing task.
Note

You can also upload multiple videos in a single API call. For details, see the Cloud-to-cloud integrations page.
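
If your video is hosted at a publicly accessible URL, you can pass the url parameter instead of file, as described above:

# Upload from a publicly accessible URL instead of a local file
task = client.task.create(index_id=index.id, url="<YOUR_VIDEO_URL>")
print(f"Task id={task.id}, Video id={task.video_id}")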

4. Monitor the indexing process

The platform requires some time to index videos. Check the status of the video indexing task until it’s completed.
Function call: You call the task.wait_for_done function.
Parameters:

  • sleep_interval: The time interval, in seconds, between successive status checks. In this example, the method checks the status every five seconds.
  • callback: A callback function that the SDK executes each time it checks the status. Note that the callback function takes a parameter of type Task representing the video indexing task you’ve created in the previous step. Use it to display the status of your video indexing task.

Return value: An object containing, among other information, a field named status representing the status of your task. Wait until the value of this field is ready.
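
The callback gives you a convenient place to log progress. For example, the sketch below reports how long indexing has been running alongside the status; the elapsed-time tracking is just an illustration, not something the SDK provides:

import time

from twelvelabs.models.task import Task

start = time.monotonic()

def on_task_update(task: Task):
    # Print the task status together with the elapsed time in seconds
    print(f"  Status={task.status} (elapsed {time.monotonic() - start:.0f}s)")

task.wait_for_done(sleep_interval=5, callback=on_task_update)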

5. Generate summaries, chapters, and highlights

Function call: You call the generate.summarize method.
Parameters:

  • video_id: The unique identifier of the video for which you want to generate text.
  • type: The type of text you want to generate. It can take one of the following values: “summary”, “chapter”, or “highlight”.
  • (Optional) prompt: A string you can use to provide context for the summarization task. The maximum length of a prompt is 2,000 tokens. Example: “Generate chapters using casual and conversational language to match the vlogging style of the video.”
  • (Optional) temperature: A number that controls the randomness of the text. A higher value generates more creative text, while a lower value produces more deterministic text.

Return value: An object whose contents depend on the type parameter: a summary field (a string) for summaries, a chapters field (a list of objects) for chapters, or a highlights field (a list of objects) for highlights.
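
For example, combining a custom prompt with the optional temperature parameter might look like the sketch below. The value 0.2 is an arbitrary illustration of “lower value, more deterministic output”; check the API reference for the accepted range:

# Generate chapters with a custom prompt and a lower temperature
res_chapters = client.generate.summarize(
    video_id=task.video_id,
    type="chapter",
    prompt="Generate chapters using casual and conversational language.",
    temperature=0.2,
)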

6. Process the results

For summaries, you can print the result directly. For chapters and highlights, you must iterate over the list and print each item individually.
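
If you want timestamps that are easier to read, a small helper (not part of the SDK) can format the start and end values, which are assumed here to be expressed in seconds, as mm:ss:

def fmt(seconds: float) -> str:
    # Convert a value in seconds to an mm:ss string
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes:02d}:{secs:02d}"

for highlight in res_highlights.highlights:
    print(f"[{fmt(highlight.start)}-{fmt(highlight.end)}] {highlight.highlight}")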