Summaries, chapters, and highlights
Deprecation notice
The method described on this page will be deprecated on February 15, 2026. Use structured JSON responses instead. For migration instructions, see the Release notes page.
This guide shows how you can use the Analyze API to generate summaries, chapters, and highlights from videos using pre-defined formats and optional prompts for customization.
- Summaries are concise overviews capturing the key points, adaptable into formats like paragraphs, emails, or bullet points based on your prompt.
- Chapters offer a chronological breakdown of the video, with timestamps, headlines, and summaries for each section.
- Highlights list the most significant events chronologically, including timestamps and brief descriptions.
Below are some examples of how to guide the platform in generating content tailored to your needs.
Prerequisites
- To use the platform, you need an API key.
- Install the TwelveLabs SDK for the programming language you are using.
- Ensure your video files meet the requirements.
Complete example
This complete example shows how to create an index, upload a video, and analyze it to generate summaries, chapters, and highlights. Ensure you replace the placeholders surrounded by <> with your values.
Code explanation
Python
Node.js
Import the SDK and initialize the client
Create a client instance to interact with the TwelveLabs Video Understanding Platform.
Function call: You call the constructor of the TwelveLabs class.
Parameters:
api_key: The API key to authenticate your requests to the platform.
Return value: An object of type TwelveLabs configured for making API calls.
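The initialization step above can be sketched as follows. This is a minimal sketch, not the official example: it assumes the `twelvelabs` Python package and a `TWELVELABS_API_KEY` environment variable (the variable name is an assumption), and fails fast with a clear message when the key is missing.

```python
# Hedged sketch: initialize the TwelveLabs client from an environment
# variable. The TWELVELABS_API_KEY variable name is an assumption.
import os


def load_api_key(environ=os.environ, var="TWELVELABS_API_KEY"):
    """Fetch the API key, raising a clear error if it is not configured."""
    key = environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable to your API key.")
    return key


# Only construct the client when a key is actually configured, so the
# sketch can be read (and run) without the SDK installed.
if os.environ.get("TWELVELABS_API_KEY"):
    from twelvelabs import TwelveLabs

    client = TwelveLabs(api_key=load_api_key())
```

Reading the key from the environment keeps it out of source control; the helper simply makes the failure mode explicit when the variable is unset.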
Create an index
Indexes store and organize your video data, allowing you to group related videos. This guide shows how to create one, but you can also use an existing index.
Function call: You call the indexes.create function.
Parameters:
index_name: The name of the index.
models: An array specifying your model configuration. This example enables the Pegasus video understanding model and specifies that it analyzes visual and audio modalities.
See the Indexes page for more details on creating an index and specifying the model configuration.
Return value: An object of type IndexesCreateResponse containing a field named id representing the unique identifier of the newly created index.
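The model configuration described above can be sketched as plain data. The field names (`model_name`, `model_options`) and the model name string `"pegasus1.2"` are assumptions based on the explanation; verify them against the SDK reference for your account. The actual `indexes.create` call is shown commented out because it requires a configured client.

```python
# Hedged sketch of the `models` array for index creation. The key names
# and the "pegasus1.2" model string are assumptions; check the SDK docs.
def pegasus_model_config(model_name="pegasus1.2", options=("visual", "audio")):
    """Build a one-element models array enabling Pegasus on both modalities."""
    return [{"model_name": model_name, "model_options": list(options)}]


models = pegasus_model_config()
# index = client.indexes.create(index_name="<YOUR_INDEX_NAME>", models=models)
# index.id holds the unique identifier of the newly created index.
```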
Upload a video
Upload a video to create an asset. For details about the available upload methods and the corresponding limits, see the Upload methods page.
Function call: You call the assets.create function.
Parameters:
method: The upload method for your asset. Use url for a video at a publicly accessible URL or direct to upload a local file. This example uses url.
url or file: The publicly accessible URL of your video or an opened file object in binary read mode. This example uses url.
Return value: An object of type Asset. This object contains, among other information, a field named id representing the unique identifier of your asset.
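The choice between the two upload methods can be sketched as a small helper that maps a video source to keyword arguments for `assets.create`. The parameter names mirror the explanation above but are not verified against the SDK; the API call itself is commented out because it requires a client and a real source.

```python
# Hedged sketch: pick upload parameters based on the video source.
# Parameter names (method, url, file) follow the text above; verify
# them against the SDK reference before relying on them.
def asset_upload_kwargs(source):
    """Return assets.create keyword arguments for a URL or a local path."""
    if source.startswith(("http://", "https://")):
        return {"method": "url", "url": source}
    # Local files are uploaded with method="direct" and an opened file
    # object in binary read mode.
    return {"method": "direct", "file": open(source, "rb")}


kwargs = asset_upload_kwargs("https://example.com/video.mp4")
# asset = client.assets.create(**kwargs)
# asset.id holds the unique identifier of your asset.
```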
Monitor the indexing process
The platform requires some time to index videos. Check the status of the indexing process until it’s completed.
Function call: You call the indexes.indexed_assets.retrieve function.
Parameters:
index_id: The unique identifier of your video index.
indexed_asset_id: The unique identifier of your indexed asset.
Return value: An object of type IndexedAssetDetailed containing, among other information, a field named status representing the status of the indexing process. Wait until the value of this field is ready.
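The polling loop described above can be sketched generically. The helper takes a `fetch_status` callable, which in practice would wrap `client.indexes.indexed_assets.retrieve(index_id, indexed_asset_id).status`; the `"failed"` status value and the interval/timeout defaults are assumptions, not confirmed by the source.

```python
# Hedged sketch: poll until the indexing status reaches "ready".
# The "failed" status string and the default intervals are assumptions.
import time


def wait_until_ready(fetch_status, poll_interval=5.0, timeout=600.0, sleep=time.sleep):
    """Call fetch_status() until it returns "ready", or raise on failure/timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "ready":
            return status
        if status == "failed":  # assumed failure value; check the API reference
            raise RuntimeError("Indexing failed")
        sleep(poll_interval)
    raise TimeoutError("Indexing did not finish within the timeout")


# Usage sketch (requires a configured client):
# wait_until_ready(
#     lambda: client.indexes.indexed_assets.retrieve(
#         "<YOUR_INDEX_ID>", "<YOUR_INDEXED_ASSET_ID>"
#     ).status
# )
```

Injecting the `sleep` function keeps the helper testable without real waiting.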
Generate summaries, chapters, and highlights
Function call: You call the summarize method.
Parameters:
video_id: The unique identifier of the video for which you want to generate text.
type: The type of text you want to generate. It can take one of the following values: “summary”, “chapter”, or “highlight”.
(Optional) prompt: A string you can use to provide context for the summarization task. The maximum length of a prompt is 2,000 tokens. Example: “Generate chapters using casual and conversational language to match the vlogging style of the video.”
(Optional) temperature: A number that controls the randomness of the text. A higher value generates more creative text, while a lower value produces more deterministic text.
Return value: An object containing the generated content. The response type varies based on the type parameter:
- When type is summary: returns a Summary object with an id, summary text, and usage information.
- When type is chapter: returns a Chapter object with an id, an array of chapters, and usage information.
- When type is highlight: returns a Highlight object with an id, an array of highlights, and usage information.
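The parameters above can be assembled and validated before calling the API. This sketch only builds the keyword arguments and checks the `type` value against the three accepted strings; the `summarize` call itself is commented out because it requires a configured client, and the exact SDK signature should be verified against the reference.

```python
# Hedged sketch: validate and assemble arguments for the summarize call.
# The parameter names follow the explanation above; verify against the SDK.
VALID_TYPES = {"summary", "chapter", "highlight"}


def summarize_request(video_id, type, prompt=None, temperature=None):
    """Return keyword arguments for summarize, rejecting unknown type values."""
    if type not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}")
    kwargs = {"video_id": video_id, "type": type}
    if prompt is not None:
        kwargs["prompt"] = prompt  # optional context, up to 2,000 tokens
    if temperature is not None:
        kwargs["temperature"] = temperature  # optional randomness control
    return kwargs


# Usage sketch (requires a configured client):
# result = client.summarize(**summarize_request("<YOUR_VIDEO_ID>", "chapter"))
```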