Analyze videos

This quickstart guide provides a simplified introduction to analyzing videos to generate text using the TwelveLabs Video Understanding Platform. It includes the following:

  • A basic working example
  • Minimal implementation details
  • Core parameters for common use cases

For a comprehensive guide, see the Analyze videos section.

Key concepts

This section explains the key concepts and terminology used in this guide:

  • Asset: Your uploaded content. Once created, you can reference the same asset across multiple operations without uploading the file again.

Workflow

This guide shows how to upload your video as an asset and analyze it synchronously using streaming responses.

For videos over 1 hour, use the asynchronous approach in the complete guide.

Prerequisites

  • To use the platform, you need an API key:

    1. If you don’t have an account, sign up for a free account.
    2. Go to the API Keys page.
    3. If you need to create a new key, select the Create API Key button. Enter a name and set the expiration period. The default is 12 months.
    4. Select the Copy icon next to your key to copy it to your clipboard.

  • Depending on the programming language you are using, install the TwelveLabs SDK by entering one of the following commands:

    $ pip install twelvelabs
  • Your video files must meet the following requirements:

    • For this guide: Video files up to 1 hour. This guide uses the synchronous approach, which returns results immediately. For longer videos (up to 2 hours), use the asynchronous approach in the complete guide.
    • Model capabilities: See the complete requirements for video files.

    For upload size limits and processing modes, see the Upload and processing methods page.
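Rather than hard-coding the API key in your script, you can read it from an environment variable before initializing the client. The sketch below is a common convention, not a platform requirement; the variable name TWELVELABS_API_KEY is our choice.

```python
import os

def load_api_key(var_name: str = "TWELVELABS_API_KEY") -> str:
    """Read the API key from the environment.

    The variable name is a convention used in this sketch,
    not something the platform mandates.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"Set the {var_name} environment variable to your TwelveLabs API key."
        )
    return key
```

You would then initialize the client with `TwelveLabs(api_key=load_api_key())` instead of pasting the key into your source code.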

Starter code

Copy and paste the code below, replacing the placeholders surrounded by <> with your values.

from twelvelabs import TwelveLabs
from twelvelabs.types import VideoContext_AssetId

# 1. Initialize the client
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# 2. Upload a video
asset = client.assets.create(
    method="url",
    url="<YOUR_VIDEO_URL>",  # Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported.
    # Or use method="direct" and file=open("<PATH_TO_VIDEO_FILE>", "rb") to upload a file from the local file system.
)
print(f"Created asset: id={asset.id}")

# 3. Analyze your video
video = VideoContext_AssetId(asset_id=asset.id)
text_stream = client.analyze_stream(video=video, prompt="<YOUR_PROMPT>")

# 4. Process the results
for text in text_stream:
    if text.event_type == "text_generation":
        print(text.text)

Code explanation

1. Import the SDK and initialize the client

   Create a client instance to interact with the TwelveLabs Video Understanding Platform.

2. Upload a video

   Upload a video to create an asset.

3. Analyze your video

   Use the unique identifier of your asset to analyze it with a custom prompt. The platform streams the generated text as it becomes available.

4. Process the results

   Process and display the generated text. This example prints the results to the standard output.
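If you want the full analysis as a single string instead of printing fragments as they arrive, the streaming loop in step 4 can be wrapped in a small helper. This is a sketch that assumes only the event shape shown in the starter code (`event_type` and `text` attributes):

```python
def collect_analysis_text(text_stream) -> str:
    """Concatenate text fragments from a streaming analysis response.

    Assumes each event exposes `event_type` and `text` attributes,
    as in the starter code above.
    """
    parts = []
    for event in text_stream:
        if event.event_type == "text_generation":
            parts.append(event.text)
    return "".join(parts)

# Usage (assuming `client` and `video` are set up as in the starter code):
# full_text = collect_analysis_text(
#     client.analyze_stream(video=video, prompt="<YOUR_PROMPT>")
# )
```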