Analyze videos

This quickstart guide provides a simplified introduction to analyzing videos to generate text using the TwelveLabs Video Understanding Platform. It includes the following:

  • A basic working example
  • Minimal implementation details
  • Core parameters for common use cases

For a comprehensive guide, see the Analyze videos section.

Key concepts

This section explains the key concepts and terminology used in this guide:

  • Asset: Your uploaded content. Once created, you can reference the same asset across multiple operations without uploading the file again.
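
    As a sketch of why this matters: keep the asset ID returned by the first upload and reuse it for later operations. The helper below is a hypothetical illustration, not part of the TwelveLabs SDK; it assumes a simple in-memory cache keyed by the video's source URL or path:

    ```python
    def get_or_create_asset_id(cache, source, create_fn):
        """Return the cached asset ID for a source, uploading only on first use.

        `cache` is a dict mapping source -> asset ID; `create_fn` performs the
        actual upload and returns the new asset's ID (hypothetical helper).
        """
        if source not in cache:
            cache[source] = create_fn(source)
        return cache[source]
    ```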

Workflow

This guide shows how to upload your video as an asset and analyze it synchronously using streaming responses.

Prerequisites

  • To use the platform, you need an API key:

    1. If you don’t have an account, sign up for a free account.
    2. Go to the API Keys page.
    3. If you need to create a new key, select the Create API Key button. Enter a name and set the expiration period. The default is 12 months.
    4. Select the Copy icon next to your key to copy it to your clipboard.

  • Depending on the programming language you are using, install the TwelveLabs SDK by entering one of the following commands:

    $ pip install twelvelabs
  • Your video files must meet the following requirements:

    • For this guide: Video files up to 1 hour long. For longer videos (up to 2 hours), use the asynchronous approach in the complete guide.
    • Model capabilities: See the complete requirements for video files.

    For upload size limits and processing modes, see the Upload and processing methods page.

Starter code

Copy and paste the code below, replacing the placeholders surrounded by <> with your values.

import time
from twelvelabs import TwelveLabs
from twelvelabs.types import VideoContext_AssetId

# 1. Initialize the client
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# 2. Upload a video
asset = client.assets.create(
    method="url",
    url="<YOUR_VIDEO_URL>"  # Use direct links to raw media files. Video hosting platforms and cloud storage sharing links are not supported
    # Or use method="direct" and file=open("<PATH_TO_VIDEO_FILE>", "rb") to upload a file from the local file system
)
print(f"Created asset: id={asset.id}")

# 3. Check the status of the asset
print("Waiting for asset to be ready...")
while True:
    asset = client.assets.retrieve(asset.id)
    if asset.status == "ready":
        print("Asset is ready")
        break
    if asset.status == "failed":
        raise RuntimeError(f"Asset processing failed: id={asset.id}")
    time.sleep(5)

# 4. Analyze your video
video = VideoContext_AssetId(asset_id=asset.id)
text_stream = client.analyze_stream(video=video, prompt="<YOUR_PROMPT>")

# 5. Process the results
for text in text_stream:
    if text.event_type == "text_generation":
        print(text.text)

Code explanation

1. Import the SDK and initialize the client

   Create a client instance to interact with the TwelveLabs Video Understanding Platform.

2. Upload a video

   Upload a video to create an asset.

3. Check the status of the asset

   You only need this step for files larger than 200 MB. The platform processes files up to 200 MB synchronously and sets the asset status to ready. For larger files, check the asset status until it is ready.
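   The polling loop in the starter code waits indefinitely. Below is a minimal sketch of the same loop with a timeout, written against a generic status-returning callable so it is not tied to the SDK; the function name and parameters are illustrative, not part of the TwelveLabs SDK:

   ```python
   import time

   def wait_until_ready(get_status, timeout=600.0, interval=5.0):
       """Poll get_status() until it returns "ready" or "failed", or time runs out."""
       deadline = time.monotonic() + timeout
       while True:
           status = get_status()
           if status == "ready":
               return
           if status == "failed":
               raise RuntimeError("Asset processing failed")
           if time.monotonic() >= deadline:
               raise TimeoutError("Asset was not ready before the timeout")
           time.sleep(interval)
   ```

   With the SDK, `get_status` could be `lambda: client.assets.retrieve(asset.id).status`.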

4. Analyze your video

   Use the unique identifier of your asset to analyze it with a custom prompt. The platform streams the generated text as it becomes available.
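   Rather than printing each chunk as it arrives, you may want the full generated text as a single string. A small sketch, assuming each streamed event carries the `event_type` and `text` attributes shown in the starter code:

   ```python
   def collect_generated_text(text_stream):
       """Join the text of all text_generation events into a single string."""
       return "".join(
           event.text
           for event in text_stream
           if event.event_type == "text_generation"
       )
   ```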

5. Process the results

   Process and display the generated text. This example prints the results to the standard output.
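   If you want to keep the output instead of only printing it, one option is to append each chunk to a file as it arrives. A minimal sketch, assuming the same event shape as the starter code:

   ```python
   def stream_to_file(text_stream, path):
       """Write each text_generation chunk to a file as it arrives."""
       with open(path, "w", encoding="utf-8") as f:
           for event in text_stream:
               if event.event_type == "text_generation":
                   f.write(event.text)
   ```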