This page provides an overview of common workflows for using the Twelve Labs Python SDK. Each workflow consists of a series of steps, with links to detailed documentation for each step.

All workflows that involve uploading video content to the platform require asynchronous processing. You must wait for the video processing to complete before proceeding with the subsequent steps.

Prerequisites

Ensure that the following prerequisites are met before using the Python SDK:

  • Python 3.7 or newer must be installed on your machine.
  • You have an API key. If you don't have an account, please sign up for a free account. Then, to retrieve your API key, go to the API Key page, and select the Copy icon to the right of the key to copy it to your clipboard.

Install the SDK

Install the twelvelabs package:

pip install twelvelabs

Initialize the SDK

  1. Import the required packages:
from twelvelabs import TwelveLabs
  2. Instantiate the SDK client with your API key:
client = TwelveLabs(api_key="<YOUR_API_KEY>")

Search

Follow the steps in this section to search your video content for specific moments, scenes, or information.

Steps:

  1. Create an index, enabling the Marengo video understanding model.
  2. Upload videos and monitor the processing.
  3. Perform a search request, using text or images as queries.

Notes:

  • The search scope is an individual index.
  • Results support pagination, filtering, sorting, and grouping.
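Putting these steps together, a minimal sketch of the search workflow might look as follows. The method names (client.index.create, client.task.create, task.wait_for_done, client.search.query), the "marengo2.7" model name, and the option values are assumptions based on a recent version of the SDK; refer to the linked pages for the exact signatures.

# Assumes the client created in the "Initialize the SDK" section above.
# 1. Create an index that uses the Marengo video understanding model.
index = client.index.create(
    name="my-search-index",
    models=[{"name": "marengo2.7", "options": ["visual", "audio"]}],  # assumed model name and options
)

# 2. Upload a video and wait for the asynchronous processing to finish.
task = client.task.create(index_id=index.id, file="path/to/video.mp4")
task.wait_for_done(sleep_interval=5)

# 3. Search the index using a text query.
results = client.search.query(
    index_id=index.id,
    query_text="people discussing a product launch",
    options=["visual", "audio"],
)
for clip in results.data:
    print(clip.video_id, clip.start, clip.end, clip.score)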

Create text, image, and audio embeddings

This workflow guides you through creating embeddings for text, images, and audio.

Steps:

  1. Create text, image, and audio embeddings.

Note:

  • Creating text, image, and audio embeddings is a synchronous process.
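As an illustration, creating a text embedding might look like the sketch below. The client.embed.create method, the "Marengo-retrieval-2.7" model name, and the response attributes are assumptions based on a recent version of the SDK; see the linked step for the exact interface.

# Synchronous: the embedding is returned directly in the response.
res = client.embed.create(
    model_name="Marengo-retrieval-2.7",  # assumed model name
    text="A man riding a bicycle along the beach at sunset",
)
# Attribute names may vary by SDK version; inspect the response object.
if res.text_embedding is not None:
    for segment in res.text_embedding.segments:
        print(len(segment.embeddings_float))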

Create video embeddings

This workflow guides you through creating embeddings for videos.

Steps:

  1. Upload a video and monitor the processing.
  2. Retrieve the embeddings.

Note:

  • Creating video embeddings is an asynchronous process.
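As an illustration, the video embedding workflow might look like the sketch below. The client.embed.task.create method, task.wait_for_done, task.retrieve, and the "Marengo-retrieval-2.7" model name are assumptions based on a recent version of the SDK; see the linked steps for the exact interface.

# Asynchronous: create an embedding task, wait for it, then retrieve the result.
task = client.embed.task.create(
    model_name="Marengo-retrieval-2.7",  # assumed model name
    video_file="path/to/video.mp4",
)
task.wait_for_done(sleep_interval=5)

# Retrieve the embeddings once processing has completed.
task = task.retrieve()
if task.video_embedding is not None:
    for segment in task.video_embedding.segments:
        print(segment.start_offset_sec, segment.end_offset_sec)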

Generate text from videos

Follow the steps in this section to generate text based on your videos.

Steps:

  1. Create an index, enabling the Pegasus video understanding model.
  2. Upload videos and monitor the processing.
  3. Depending on your use case, generate one of the following: