The Twelve Labs Video Understanding Platform, currently in beta, offers an API suite for integrating a state-of-the-art ("SOTA") foundation model that understands contextual information in your videos and makes it accessible to your applications. The API is organized around REST and can be used from most programming languages. You can also use Postman or other REST clients to send requests and view responses.
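For example, a first authenticated call from Python might look like the following. This is a minimal sketch: the base URL, version segment, `x-api-key` header, and `/indexes` endpoint are assumptions for illustration, so confirm the exact values against the API reference and your dashboard.

```python
# Minimal sketch of an authenticated call to the REST API.
# The base URL, version segment, header name, and endpoint path
# below are assumptions -- verify them in the API reference.
import requests

API_KEY = "<YOUR_API_KEY>"                    # issued when you sign up for the beta
BASE_URL = "https://api.twelvelabs.io/v1.2"   # assumed base URL and version

# List your existing indexes (assumed endpoint: GET /indexes).
response = requests.get(
    f"{BASE_URL}/indexes",
    headers={"x-api-key": API_KEY},           # assumed auth header
)
response.raise_for_status()
print(response.json())
```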
The following diagram illustrates the architecture of the Twelve Labs Video Understanding Platform and how different parts interact:
An index is a basic unit for organizing and storing your video data (video embeddings and metadata). Indexes facilitate information retrieval and processing.
A video understanding engine consists of a family of deep neural networks built on top of our multimodal foundation model for video understanding, offering search, classification, and summarization capabilities. For each index, you must configure the engines you want to enable. See the Video understanding engines page for more details about the available engines and their capabilities.
The engine options define the types of information that a specific engine will process. Currently, Twelve Labs provides the following engine options:
- Text in video
For more details, see the Engine options page.
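Putting the pieces above together, creating an index typically means naming it and declaring which engines and engine options to enable. The sketch below is illustrative only: the `/indexes` endpoint, the payload field names, the placeholder engine name, and the `text_in_video` option identifier are assumptions, so check the API reference for the exact schema.

```python
# Sketch of creating an index with an engine and an engine option enabled.
# Endpoint path, field names, engine name, and the option identifier are
# assumptions for illustration, not the confirmed request schema.
import requests

API_KEY = "<YOUR_API_KEY>"
BASE_URL = "https://api.twelvelabs.io/v1.2"   # assumed

payload = {
    "index_name": "my-first-index",
    "engines": [
        {
            "engine_name": "<ENGINE_NAME>",        # choose one from the Video understanding engines page
            "engine_options": ["text_in_video"],   # assumed identifier for the "Text in video" option
        }
    ],
}

response = requests.post(
    f"{BASE_URL}/indexes",
    headers={"x-api-key": API_KEY},
    json=payload,
)
response.raise_for_status()
print(response.json())   # the response should include the ID of the new index
```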
The video understanding engine processes the following user inputs and returns the corresponding results to your application (a sketch of a search request follows the list):
- Search queries
- Classification queries
- Prompts for generating text from video
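As an illustration of the first input type, a search query could be submitted roughly as follows. The `/search` endpoint, the request fields, and the response shape shown here are assumptions rather than the confirmed API contract; consult the API reference for the exact schema.

```python
# Sketch of submitting a search query against an index.
# The /search endpoint, field names, option identifier, and
# response shape are assumptions for illustration.
import requests

API_KEY = "<YOUR_API_KEY>"
BASE_URL = "https://api.twelvelabs.io/v1.2"   # assumed

response = requests.post(
    f"{BASE_URL}/search",
    headers={"x-api-key": API_KEY},
    json={
        "index_id": "<YOUR_INDEX_ID>",
        "query": "people discussing a product launch",
        "search_options": ["text_in_video"],   # assumed option identifier
    },
)
response.raise_for_status()
for clip in response.json()["data"]:           # assumed response shape
    print(clip)
```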