Analyze videos
This quickstart guide provides a simplified introduction to analyzing videos to generate text using the TwelveLabs Video Understanding Platform. It includes the following:
- A basic working example
- Minimal implementation details
- Core parameters for common use cases
For a comprehensive guide, see the Analyze videos section.
Key concepts
This section explains the key concepts and terminology used in this guide:
- Asset: Your uploaded content. Once created, you can reference the same asset across multiple operations without uploading the file again.
Workflow
This guide shows how to upload your video as an asset and analyze it synchronously using streaming responses.
Prerequisites
- To use the platform, you need an API key:
- Depending on the programming language you are using, install the TwelveLabs SDK by entering one of the following commands:
- Your video files must meet the following requirements:
  - For this guide: video files up to 1 hour. For longer videos (up to 2 hours), use the asynchronous approach in the complete guide.
  - Model capabilities: see the complete requirements for video files.
For upload size limits and processing modes, see the Upload and processing methods page.
Starter code
Copy and paste the code below, replacing the placeholders surrounded by <> with your values.
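If the starter code is not visible in your view, the overall shape of the flow this guide describes can be sketched as follows. This is a minimal sketch, not the official starter code: it assumes the TwelveLabs Python SDK (`pip install twelvelabs`), and the attribute names `assets.create`, `assets.retrieve`, and `analyze_stream` are assumptions you should verify against the SDK reference for your version.

```python
# Minimal sketch of the upload-and-analyze flow, assuming the TwelveLabs
# Python SDK. The method names `assets.create`, `assets.retrieve`, and
# `analyze_stream` are ASSUMPTIONS; check the SDK reference for the exact
# signatures before using this.
import time


def analyze_video(api_key: str, video_path: str, prompt: str) -> str:
    from twelvelabs import TwelveLabs  # requires: pip install twelvelabs

    client = TwelveLabs(api_key=api_key)

    # Upload the video as an asset (hypothetical method name).
    asset = client.assets.create(file=video_path)

    # Files over 200 MB are processed asynchronously; poll until ready.
    while asset.status != "ready":
        time.sleep(5)
        asset = client.assets.retrieve(asset.id)

    # Stream the analysis and accumulate the generated text.
    text = ""
    for chunk in client.analyze_stream(asset_id=asset.id, prompt=prompt):
        text += chunk.text
    return text


# Example usage (fill in your values):
# print(analyze_video("<YOUR_API_KEY>", "<YOUR_VIDEO_PATH>",
#                     "Summarize this video."))
```

Deferring the SDK import into the function keeps the sketch importable even before the SDK is installed; in a real script you would import at the top of the file.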
Code explanation
Import the SDK and initialize the client
Create a client instance to interact with the TwelveLabs Video Understanding Platform.
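As an illustration, client creation typically looks like the sketch below. It assumes the TwelveLabs Python SDK; verify the import path against the SDK documentation for your version.

```python
# Client initialization, assuming the TwelveLabs Python SDK
# (`pip install twelvelabs`). The import is deferred into the function
# so this sketch loads even where the SDK is not installed.
def make_client(api_key: str):
    from twelvelabs import TwelveLabs
    return TwelveLabs(api_key=api_key)


# Example usage (fill in your value):
# client = make_client("<YOUR_API_KEY>")
```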
Check the status of the asset
You only need this step for files larger than 200 MB. The platform processes files up to 200 MB synchronously and sets the asset status to ready immediately. For larger files, poll the asset status until it becomes ready before starting the analysis.
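The polling step can be sketched generically as below. Here `fetch_status` stands in for whatever SDK call retrieves the asset's current status (its exact name is an assumption); the demo uses a stub so the sketch runs without network access.

```python
# Generic status-polling sketch: call `fetch_status` until the asset is
# ready, with a polling interval and an overall timeout.
import time


def wait_until_ready(fetch_status, interval_s: float = 5.0,
                     timeout_s: float = 600.0) -> str:
    """Poll `fetch_status` until it returns "ready" or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "ready":
            return status
        if status == "failed":
            raise RuntimeError("asset processing failed")
        time.sleep(interval_s)
    raise TimeoutError("asset was not ready within the timeout")


# Demo with a stub that becomes ready on the third poll:
_statuses = iter(["processing", "processing", "ready"])
print(wait_until_ready(lambda: next(_statuses), interval_s=0.0))  # → ready
```

In a real script, `fetch_status` would wrap the SDK call that retrieves the asset and return its status field.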