Search
Use the TwelveLabs Video Understanding Platform to find specific moments in your video content using natural language queries or reference images. The platform analyzes videos by integrating images, audio, speech, and text, offering a deeper understanding than single-modal methods. It captures complex relationships between these elements, detects subtle details, and supports natural language queries and images for intuitive and precise use.
Key features:
- Improved accuracy: Multimodal integration enhances accuracy.
- Easy interaction: Natural language queries simplify searches.
- Advanced search: Enables image-based queries for precise results.
- Fewer errors: Multi-faceted analysis reduces misinterpretation.
- Time savings: Quickly finds relevant clips without manual review.
Use cases:
- Spoken word search: Find video segments where specific words or phrases are spoken.
- Visual element search: Locate video segments that match descriptions of visual elements or scenes.
- Action or event search: Identify video segments that depict specific actions or events.
- Image similarity search: Find video segments that visually resemble a provided image.
- Entity search: Locate video segments containing specific people, car models, animal species, or branded objects with improved accuracy (Marengo 3.0 only).
For details on how your usage is measured and billed, see the Pricing page.
Key concepts
This section explains the key concepts and terminology used in this guide:
- Index: A container that organizes your video content
- Asset: Your uploaded file
- Indexed asset: A video that has been indexed and is ready for downstream tasks
Workflow
Upload and index your videos before you search them. The platform indexes videos asynchronously. You can search your videos after indexing completes. Search results show video segments that match your search terms.
Types of search queries
The platform supports three types of search queries:
- Text queries: Search using natural language descriptions of visual elements, actions, sounds, or spoken words
- Image queries: Search using images to find visually similar content in your videos
- Composed queries: Combine text descriptions with images for more precise results (Marengo 3.0 only)
For guidance on choosing the correct query type, see the Search with text, image, and composed queries page.
Search scope
You can search within a single index per request. You cannot search at the video level or across multiple indexes simultaneously.
Customize your search
You can customize your search in the following ways:
- Specify which modalities to use: visual, audio, or transcription (spoken words)
- Choose how to combine modalities: use the `or` or `and` operators
- For searches within spoken words, select the match type: lexical, semantic, or both
Prerequisites
- To use the platform, you need an API key.
- Depending on the programming language you are using, install the TwelveLabs SDK (example commands follow this list).
- Your video files must meet the following requirements:
  - For this guide: files up to 4 GB when using publicly accessible URLs, or up to 200 MB for local files.
  - Model capabilities: see the complete requirements for resolution, aspect ratio, and supported formats.
  - For other upload methods with different limits, see the Upload methods page.
- If you wish to use images as queries, ensure that your image files meet the requirements.
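For example, the SDKs are typically installed as follows; the package names shown are those of the official SDKs at the time of writing, so verify them for your environment:

```sh
# Python
pip install twelvelabs

# Node.js
npm install twelvelabs-js
```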
Complete example
Copy and paste the code below, replacing the placeholders surrounded by <> with your values.
Text queries
Image queries
Composed text and image queries
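Below is a minimal end-to-end sketch in Python for the text-query case, assembled from the calls described in the code explanation that follows. The model name string (`marengo3.0`) is an assumption, and method names may vary with your SDK version:

```python
import time

from twelvelabs import TwelveLabs

# Initialize the client with your API key
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Create an index that analyzes visual and audio modalities
index = client.indexes.create(
    index_name="<YOUR_INDEX_NAME>",
    models=[
        {
            "model_name": "marengo3.0",  # assumed identifier for Marengo 3.0
            "model_options": ["visual", "audio"],
        }
    ],
)

# Upload a video from a publicly accessible URL to create an asset
asset = client.assets.create(method="url", url="<YOUR_VIDEO_URL>")

# Index the asset; this operation is asynchronous
indexed_asset = client.indexes.indexed_assets.create(
    index_id=index.id,
    asset_id=asset.id,
)

# Poll until indexing completes
while True:
    status = client.indexes.indexed_assets.retrieve(
        index_id=index.id,
        indexed_asset_id=indexed_asset.id,
    ).status
    if status == "ready":
        break
    time.sleep(5)

# Search with a natural-language text query across visual and audio cues
results = client.search.query(
    index_id=index.id,
    query_text="<YOUR_QUERY>",
    search_options=["visual", "audio"],
)
for item in results:
    print(f"video={item.video_id} start={item.start}s end={item.end}s rank={item.rank}")
```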
Code explanation
Python
Node.js
Import the SDK and initialize the client
Create a client instance to interact with the TwelveLabs Video Understanding Platform.
Function call: You call the constructor of the TwelveLabs class.
Parameters:
api_key: The API key to authenticate your requests to the platform.
Return value: An object of type TwelveLabs configured for making API calls.
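For example, a minimal sketch (assumes the official `twelvelabs` Python package):

```python
from twelvelabs import TwelveLabs

# All subsequent API calls are authenticated with this key
client = TwelveLabs(api_key="<YOUR_API_KEY>")
```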
Create an index
Indexes store and organize your video data, allowing you to group related videos. This guide shows how to create one, but you can also use an existing index.
Function call: You call the indexes.create function.
Parameters:
- index_name: The name of the index.
- models: An array specifying your model configuration. This example enables the Marengo video understanding model and specifies that it analyzes visual and audio modalities.
See the Indexes page for more details on creating an index and specifying the model configuration.
Return value: An object of type IndexesCreateResponse containing a field named id representing the unique identifier of the newly created index.
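For example (a sketch; the model name string and option keys are assumptions and may differ in your SDK version):

```python
# Create an index that analyzes visual and audio modalities
index = client.indexes.create(
    index_name="<YOUR_INDEX_NAME>",
    models=[
        {
            "model_name": "marengo3.0",  # assumed identifier for Marengo 3.0
            "model_options": ["visual", "audio"],
        }
    ],
)
print(f"Created index: {index.id}")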
Upload a video
Upload a video to create an asset. For details about the available upload methods and the corresponding limits, see the Upload methods page.
Function call: You call the assets.create function.
Parameters:
- method: The upload method for your asset. Use `url` for a publicly accessible URL or `direct` to upload a local file. This example uses `url`.
- url or file: The publicly accessible URL of your video or an opened file object in binary read mode. This example uses `url`.
Return value: An object of type Asset. This object contains, among other information, a field named id representing the unique identifier of your asset.
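For example (a sketch based on the parameters above):

```python
# Upload from a publicly accessible URL
asset = client.assets.create(method="url", url="<YOUR_VIDEO_URL>")
print(f"Created asset: {asset.id}")

# Or upload a local file instead:
# with open("<YOUR_FILE_PATH>", "rb") as f:
#     asset = client.assets.create(method="direct", file=f)
```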
Index your video
Index your video by adding the asset created in the previous step to an index. This operation is asynchronous.
Function call: You call the indexes.indexed_assets.create function.
Parameters:
- index_id: The unique identifier of the index to which the asset will be indexed.
- asset_id: The unique identifier of your asset.
- (Optional) enable_video_stream: Specifies whether the platform stores the video for streaming. When set to `True`, you can retrieve its URL by calling the `indexes.indexed_assets.retrieve` method.
Return value: An object of type IndexedAssetsCreateResponse. This object contains a field named id representing the unique identifier of your indexed asset.
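For example (a sketch based on the parameters above):

```python
# Add the asset to the index; indexing runs asynchronously
indexed_asset = client.indexes.indexed_assets.create(
    index_id=index.id,
    asset_id=asset.id,
    enable_video_stream=True,  # optional: store the video for streaming
)
print(f"Indexed asset: {indexed_asset.id}")
```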
Monitor the indexing process
The platform requires some time to index videos. Check the status of the indexing process until it’s completed.
Function call: You call the indexes.indexed_assets.retrieve function.
Parameters:
- index_id: The unique identifier of your video index.
- indexed_asset_id: The unique identifier of your indexed asset.
Return value: An object of type IndexedAssetDetailed containing, among other information, a field named status representing the status of the indexing process. Wait until the value of this field is ready.
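For example, a simple polling loop (a sketch; in production, also handle failure states as described in the API reference):

```python
import time

# Poll until the indexing process completes
while True:
    indexed_asset = client.indexes.indexed_assets.retrieve(
        index_id=index.id,
        indexed_asset_id=indexed_asset.id,
    )
    if indexed_asset.status == "ready":
        break
    time.sleep(5)  # wait before checking the status again
```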
Perform a search request
Perform a search within your index using a text or image query or a combination of both.
Text queries
Image queries
Composed text and image queries
Function call: You call the search.query method.
Parameters:
- index_id: The unique identifier of the index.
- query_text: Your search query. Note that the platform supports full natural language-based search. The maximum query length varies by model: Marengo 3.0 supports up to 500 tokens per query, while Marengo 2.7 supports up to 77 tokens per query.
- search_options: The modalities the platform uses when performing a search. This example searches using visual and audio cues. For details, see the Search options section.
- (Optional) operator: Combines multiple search options using `or` (default) or `and`. Use `and` to find segments matching all search options. Use `or` to find segments matching any search option.
- (Optional) transcription_options: Specifies how the platform matches your query against spoken words. This parameter applies only when `transcription` is included in `search_options` and when Marengo 3.0 is enabled for your index. Available options are `lexical`, `semantic`, or both (default). For details, see the Transcription options section.
Return value: An object of type SyncPager[SearchItem] that can be iterated to access search results. Each item contains the following fields, among other information:
- video_id: The unique identifier of the video that matched your search terms.
- start: The start time of the matching video clip, expressed in seconds.
- end: The end time of the matching video clip, expressed in seconds.
- rank: The relevance ranking assigned by the model. Lower numbers indicate higher relevance, starting with 1 for the most relevant result. Only Marengo 3.0 and newer versions return this field; earlier versions return `score` and `confidence` instead.
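For example, a text-query sketch using the parameters above (the `operator` value shown is the default):

```python
results = client.search.query(
    index_id=index.id,
    query_text="<YOUR_QUERY>",
    search_options=["visual", "audio"],
    operator="or",  # optional: match any of the listed modalities
)

# Iterate the pager to access each matching segment
for item in results:
    print(f"video={item.video_id} start={item.start}s end={item.end}s rank={item.rank}")
```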
Next steps
- Learn more about searching with text, image, and composed queries for best practices and advanced techniques.
- Explore entity search to find specific people in your videos.
- Learn query engineering techniques to refine your search queries.
- Use sorting to organize your search results.
- Apply grouping to cluster search results from the same video together.
- Implement filtering to narrow down results based on specific criteria.