Video understanding engines

Twelve Labs' video understanding engines consist of a family of deep neural networks built on our multimodal foundation model for video understanding that you can use for the following downstream tasks:

  • Search using natural language queries
  • Zero-shot classification
  • Text generation from video

Videos contain multiple types of information, including visuals, sounds, spoken words, and text. The human brain combines all of these types of information, and the relations between them, to comprehend the overall meaning of a scene. For example, suppose you're watching a video of a person jumping and clapping (both visual cues) with the sound muted. You might realize they're happy, but you can't understand why without the sound. Once the sound is unmuted, you might realize they're cheering for a soccer team that scored a goal.

Thus, an application that analyzes a single type of information can't provide a comprehensive understanding of a video. Twelve Labs' video understanding engines, however, analyze and combine information from all the modalities to accurately interpret the meaning of a video holistically, similar to how humans watch, listen, and read simultaneously to understand videos.

Our video understanding engines can identify, analyze, and interpret a variety of elements, including but not limited to the following:

Category                              Modality        Examples
People, including famous individuals  Visual          Michael Jordan, Steve Jobs
Actions                               Visual          Running, dancing, kickboxing
Objects                               Visual          Cars, computers, stadiums
Animals or pets                       Visual          Monkeys, cats, horses
Nature                                Visual          Mountains, lakes, forests
Sounds (excluding human speech)       Visual          Chirping (birds), applause, fireworks popping or exploding
Human speech                          Conversation    "Good morning. How may I help you?"
Text displayed on the screen (OCR)    Text in video   License plates, handwritten words, number on a player's jersey
Brand logos                           Logo            Nike, Starbucks, Mercedes

Engine Types

Twelve Labs provides two distinct engine types, embedding and generative, each serving a unique purpose in multimodal video understanding.

  • Embedding engines (Marengo): These engines are proficient at tasks such as search and classification, enabling enhanced video understanding.
  • Generative engines (Pegasus): These engines generate text based on your videos.

The following engines are available:

Engine      Type        Description
Marengo2.5  Embedding   The latest and best-performing multimodal video understanding engine.
Marengo2    Embedding   This engine introduced significant performance improvements.
Marengo     Embedding   This version was available when the platform launched and enabled multimodal video understanding. However, Twelve Labs no longer offers support for this engine.
Pegasus1.0  Generative  This engine generates text based on videos.



When selecting an engine, consider the specific requirements of your task. To generate text from video, use Pegasus. For search and classification, use Marengo.
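The engine-selection rule above can be sketched as a small helper that picks an engine name for a hypothetical index-creation payload. This is a minimal illustration, not a verified request schema: the field names (`index_name`, `engine_name`) and the lowercase engine identifiers are assumptions modeled on the engine table above.

```python
# Minimal sketch: choose an engine by task when building an index-creation
# payload. Field names and lowercase engine identifiers are assumptions
# for illustration, not a verified Twelve Labs request schema.

def build_index_request(task: str, index_name: str) -> dict:
    """Return a hypothetical index-creation payload for the given task."""
    if task in ("search", "classification"):
        engine = "marengo2.5"  # embedding engine: search and classification
    elif task == "generate":
        engine = "pegasus1.0"  # generative engine: text generation from video
    else:
        raise ValueError(f"unknown task: {task}")
    return {"index_name": index_name, "engine_name": engine}

payload = build_index_request("search", "product-demos")
```

In practice, you would send a payload like this to the platform's index-creation endpoint; the sketch only captures the decision of which engine type matches which downstream task.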


The screenshots in this section are from the Playground. However, the principles demonstrated are similar when invoking the API programmatically.

Steve Jobs introducing the iPhone

In the example screenshot below, the query was "How did Steve Jobs introduce the iPhone?". The Marengo video understanding engine used information found in the visual and conversation modalities to perform the following tasks:

  • Visual recognition of a famous person (Steve Jobs)
  • Joint speech and visual recognition to semantically search for the moment when Steve Jobs introduced the iPhone. Note that semantic search finds information based on the intended meaning of the query rather than the literal words you used, meaning that the platform identified the matching video fragments even if Steve Jobs didn't explicitly say the words in the query.

To see this example in the Playground, ensure you're logged in, and then open this URL in your browser.
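The same query can be sketched as a search payload restricted to the two modalities used in this example. The field names (`query`, `search_options`) and the option values are assumptions based on the modality names used in this guide, not a verified API reference.

```python
# Minimal sketch of a semantic search payload for the example above.
# Field names and option values mirror the modality names in this guide;
# the exact request schema is an assumption for illustration.

def build_search_request(query: str, modalities: list[str]) -> dict:
    """Return a hypothetical search payload limited to the given modalities."""
    allowed = {"visual", "conversation", "text_in_video", "logo"}
    unknown = set(modalities) - allowed
    if unknown:
        raise ValueError(f"unsupported modalities: {sorted(unknown)}")
    return {"query": query, "search_options": modalities}

request = build_search_request(
    "How did Steve Jobs introduce the iPhone?",
    ["visual", "conversation"],  # joint visual + speech recognition
)
```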

Polar bear holding a Coca-Cola bottle

In the example screenshot below, the query was "Polar bear holding a Coca-Cola bottle." The Marengo video understanding engine used information found in the visual and logo modalities to perform the following tasks:

  • Recognition of a cartoon character (polar bear)
  • Identification of an object (bottle)
  • Detection of a specific brand logo (Coca-Cola)
  • Identification of an action (polar bear holding a bottle)

To see this example in the Playground, ensure you're logged in, and then open this URL in your browser.
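The element-to-modality mapping from the table earlier in this guide can be sketched as a lookup that derives which modalities a query like this one relies on. The modality names come from this guide; the element keys and the idea of treating modalities as machine-readable options are assumptions for illustration.

```python
# Minimal sketch: map element types from the table earlier in this guide
# to the modality each one is analyzed under. Element keys are hypothetical
# identifiers introduced here for illustration.

ELEMENT_MODALITY = {
    "people": "visual",
    "actions": "visual",
    "objects": "visual",
    "sounds": "visual",  # non-speech sounds fall under the visual modality
    "human_speech": "conversation",
    "on_screen_text": "text_in_video",
    "brand_logos": "logo",
}

def modalities_for(elements: list[str]) -> list[str]:
    """Deduplicated modality list for a set of element types."""
    seen: list[str] = []
    for element in elements:
        modality = ELEMENT_MODALITY[element]
        if modality not in seen:
            seen.append(modality)
    return seen

# The polar-bear query relies on objects, an action, and a brand logo:
options = modalities_for(["objects", "actions", "brand_logos"])
```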