Milvus - Advanced video search
Summary: This integration combines Twelve Labs' Embed API, which generates multimodal embeddings from video content, with Milvus, an open-source vector database that provides efficient storage and retrieval for those embeddings, to build a semantic video search solution. It lets you incorporate video content analysis capabilities into your applications.
Key use cases include:
- Content-based video retrieval
- Recommendation systems
- Search engines that understand the nuances of video data
Description: The process of performing a semantic video search using the Embed API and Milvus involves two main steps:
- Create multimodal embeddings for your video content using the Embed API.
- Use the embeddings created in the previous step to perform similarity searches in Milvus.
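To make the second step concrete, the sketch below shows a minimal Milvus setup using pymilvus with Milvus Lite. The database file name, the collection name, and the 1024-dimension value (the expected size of Marengo-retrieval-2.6 embeddings) are illustrative assumptions; check the notebook and your own deployment before reusing them.
from pymilvus import MilvusClient

# Connect to a local Milvus Lite database file (illustrative name);
# for a standalone or managed Milvus deployment, pass its URI instead.
milvus_client = MilvusClient("milvus_twelvelabs_demo.db")

COLLECTION_NAME = "twelvelabs_video_embeddings"  # illustrative name

# Start from a clean collection whose dimension matches the embeddings
# produced by the Embed API (assumed to be 1024 here).
if milvus_client.has_collection(collection_name=COLLECTION_NAME):
    milvus_client.drop_collection(collection_name=COLLECTION_NAME)

milvus_client.create_collection(
    collection_name=COLLECTION_NAME,
    dimension=1024,        # assumed embedding dimension
    metric_type="COSINE",  # cosine similarity between embeddings
)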
Step-by-step guide: Our blog post, Advanced Video Search: Leveraging Twelve Labs and Milvus for Semantic Search, guides you through the process of creating a video search application, from setup to performing searches.
Colab Notebook: TwelveLabs-EmbedAPI-Milvus
Integration with Twelve Labs
This section describes how you can use the Twelve Labs Python SDK to create video embeddings for semantic video search. The generate_embedding function below returns a list of dictionaries, each containing an embedding vector and its associated metadata, together with the complete task result object:
from twelvelabs import TwelveLabs
from twelvelabs.models.embed import EmbeddingsTask

# Initialize the Twelve Labs client (replace the placeholder with your API key)
twelvelabs_client = TwelveLabs(api_key="<YOUR_API_KEY>")

def generate_embedding(video_url):
    """
    Generate embeddings for a given video URL using the Twelve Labs API.

    This function creates an embedding task for the specified video URL using
    the Marengo-retrieval-2.6 engine. It monitors the task progress and waits
    for completion. Once done, it retrieves the task result and extracts the
    embeddings along with their associated metadata.

    Args:
        video_url (str): The URL of the video to generate embeddings for.

    Returns:
        tuple: A tuple containing two elements:
            1. list: A list of dictionaries, where each dictionary contains:
                - 'embedding': The embedding vector as a list of floats.
                - 'start_offset_sec': The start time of the segment in seconds.
                - 'end_offset_sec': The end time of the segment in seconds.
                - 'embedding_scope': The scope of the embedding (e.g., 'clip', 'video').
            2. EmbeddingsTaskResult: The complete task result object from the Twelve Labs API.

    Raises:
        Any exceptions raised by the Twelve Labs API during task creation,
        execution, or retrieval.
    """
    # Create an embedding task
    task = twelvelabs_client.embed.task.create(
        engine_name="Marengo-retrieval-2.6",
        video_url=video_url
    )
    print(f"Created task: id={task.id} engine_name={task.engine_name} status={task.status}")

    # Define a callback function to monitor task progress
    def on_task_update(task: EmbeddingsTask):
        print(f" Status={task.status}")

    # Wait for the task to complete
    status = task.wait_for_done(
        sleep_interval=2,
        callback=on_task_update
    )
    print(f"Embedding done: {status}")

    # Retrieve the task result
    task_result = twelvelabs_client.embed.task.retrieve(task.id)

    # Extract and return the embeddings
    embeddings = []
    for v in task_result.video_embeddings:
        embeddings.append({
            'embedding': v.embedding.float,
            'start_offset_sec': v.start_offset_sec,
            'end_offset_sec': v.end_offset_sec,
            'embedding_scope': v.embedding_scope
        })

    return embeddings, task_result
For more details on how to create and customize video embeddings, see the Create video embeddings page.
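With the collection from the setup sketch in place, the second step of the workflow is to write the segment embeddings returned by generate_embedding into Milvus and query them with another vector. The snippet below is a sketch under the same assumptions as above (pymilvus MilvusClient, the twelvelabs_video_embeddings collection, 1024-dimensional vectors); the video URL is a placeholder, and the query vector is simply reused from the inserted data for illustration.
# Step 1: generate embeddings for a video (placeholder URL)
video_url = "https://example.com/video.mp4"
embeddings, task_result = generate_embedding(video_url)

# Step 2a: insert one row per embedded segment; extra fields such as
# video_url are stored as dynamic fields in the quick-setup collection.
data = [
    {
        "id": idx,
        "vector": seg["embedding"],
        "video_url": video_url,
        "start_offset_sec": seg["start_offset_sec"],
        "end_offset_sec": seg["end_offset_sec"],
        "embedding_scope": seg["embedding_scope"],
    }
    for idx, seg in enumerate(embeddings)
]
milvus_client.insert(collection_name=COLLECTION_NAME, data=data)

# Step 2b: search with any vector from the same embedding space, for
# example another video segment or a text embedding from the Embed API.
query_vector = embeddings[0]["embedding"]
results = milvus_client.search(
    collection_name=COLLECTION_NAME,
    data=[query_vector],
    limit=5,
    output_fields=["video_url", "start_offset_sec", "end_offset_sec"],
)
for hit in results[0]:
    print(hit["distance"], hit["entity"])
Because Marengo-retrieval-2.6 produces text and video embeddings in the same space, the same search call supports text-to-video retrieval once you have a text embedding for the query.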
Next steps
After reading this page, you have the following options:
- Customize and use the example: Use the TwelveLabs-EmbedAPI-Milvus notebook to understand how the integration works. You can make changes and add functionality to suit your specific use case. Some notable examples include:
- Combine text and video queries for hybrid search: Use the Embed API to create text embeddings for your queries. Then, you can perform weighted searches in Milvus using both text and video embeddings; one way to do this is sketched after this list.
- Search within specific parts of videos: Break long videos into smaller segments. Create an embedding for each segment to find specific moments in videos. To adjust the timing and length of your embeddings, see Customize your embeddings.
- Analyze video content: Use embeddings to group similar video segments. This helps you detect trends or find unusual content in large video collections.
- Explore further: Try the applications built by the community or our sample applications to get more insights into the Twelve Labs Video Understanding Platform's diverse capabilities and learn more about integrating the platform into your applications.
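For the hybrid-search option above, one simple approach is to run separate Milvus searches with a text embedding and a video embedding (both produced by the Embed API) and blend the scores with your own weights. The function below is a sketch under the same assumptions as the earlier snippets; the weights and the text_vector and video_vector inputs are illustrative placeholders, not a prescribed API.
def weighted_hybrid_search(text_vector, video_vector, text_weight=0.5, video_weight=0.5, limit=5):
    """Blend scores from a text-embedding search and a video-embedding search.

    Both input vectors are assumed to come from the Embed API
    (Marengo-retrieval-2.6), so they live in the same space as the stored
    video embeddings.
    """
    scores = {}
    for vector, weight in ((text_vector, text_weight), (video_vector, video_weight)):
        results = milvus_client.search(
            collection_name=COLLECTION_NAME,
            data=[vector],
            limit=limit,
            output_fields=["video_url", "start_offset_sec", "end_offset_sec"],
        )
        for hit in results[0]:
            entry = scores.setdefault(hit["id"], {"score": 0.0, "entity": hit["entity"]})
            # With the COSINE metric, a larger score means a closer match,
            # so a weighted sum of scores is a reasonable way to combine them.
            entry["score"] += weight * hit["distance"]

    # Return the top results ranked by combined score
    return sorted(scores.items(), key=lambda kv: kv[1]["score"], reverse=True)[:limit]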