Interactive content
The applications on this page use the Twelve Labs Video Understanding Platform to make content more interactive. This added interactivity can lead to higher engagement rates, a crucial metric in successful marketing campaigns.
Shadow Clone
Summary: The "Shadow Clone" application uses a chatbot powered by the Twelve Labs Video Understanding Platform to enhance interaction between content creators and their audience.
Description: The application builds a virtual engagement pipeline that enables two-way communication between content creators and their followers, fostering improved engagement. A chatbot powered by the Twelve Labs Video Understanding Platform summarizes videos, transforming the audience from passive viewers into active participants. The application was developed by Vaibhav Chhajed.
GitHub: Shadow Clone
Demo: Streamlit application
Integration with Twelve Labs
The /generate endpoint generates open-ended texts based on your prompts, including, but not limited to, tables of content, action items, memos, reports, and comprehensive analyses. The example code below shows how the application generates these texts:
import requests
import streamlit as st

# twelve_labs_key is assumed to be defined earlier in the application.
if prompt := st.chat_input("What is up?"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        url = "https://api.twelvelabs.io/v1.2/generate"
        payload = {
            "video_id": st.session_state["video_id"],
            "prompt": prompt
        }
        headers = {
            "accept": "application/json",
            "x-api-key": twelve_labs_key,
            "Content-Type": "application/json"
        }
        response = requests.post(url, json=payload, headers=headers)
        st.markdown(response.json().get("data"))
        st.session_state.messages.append(
            {"role": "assistant", "content": response.json().get("data")}
        )
VideoBrain
Summary: VideoBrain is an AI chatbot that enhances video content accessibility through semantic search, video summarization, and a Q&A chatbot feature.
Description: The application is an AI chatbot designed for summarizing, searching, and conducting conversations based on your video content. This tool is helpful for content owners, as it enables them to upload their video libraries to a personalized space, enhancing accessibility for their audience. The application was developed by Konrad Gnat.
Key features include:
- Semantic Search: Users can execute semantic searches across a content creator's entire video library, locating specific topics or discussions within videos. This type of search interprets the meaning of your search queries, focusing on relevance rather than just matching your exact search terms to video content.
- Video Summarization: VideoBrain offers concise summaries of videos, emphasizing main points and highlights, ideal for users seeking a quick understanding of a video's content.
- Q&A Chatbot: This feature allows users to ask questions and receive answers based on the video content. The chatbot uses the voice of the YouTuber or the talk's speaker, fostering a personalized and engaging experience.
GitHub: VideoBrain
Integration with Twelve Labs
Highlights are lists of the most important events within a video, presented in the order they occur. The example code below shows how the application invokes the /summarize endpoint to generate highlights for videos:
/** Call the /summarize endpoint to generate highlights for a video. */
static async generateSummary(data, videoId) {
  data["video_id"] = videoId;
  const config = {
    method: "POST",
    url: `${API_URL}/summarize`,
    headers: this.headers,
    data: data,
  };
  try {
    const response = await axios.request(config);
    return response.data;
  } catch (error) {
    console.error(error);
  }
}
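Note that the type field of the request body determines what the /summarize endpoint returns. For reference, a minimal Python sketch of the same call requesting highlights might look like this, assuming the v1.2 REST API; the video ID and API key are placeholders:

import requests

API_URL = "https://api.twelvelabs.io/v1.2"

response = requests.post(
    f"{API_URL}/summarize",
    headers={"x-api-key": "your-api-key", "Content-Type": "application/json"},
    json={"video_id": "your-video-id", "type": "highlight"},
)
print(response.json())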
The application utilizes the /search endpoint to search across all the videos in a specific index:
const searchVideo = async () => {
  const SEARCH_URL = `${API_URL}/search`;
  const data = JSON.stringify({
    query: searchQuery,
    index_id: INDEX_ID,
    search_options: ["visual"],
  });
  const config = {
    method: "post",
    url: SEARCH_URL,
    headers: headers, // the x-api-key header is required when calling the API directly
    data: data,
  };
  const resp = await axios(config);
  const response = resp.data;
  console.log(`Status code: ${resp.status}`);
  console.log(response);
};
13 and Up Labs
Summary: The "13 and Up" application utilizes the Twelve Labs Video Understanding Platform to create family-friendly versions of videos.
Description: The application uses a process comprising the following main steps, sketched in code after the list:
- Invoke the /indexes/{index-id}/videos/{video-id}/transcription endpoint to generate a transcript of the video.
- Identify any profane language in the transcript.
- Retrieve the timestamps for these segments using the /search endpoint.
- Replace sentences containing profane language with PG-rated alternatives using voiceover technology, making the video suitable for all audiences.
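A rough Python sketch of the first three steps might look like the following. It assumes the v1.2 REST API; PROFANE_WORDS, index_id, and video_id are hypothetical placeholders, and the voiceover replacement step is omitted:

import requests

API_URL = "https://api.twelvelabs.io/v1.2"
headers = {"x-api-key": "your-api-key"}

index_id = "your-index-id"
video_id = "your-video-id"
PROFANE_WORDS = set()  # hypothetical word list maintained by the application

# Step 1: generate a transcript of the video.
# Each segment is assumed to contain start, end, and value fields.
transcript = requests.get(
    f"{API_URL}/indexes/{index_id}/videos/{video_id}/transcription",
    headers=headers,
).json().get("data", [])

# Step 2: flag transcript segments containing profane language.
flagged = [
    segment for segment in transcript
    if any(word in segment["value"].lower() for word in PROFANE_WORDS)
]

# Step 3: retrieve timestamps for each flagged sentence via /search.
for segment in flagged:
    matches = requests.post(
        f"{API_URL}/search",
        headers=headers,
        json={
            "index_id": index_id,
            "query": segment["value"],
            "search_options": ["conversation"],
        },
    ).json().get("data", [])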
The application was developed by Shai Unterslak and Zach Eisenhauer.
Demo: Thirteen and Up
Time Heist
Summary: The "Time Heist" application combines the Twelve Labs Video Understanding Platform and videogammetry to allow you to search videos, select objects, and create detailed AI-generated 3D models.
Description: The application utilizes an innovative approach to 3D modeling. Using the Twelve Labs Video Understanding Platform allows you to search through videos to identify and select objects. You can then generate AI 3D models of these objects, which you can refine into detailed 3D meshes using videogammetry. The application, built with a React front-end and integrated with Unity for iOS and Android, is designed for ease of use and accessibility. The application was developed by Yosun Chang.
Demo: Time Heist
Groq Pot
Summary: Groq Pot simplifies cooking by extracting clear, concise recipe information from videos. It gives you direct access to ingredients and instructions, cutting out the unnecessary footage.
Description: Groq Pot simplifies recipe discovery by processing YouTube cooking videos.
Key features include:
- Extracts recipes from YouTube video links
- Customizes meals based on calories and dietary restrictions
- Saves recipes for future reference
- Enables recipe searches and modifications
- Provides clear, concise recipe information from video content
Groq Pot utilizes:
- TwelveLabs Python SDK to process video content into structured JSON
- FastAPI for backend operations (see the sketch after this list)
- Groq for natural language processing
- Langchain for implementing a Retrieval-Augmented Generation (RAG) system
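As an illustration of the FastAPI piece, a minimal route could wrap the recipe-extraction helper shown under "Integration with Twelve Labs" below. This is a hypothetical sketch, not the application's actual backend:

import json

from fastapi import FastAPI

app = FastAPI()

@app.get("/recipes/{video_id}")
def get_recipe(video_id: str):
    # twelvelabs_generate_text_from_video is defined in the next section.
    raw = twelvelabs_generate_text_from_video(video_id)
    return json.loads(raw)  # FastAPI serializes the dict back to JSON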
The application was developed by Alexander Castillo and David Nelson.
GitHub: ai-tinkerers-hackathon
Integration with Twelve Labs
The twelvelabs_generate_text_from_video function uses a specific prompt to extract ingredients and instructions from a video. It returns data in a predefined JSON format, including the recipe title, dish name, a list of ingredients with amounts and units, and sequentially ordered instructions.
import os

from twelvelabs import TwelveLabs

client = TwelveLabs(api_key=os.environ["TWELVE_LABS_API_KEY"])

def twelvelabs_generate_text_from_video(video_id):
    # Stream the model's response and print each chunk as it arrives.
    text_stream = client.generate.text_stream(
        video_id=video_id,
        prompt="""
        Extract the ingredients and the instructions from this video.
        Make sure instructions are in sequential order from start to finish.
        Use this json format as Structured Data.
        {
            "title": "title",
            "dishName": "dishName",
            "ingredients": [
                {
                    "item": "item",
                    "amount": "2",
                    "unit": "unit"
                }
            ],
            "instructions": [
                {
                    "number": "1",
                    "instruction": "instruction"
                }
            ]
        }
        If there is no information leave blank.
        DO NOT RETURN ANYTHING OTHER THAN THE JSON."""
    )
    for text in text_stream:
        print(text)
    return text_stream.aggregated_text
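Because the prompt instructs the model to return only JSON, the aggregated text can be parsed directly. The short usage sketch below is illustrative; the video ID is a placeholder, and malformed output is guarded against:

import json

raw = twelvelabs_generate_text_from_video("your-video-id")
try:
    recipe = json.loads(raw)
except json.JSONDecodeError:
    recipe = {}  # the model may occasionally return non-JSON text

for step in recipe.get("instructions", []):
    print(f'{step["number"]}. {step["instruction"]}')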