Interactive content

The example projects on this page use the TwelveLabs Video Understanding Platform for social and public good. They demonstrate how multimodal AI can drive positive change.

Israel Palestine Video Understanding

Summary: The “Israel Palestine Video Understanding” application addresses misinformation and promotes empathy regarding the Israel-Palestine conflict.

Description: The application aggregates and summarizes content from YouTube and Reddit, presenting diverse viewpoints on the issue. These summaries, covering a range of opinions, are then visualized using an algorithm similar to t-SNE, offering a comprehensive understanding of the conflict’s various perspectives. The application was developed by Sasha Sheng.
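
For illustration, here is a minimal sketch of that visualization step, assuming the summaries are embedded with a sentence-transformers model and projected with scikit-learn's t-SNE implementation; the model name and libraries are assumptions, and the original project may use different tooling.

# Minimal sketch: embed summary texts and project them to 2-D with t-SNE (tooling assumed).
from sentence_transformers import SentenceTransformer
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

summaries = ["Pro-Israel commentary ...", "Pro-Palestine protest footage ...", "Neutral news report ..."]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(summaries)

# Perplexity must stay below the number of samples for t-SNE to run.
points = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(embeddings)

plt.scatter(points[:, 0], points[:, 1])
for (x, y), text in zip(points, summaries):
    plt.annotate(text[:20], (x, y))
plt.show()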

GitHub repo: Israel Palestine Video Understanding

Integration with TwelveLabs

This application invokes the /summarize endpoint to create summaries for videos based on their content, specifically focusing on their stance regarding the Israel-Palestine conflict and the level of violence depicted:

import csv
import requests

# API_URL, API_KEY, and the output TSV path `filename` are assumed to be defined elsewhere in the script.

def generate_summary(videoID, videoID_to_filename):
    SUMMARIZE_URL = f"{API_URL}/summarize"
    headers = {
        "x-api-key": API_KEY
    }

    data = {
        "video_id": videoID,
        "type": "summary",
        "prompt": "Summarize if this video is pro-israel or pro-palestine or else and how violent it is."
    }

    response = requests.post(SUMMARIZE_URL, headers=headers, json=data)
    print(f"{videoID}: status code - {response.status_code}")

    summary_data = response.json()
    print(summary_data)

    # Append the summary and the video's metadata to a tab-separated file.
    with open(filename, 'a') as f:
        writer = csv.writer(f, delimiter='\t')
        writer.writerow([videoID, videoID_to_filename[videoID][0], videoID_to_filename[videoID][1], summary_data.get('summary')])
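
A hypothetical usage example follows, assuming videoID_to_filename maps each video ID to a pair of values (here a filename and a source URL) and that filename names the output TSV file; both assumptions are for illustration only.

# Hypothetical usage: summarize every indexed video listed in the mapping.
filename = "summaries.tsv"  # output path expected by generate_summary (assumed)
videoID_to_filename = {
    "6545f931195730422cc38329": ("example_clip.mp4", "https://www.youtube.com/watch?v=example"),  # assumed (filename, source URL) pair
}

for videoID in videoID_to_filename:
    generate_summary(videoID, videoID_to_filename)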

Accelerate SF Notifications

Summary: The “Accelerate SF Notifications” application simplifies public hearings for residents and special interest groups, particularly those focused on San Francisco housing developments.

Description: The application addresses the challenge of keeping up with numerous and lengthy public hearings, where the critical issue is identifying relevant discussions without watching entire meetings. The application was developed by Rahul Pal, Lloyd Chang, and Haonan Chen.

Key features include:

  • Data scraping: Extracts information from public agendas, live-streamed hearings, and sources such as San Francisco Gov TV.
  • Issue tracking: Uses algorithms to pinpoint and extract discussions about housing projects and specific issues within hearings (see the sketch after this list).
  • Automated notifications: Sends real-time alerts.
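
As referenced in the list above, here is a minimal sketch of the issue-tracking idea, assuming the hearings are already indexed and searched through the platform's /search endpoint, and that BASE_URL, api_key, and INDEX_ID are defined as in the snippets below; the query text and helper name are assumptions rather than the project's actual implementation.

import requests

def find_housing_discussions(query="discussion of a new housing development approval"):
    """Search indexed hearings for segments matching a housing-related query (hypothetical helper)."""
    data = {"query": query, "index_id": INDEX_ID, "search_options": ["visual", "conversation"]}
    response = requests.post(f"{BASE_URL}/search", json=data, headers={"x-api-key": api_key})
    segments = response.json().get("data", [])
    # Each match includes the video ID and the start/end offsets of the relevant segment,
    # which is enough to build a notification pointing residents to the right moment.
    return [{"video_id": s["video_id"], "start": s["start"], "end": s["end"], "score": s["score"]} for s in segments]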

GitHub repo: Accelerate SF Notifications

Integration with TwelveLabs

The application uses the /summarize endpoint to perform the following main functions: summarize videos and generate lists of chapters.

  • A summary encapsulates the key points of a video clearly. The code below shows how the application generates summaries:

    data = {
        "video_id": "6545f931195730422cc38329",
        "type": "summary"
    }

    # Send request
    response = requests.post(f"{BASE_URL}/summarize", json=data, headers={"x-api-key": api_key})
  • A list of chapters provides a chronological breakdown of all the parts in a video. The following code shows how the application generates lists of chapters:

    data = {
        "video_id": "6545f931195730422cc38329",
        "type": "chapter"
    }

    # Send request
    response = requests.post(f"{BASE_URL}/summarize", json=data, headers={"x-api-key": api_key})

The /gist endpoint generates quick breakdowns of a video's essence in the form of titles, topics, and hashtags. The following code shows how the application invokes this endpoint:

data = {
    "video_id": "6545f931195730422cc38329",
    "types": [
        "title",
        "hashtag",
        "topic"
    ]
}

# Send request
response = requests.post(f"{BASE_URL}/gist", json=data, headers={"x-api-key": api_key})
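
A brief follow-up sketch shows how the generated fields might be read from the response; the field names assume the platform's documented /gist response shape and should be checked against the API version in use.

# Read the generated title, topics, and hashtags from the /gist response (field names assumed).
gist = response.json()
print("Title:", gist.get("title"))
print("Topics:", gist.get("topics"))
print("Hashtags:", gist.get("hashtags"))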

Deep Green

Summary: The “Deep Green” application uses the TwelveLabs Video Understanding Platform to accurately detect and map ocean trash using aerial and satellite imagery.

Description: The application offers a solution to the problem of plastic pollution in the oceans. It detects different types of ocean trash with over 90% accuracy and can scan over 500 hours of video daily. Trash is timestamped and geographically pinpointed, allowing easy data analysis and export. The application was developed by Shalini Ananda and Hans Walker.

GitHub: Deep Green

Integration with TwelveLabs

The search_trash function searches for videos containing specific types of trash, returning a list of such videos with key information about each:

import requests

# API_URL, INDEX_ID, and the Get_Video_Metadata helper are defined elsewhere in the project.

def search_trash(query, API_KEY):

    data = {
        "query": query,
        "index_id": INDEX_ID,
        "search_options": ["visual"]
    }

    response = requests.post(f"{API_URL}/search", headers={"x-api-key": API_KEY}, json=data)

    response = response.json()

    results = []

    # Collect the thumbnail and relevant data for each matching video
    for i in range(len(response['data'])):
        score = response['data'][i]['score']
        video_id = response['data'][i]['video_id']
        video_location = Get_Video_Metadata(response['data'][i]['video_id'], API_KEY)['Location Type']
        thumbnail_url = response['data'][i]['thumbnail_url']
        results.append({"score": score, "video_location": video_location, "video_id": video_id, "thumbnail_url": thumbnail_url})

    return results
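
The search_trash function relies on a Get_Video_Metadata helper that is not shown above. Below is a minimal sketch of what it might look like, assuming the location type was stored on the video with the PUT request shown in the classify_latest_video function further down; the repository's actual implementation may differ.

def Get_Video_Metadata(video_id, API_KEY):
    """Retrieve the user-defined metadata (for example, 'Location Type') stored on a video (hypothetical sketch)."""
    url = f"{API_URL}/indexes/{INDEX_ID}/videos/{video_id}"
    response = requests.get(url, headers={"x-api-key": API_KEY})
    # The 'metadata' field holds the key-value pairs written by the PUT request shown later.
    return response.json().get("metadata", {})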

The search_video_single function finds specific content within a single video:

def search_video_single(video_id, query, API_KEY):

    headers = {
        "accept": "application/json",
        "x-api-key": API_KEY,
        "Content-Type": "application/json"}

    data = {
        "query": query,
        "search_options": ["visual", "conversation", "text_in_video", "logo"],
        "threshold": "high",
        "filter": { "id": [video_id] },
        "index_id": INDEX_ID }

    response = requests.post(f"{API_URL}/search", headers=headers, json=data)

    response_data = response.json()['data']

    results = []

    # Record each matching segment with its score and time range
    for i in range(len(response_data)):
        results.append({"score": response_data[i]['score'],
                        'start_time': response_data[i]['start'],
                        'end_time': response_data[i]['end']})

    return results
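
For illustration, a hypothetical way to combine the two helpers, assuming an index of drone footage already exists; the query strings are examples, not the project's actual queries.

# Hypothetical usage of the two search helpers defined above.
candidates = search_trash("floating plastic debris", API_KEY)
for video in candidates:
    segments = search_video_single(video["video_id"], "plastic bottles on the water surface", API_KEY)
    for segment in segments:
        print(f"{video['video_id']}: trash from {segment['start_time']}s to {segment['end_time']}s (score {segment['score']})")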

The classify_latest_video function classifies videos into specific environmental categories:

# time, sys, requests, get_video_list, API_URL, and INDEX_ID are defined or imported elsewhere in the project.

def classify_latest_video(id, file_name, API_KEY):
    classify_url = f"{API_URL}/classify"
    file_name = file_name.split('.')[0]

    video_list = get_video_list(API_KEY)
    time_initiated = time.time()
    video_uploaded = True
    video_index = 0
    # Wait until the uploaded video appears in the index, polling once per minute
    for i, next_video in enumerate(video_list):
        if next_video['metadata']['filename'] == file_name:
            video_uploaded = False
            video_index = i
            break
    while video_uploaded:
        time.sleep(60)
        video_list = get_video_list(API_KEY)
        for i, next_video in enumerate(video_list):
            print(next_video['metadata']['filename'], " ", file_name)
            if next_video['metadata']['filename'] == file_name:
                video_uploaded = False
                video_index = i
                break

    id = video_list[video_index]["_id"]

    meta_url = f"{API_URL}/indexes/{INDEX_ID}/videos/{id}"

    print("\n\nStarting Metadata", time.time() - time_initiated, "\n\n", file=sys.stderr)
    payload = {
        "page_limit": 10,
        "include_clips": False,
        "threshold": {
            "min_video_score": 15,
            "min_clip_score": 15,
            "min_duration_ratio": 0.5
        },
        "show_detailed_score": False,
        "options": ["conversation"],
        "conversation_option": "semantic",
        "classes": [
            {
                "prompts": ["This video is taken in an urban environment", "This means a dense environment", "Lots of people, cars and buildings"],
                "options": ["visual"],
                "conversation_option": "semantic",
                "name": "Urban"
            },
            {
                "prompts": ["This video is taken in a suburban environment", "There should be buildings, roads", "Everything should be a lot more spread out", "The majority of the space should be developed"],
                "options": ["visual"],
                "conversation_option": "semantic",
                "name": "Suburban"
            },
            {
                "prompts": ["This video was taken in a rural environment", "There shouldn't be a ton of human development", "Buildings should be extremely spread out", "Should mostly be nature", "Very few humans around"],
                "options": ["visual"],
                "conversation_option": "semantic",
                "name": "Rural"
            }
        ],
        "video_ids": [id]
    }
    headers = {
        "accept": "application/json",
        "x-api-key": API_KEY,
        "Content-Type": "application/json"
    }

    # Classify the video as Urban, Suburban, or Rural
    response = requests.post(classify_url, json=payload, headers=headers)
    response = response.json()

    print(response, file=sys.stderr)
    video_class = response['data'][0]['classes'][0]['name']

    # Store the result as metadata on the video so later searches can surface it
    payload = { "metadata": { "Location Type": video_class } }
    headers = {
        "accept": "application/json",
        "x-api-key": API_KEY,
        "Content-Type": "application/json"
    }

    response = requests.put(meta_url, json=payload, headers=headers)

RememberMe - Dementia Assistant

Summary: The project addresses the critical challenge of assisting individuals with dementia in retaining their independence and enhancing their quality of life.

Description: The application is a comprehensive digital support system with a home screen displaying the current date, important reminders, and action buttons. It also has a chatbot that users can use to ask questions about their lives. The application collects data such as video, audio, and personal notes, and it uses the TwelveLabs Video Understanding Platform to convert multimedia information into text that can be organized and stored in the chatbot's database. The objective is to provide a seamless and intuitive platform that enables users to recall important details about their lives, manage daily tasks, and maintain connections with people and places that matter to them. The application was developed by Tatiane Wu Li, Pedro Goncalves de Paiva, Aleksei (Alex) Korablev, and Na Le.

GitHub: RememberMe.

Presentation: RememberMe.

Integration with TwelveLabs

The submit_video_for_processing function uploads a video to the platform by invoking the POST method of the /tasks/external-provider endpoint. Upon receiving the response, the function processes it to determine the outcome. If the upload is successful, the function returns the unique identifier of the submitted task. In case of an error, the function returns an error message that details the specific reason for the failure. This helps developers identify and resolve any issues with the video upload process.

import requests
from pprint import pprint

# Constants
API_URL = "https://api.twelvelabs.io/v1.2"
API_KEY = "<YOUR_API_KEY>"
INDEX_ID = "<YOUR_INDEX_ID>"  # Replace with your actual index ID obtained from creating an index

# Function to submit a video URL for processing by an external provider
def submit_video_for_processing(video_url):
    """Submit a video URL to an external processing service and return the task ID."""
    TASKS_URL = f"{API_URL}/tasks/external-provider"
    headers = {"x-api-key": API_KEY}
    data = {"index_id": INDEX_ID, "url": video_url}
    response = requests.post(TASKS_URL, headers=headers, json=data)
    if response.status_code == 201:
        task_id = response.json().get("_id")
        print(f"Task submitted successfully. Task ID: {task_id}")
        return task_id
    else:
        print(f"Failed to submit task: {response.status_code}")
        pprint(response.json())
        return None

# Example usage
video_url = "https://www.youtube.com/watch?v=TLwhqmf4Td4&ab_channel=RGSACHIN"
task_id = submit_video_for_processing(video_url)
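
Once the task ID is returned, the application must wait for indexing to finish before it can generate a summary. Below is a minimal polling sketch, assuming the task status is read with a GET request to the /tasks/{task_id} endpoint; the helper name and polling interval are assumptions.

import time

def wait_for_task(task_id, interval=30):
    """Poll the task until indexing finishes and return the indexed video's ID (hypothetical helper)."""
    while True:
        response = requests.get(f"{API_URL}/tasks/{task_id}", headers={"x-api-key": API_KEY})
        task = response.json()
        status = task.get("status")
        print(f"Task {task_id}: {status}")
        if status == "ready":
            return task.get("video_id")
        if status == "failed":
            return None
        time.sleep(interval)

# Example usage: wait for indexing to finish, then pass the video ID to get_video_summary below
if task_id:
    video_id = wait_for_task(task_id)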

The get_video_summary function takes the unique identifier of a video as a parameter and invokes the POST method of the /generate endpoint to summarize it. If successful, it returns the generated summary; otherwise, it prints an error message and returns None.

def get_video_summary(video_id):
    GENERATE_URL = f"{API_URL}/generate"  # Define the URL to generate the summary
    headers = {"x-api-key": API_KEY}  # Authenticate with the same API key as above
    data = {"video_id": video_id, "prompt": "Make a summary"}  # Set up the data payload
    response = requests.post(GENERATE_URL, headers=headers, json=data)  # Make the POST request
    if response.status_code == 200:
        summary = response.json().get('data')  # Get the summary data from the response
        print("Video summary generated successfully.")
        return summary  # Return the summary
    else:
        print(f"Failed to generate summary: {response.status_code}")  # Print failure message
        pprint(response.json())
        return None  # Return None if summary generation fails

CamSense AI

Summary: “CamSense AI” is an AI-powered application that assesses webcam videos, providing instant insights and alerts. It uses the TwelveLabs Video Understanding Platform to analyze video content and identify significant changes or events.

Description: The application addresses the challenge of creating custom triggers based on content understanding of unattended recorded video. This solution is particularly useful in ecology, fire safety, and flood water level monitoring.

The typical workflow is as follows:

  1. The TwelveLabs Video Understanding Platform generates embeddings for the reference frame and the subsequent video clips and summarizes them.
  2. The application uses Groq to produce natural language descriptions of the differences.
  3. The application determines the significance of these differences (a sketch of one way to do this follows the list).
  4. Clips that differ significantly are logged along with their timestamps and descriptions.
  5. The process concludes with the aggregation of all logs into a final report.
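
One plausible way to determine significance (not necessarily the project's exact method) is to compare the reference embedding with each clip's embedding using cosine similarity and flag clips that fall below a threshold. The sketch below assumes the embeddings produced by the integration code later on this page; the helper name and threshold are assumptions.

import numpy as np

def significant_change(ref_emb, clip_emb, threshold=0.85):
    """Flag a clip as significantly different when its cosine similarity to the reference drops below the threshold (hypothetical helper)."""
    ref = np.asarray(ref_emb, dtype=float)
    clip = np.asarray(clip_emb, dtype=float)
    similarity = float(np.dot(ref, clip) / (np.linalg.norm(ref) * np.linalg.norm(clip)))
    return similarity < threshold, similarity

# Hypothetical usage with the ref_emb and fref_emb arrays built later on this page:
# changed, score = significant_change(ref_emb, fref_emb[0])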

The application was developed by Daniel Talero, Paul Kubie, and Todd Gardiner.

Colab notebook: hackathon.ipynb.

Integration with TwelveLabs

The code below creates a video indexing task that uploads a video to the TwelveLabs Video Understanding Platform by invoking the create method of the task object:

video_files = glob(reference_filename)  # Example: "/videos/*.mp4"

print(f"Uploading {reference_filename}")
task = client.task.create(index_id=index_obj.id, file=reference_filename, language="en")
print(f"Task id={task.id}")
print(f"Task_video_id = {task.video_id}")
ref_id = task.video_id

frame_id = []
if len(rawvids) > 0:
    for i in range(len(rawvids)):
        video_files = glob("/content/rawdata/" + str(rawvids[i]))  # Example: "/videos/*.mp4"
        print(f"Uploading {rawvids[i]}")
        task = client.task.create(index_id=index_obj.id, file="/content/rawdata/" + str(rawvids[i]), language="en")
        print(f"Task id={task.id}")
        frame_id.append(task.video_id)

The code below invokes the create method of the embed.task object to create an embedding for the reference frame:

task = client.embed.task.create(
    engine_name="Marengo-retrieval-2.6",
    video_url="https://storage.googleapis.com/lab-storage-items/sample-5s.mp4")
print(
    f"Created task: id={task.id} engine_name={task.engine_name} status={task.status}"
)


def on_task_update(task: EmbeddingsTask):
    print(f"  Status={task.status}")


status = task.wait_for_done(
    sleep_interval=5,
    callback=on_task_update
)
print(f"Embedding done: {status}")
task = client.embed.task.retrieve(task.id)
if task.video_embeddings is not None:
    for v in task.video_embeddings:
        print(
            f"embedding_scope={v.embedding_scope} start_offset_sec={v.start_offset_sec} end_offset_sec={v.end_offset_sec}"
        )
        print(f"embeddings: {', '.join([str(x) for x in v.embedding.float])}")
        ref_emb = np.array(v.embedding.float)  # keep the values numeric so they can be compared later

The code below creates embeddings for the subsequent frames:

fref_emb = []
for i in range(len(rawvids)):

    task = client.embed.task.create(
        engine_name="Marengo-retrieval-2.6",
        video_url="https://storage.googleapis.com/lab-storage-items/sample-5s.mp4")
    print(
        f"Created task: id={task.id} engine_name={task.engine_name} status={task.status}"
    )

    status = task.wait_for_done(
        sleep_interval=5,
        callback=on_task_update
    )
    print(f"Embedding done: {status}")
    task = client.embed.task.retrieve(task.id)
    if task.video_embeddings is not None:
        for v in task.video_embeddings:
            print(
                f"embedding_scope={v.embedding_scope} start_offset_sec={v.start_offset_sec} end_offset_sec={v.end_offset_sec}"
            )
            print(f"embeddings: {', '.join([str(x) for x in v.embedding.float])}")
            fref_emb.append(np.array(v.embedding.float))  # keep the values numeric so they can be compared later

The code below invokes the summarize method of the generate object to summarize the reference frame:

res = client.generate.summarize(ref_id, type='summary', prompt="In a detailed way, describe this video clip.")

The code below summarizes each subsequent video and stores the results in a list:

fres = []
fres_emb = []
for i in range(len(rawvids)):
    res2 = client.generate.summarize(frame_id[i], type='summary', prompt="In a detailed way, describe this video clip.")
    fres.append(res2)
    fres_emb.append(model.encode(res2.summary))  # model is a text-embedding model defined elsewhere in the notebook
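
To tie the pieces together, here is a minimal sketch of the logging and report-aggregation steps described in the workflow above. It assumes the significant_change helper sketched earlier on this page; the report format and threshold are assumptions, not the notebook's exact code.

# Hypothetical aggregation of the per-clip comparisons into a final report.
report_lines = []
for i in range(len(rawvids)):
    changed, score = significant_change(ref_emb, fref_emb[i])  # helper sketched earlier (assumed)
    if changed:
        report_lines.append(f"{rawvids[i]}: similarity {score:.3f} - {fres[i].summary}")

print("Final report:")
print("\n".join(report_lines) if report_lines else "No significant changes detected.")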