The example projects on this page demonstrate using the TwelveLabs Video Understanding Platform to enhance educational experiences. These projects address complex topics in education and pave the way for the future of AI-powered e-learning solutions.

AI’m Right

Summary: AI’m Right is an educational application developed by Muhammad Adil Fayya during SBHacks’25 at the University of California, Santa Barbara. It leverages artificial intelligence to streamline studying through quiz generation, lecture summarization, and video content search capabilities.

Description: The application integrates multiple AI services:

  • TwelveLabs for video understanding
  • Claude for content processing
  • Aryn for additional functionality

These services are integrated through a Flask backend that manages the processing operations and a Streamlit frontend that provides the user interface.
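
As a rough sketch of that split, the Flask backend can expose an HTTP endpoint that the Streamlit frontend calls; the route name and the run_video_search helper below are illustrative assumptions, not the project's actual code.

Python
# Minimal sketch of the Flask + Streamlit split; the /search route and
# run_video_search helper are hypothetical, not taken from the project.

# --- Flask backend (backend.py) ---
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/search", methods=["POST"])
def search():
    query = request.json["query"]
    results = run_video_search(query)  # Hypothetical helper wrapping TwelveLabs search
    return jsonify(results)

# --- Streamlit frontend (app.py) ---
import requests
import streamlit as st

user_query = st.text_input("Search your lectures")
if st.button("Search") and user_query:
    resp = requests.post("http://localhost:5000/search", json={"query": user_query})
    st.json(resp.json())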

GitHub repo: darshanrao/RAG-Enhanced-Learning-Assistant

Integration with TwelveLabs

AI’m Right integrates with the TwelveLabs Video Understanding Platform for searching and retrieving video content, enabling you to locate specific topics or concepts within your recorded lectures without manually scanning through entire videos.

The query function uses the TwelveLabs Python SDK to perform one or more text queries using both the visual and audio search options. For each query, it retrieves paginated search results that include video IDs, confidence scores, and timestamp information. The function can optionally save these results to a JSON file and returns them as a nested list of dictionaries.

Python
def query(self, query_text_list, file_name=None):
    all_results = []  # List of lists of dictionaries

    for query_text in query_text_list:
        # Run a combined visual and audio search for this query
        search_results = self.client.search.query(
            index_id=os.getenv("INDEX_ID"),
            query_text=query_text,
            options=["visual", "audio"]
        )

        query_result = []  # List of dictionaries for this query
        while True:
            try:
                # Retrieve each page of search results
                page_data = next(search_results)
                for clip in page_data:
                    query_result.append({
                        "video_id": clip.video_id,
                        "score": clip.score,
                        "start": clip.start,
                        "end": clip.end,
                        "confidence": clip.confidence
                    })
            except StopIteration:
                break
        all_results.append(query_result)  # Append this query's results as a list

    if file_name:
        self.save_json(all_results, file_name)

    return all_results  # List of lists of dictionaries
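
For example, a caller could search for two lecture topics at once and persist the results to disk; the class name below is a hypothetical stand-in for the project's wrapper class.

Python
# Hypothetical usage; LectureSearch stands in for the project's wrapper class.
assistant = LectureSearch()
results = assistant.query(
    ["gradient descent", "backpropagation"],
    file_name="search_results.json",
)
for clip in results[0]:  # Clips matching the first query
    print(clip["video_id"], clip["start"], clip["end"], clip["score"])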

The get_video_info function retrieves detailed information about a specific video using its ID. It returns the complete video metadata, including the HLS streaming URL.

Python
def get_video_info(video_id):
    api_key = os.getenv("TWELVE_LABS_KEY")
    index_id = os.getenv("INDEX_ID")
    url = f"https://api.twelvelabs.io/v1.3/indexes/{index_id}/videos/{video_id}"
    headers = {
        "x-api-key": api_key,
        "Content-Type": "application/json",
    }

    response = requests.get(url, headers=headers)

    if response.status_code == 200:
        video_info = response.json()  # Parse the response as JSON
        return video_info
    elif response.status_code == 400:
        print("The request has failed. Check your parameters.")
        return None
    else:
        print(f"Error: Status code {response.status_code}")
        return None
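
For instance, the HLS URL can be pulled out of the returned metadata to stream a matched clip; the hls.video_url field below is an assumption based on the v1.3 retrieve-video response, and the video ID is a placeholder.

Python
# Hypothetical usage; the video ID is a placeholder, and the hls.video_url
# field is an assumption based on the v1.3 retrieve-video response.
video_info = get_video_info("<YOUR_VIDEO_ID>")
if video_info:
    hls_url = video_info.get("hls", {}).get("video_url")
    print(f"Stream the video at: {hls_url}")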

NeuroLearn

Summary: NeuroLearn is an AI-based learning platform that integrates neurofeedback to personalize education by creating custom lecture snippets tailored to each student.

Description: The application analyzes brainwave data to select and emphasize parts of lectures that match the student’s educational objectives. This approach keeps students engaged and improves their learning results. Developed by Akhil Dhavala, Ayush Khandelwal, Jackson Mowatt Gok, and Jacky Wong, NeuroLearn won 1st Place at the TEDAI Multimodal Hackathon (23 Labs) in San Francisco.

GitHub repo: NeuroLearn

Integration with TwelveLabs

NeuroLearn uses the TwelveLabs Video Understanding Platform to create custom snippets from lectures tailored to each student’s knowledge, background, and interests.

The code below indexes videos from YouTube and generates a summary and highlights for each video:

Python
def __init__(self):
    self.api_url = os.getenv("TWELVE_LABS_BASE_URL")
    self.headers = {"x-api-key": os.getenv("TWELVE_LABS_API_KEY")}

def index_youtube_video(self, index_id: str, youtube_url: str):
    task_url = f"{self.api_url}/tasks/external-provider"
    data = {
        "index_id": index_id,
        "url": youtube_url,
    }
    response = requests.post(task_url, headers=self.headers, json=data)
    # The video ID is available in the response under video_id
    return response.json()

def highlight_video(self, body: HighlightVideoBody) -> HighlightVideoResponse:
    url = f"{self.api_url}/summarize"
    data = body.dict()
    response = requests.post(url, headers=self.headers, json=data)
    return HighlightVideoResponse(**response.json())

def summarize(self, body: HighlightVideoBody) -> HighlightVideoResponse:
    url = f"{self.api_url}/summarize"
    data = body.dict()
    response = requests.post(url, headers=self.headers, json=data)
    return HighlightVideoResponse(**response.json())
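
A call sequence could look like the sketch below; the client class name and the HighlightVideoBody fields are assumptions inferred from the methods above, not the project's actual types.

Python
# Hypothetical usage; TwelveLabsService and the HighlightVideoBody fields
# are assumptions based on the methods shown above.
service = TwelveLabsService()
task = service.index_youtube_video(
    index_id="<YOUR_INDEX_ID>",
    youtube_url="https://www.youtube.com/watch?v=<VIDEO>",
)
body = HighlightVideoBody(video_id=task["video_id"], type="highlight")
highlights = service.highlight_video(body)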

42 Labs - Personalized Podcast Builder

Summary: The 42 Labs Personalized Podcast Builder transforms how you learn on the go by synthesizing high-quality content from diverse sources like TED talks, podcasts, and articles into customized podcasts.

Description: The application analyzes your preferences, including topics of interest and proficiency levels, to curate content in various languages. You can interactively refine your learning experience by selecting subtopics and providing feedback, enhancing the application’s ability to offer relevant material. The application not only overcomes language and learning barriers but also holds the potential to evolve into a tool for creating personalized educational videos. The application was developed by Shivani Poddar, David Salib, Varun Theja, and Everett Knag.

GitHub:

Integration with TwelveLabs

The application uses the /classify/bulk endpoint to identify videos relevant to a specified topic:

Python
def classify_videos(index_id, sub_topic, api_key):
    url = "https://api.twelvelabs.io/v1.2"  # API base URL
    CLASSIFY_BULK_URL = f"{url}/classify/bulk"

    data = {
        "options": ["conversation", "text_in_video"],
        "index_id": index_id,
        "classes": [{
            "name": sub_topic,
            "prompts": [sub_topic],
        }],
    }
    headers = {
        "accept": "application/json",
        "Content-Type": "application/json",
        "x-api-key": api_key,
    }

    response = requests.post(CLASSIFY_BULK_URL, headers=headers, json=data)
    print(f"Status code: {response.status_code}")
    print(response.json())
    return response.json()["data"]

For each relevant video, the application invokes the /summarize endpoint to summarize videos as lists of chapters:

Python
def summarize_video(video_id, api_key):
    payload = {
        "type": "chapter",
        "video_id": video_id,
    }
    headers = {
        "accept": "application/json",
        "x-api-key": api_key,
        "Content-Type": "application/json",
    }

    url = "https://api.twelvelabs.io/v1.2/summarize"
    response = requests.post(url, json=payload, headers=headers)

    print(response.text)
    chapter_summaries = [x["chapter_summary"] for x in response.json()["chapters"]]
    return chapter_summaries
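
Putting the two calls together, the application can walk the classification results and summarize each matching video; this chaining sketch assumes each result item carries a video_id field.

Python
# Sketch of chaining the two calls above; assumes each classification
# result carries a video_id field.
videos = classify_videos(index_id, "machine learning", api_key)
for video in videos:
    for chapter in summarize_video(video["video_id"], api_key):
        print(chapter)  # Chapter summaries feed the generated podcast script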