Sports
The applications on this page demonstrate how the Twelve Labs Video Understanding Platform enhances sports analysis and viewer engagement.
Chessboxing AI Clips
Summary: "Chessboxing AI Clips" is a web application that enables you to semantically search, preview, and compile video clips from the hybrid sport of chessboxing, showcasing the adaptability of the Twelve Labs Video Understanding Platform to unconventional sports.
Description: Developed during a Twelve Labs Hackathon, this React and TypeScript web application offers the following key features:
- Semantic video search: You can search for specific moments in chessboxing videos using natural language queries.
- Clip preview: Before selecting clips for compilation, you can preview them using the integrated video player.
- Highlight reel creation: You can select multiple clips to create custom highlight reels and add AI-generated background music.
- Shareable content: The application generates unique URLs for each highlight reel, allowing you to share your compilations on social media platforms.
The application was developed by Ray Deck and Akshay Rakheja.
GitHub: Chessboxing AI Clips
Demo: Chessboxing AI Clips
Integration with Twelve Labs
The code below iterates through a list of video files, uploading each to the Twelve Labs Video Understanding Platform if it hasn't been uploaded yet. It invokes the client.task.create function of the Twelve Labs Python SDK to create a video indexing task for each upload and monitors the task status until completion or failure:
import os

from twelvelabs import TwelveLabs
from twelvelabs.models.task import Task

# client is an initialized TwelveLabs instance; INDEX_ID, video_files, and
# existing_filenames are defined earlier in the application.
for video_file in video_files:
    video_filename = os.path.basename(video_file)
    if video_filename in existing_filenames:
        print(
            f"File '{video_filename}' is already uploaded. Skipping to the next video."
        )
        continue
    try:
        print(f"Uploading {video_filename}")
        task = client.task.create(index_id=INDEX_ID, file=video_file, language="en")
        print(f"Task id={task.id}")

        # Print the indexing status every time it changes
        def on_task_update(task: Task):
            print(f"  Status={task.status}")

        task.wait_for_done(callback=on_task_update)
        if task.status != "ready":
            raise RuntimeError(f"Indexing failed with status {task.status}")
        print(
            f"Uploaded {video_filename}. The unique identifier of your video is {task.video_id}."
        )
    except RuntimeError as e:
        print(f"Error uploading {video_filename}: {e}")
        # Continue with the next video
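The existing_filenames list referenced above can be built by listing the videos already in the index. A minimal sketch, assuming the filename is available in each video's system metadata:
# Hypothetical helper, not taken from the repository: collect the filenames of
# already-indexed videos so that re-uploads can be skipped.
existing_filenames = [
    video.metadata.filename for video in client.index.video.list(INDEX_ID)
]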
The search_twelve_labs function invokes the POST method of the /search endpoint with specific parameters and a query provided by the user:
import os

import requests

def search_twelve_labs(query):
    url = "https://api.twelvelabs.io/v1.2/search"
    payload = {
        "search_options": ["visual", "conversation"],
        "adjust_confidence_level": 0.5,
        "group_by": "clip",
        "threshold": "low",
        "sort_option": "score",
        "operator": "or",
        "conversation_option": "semantic",
        "page_limit": 50,
        "query": query,  # Use the input query here
        "index_id": os.getenv("TL_INDEX")  # Replace with your actual index ID
    }
    headers = {
        "accept": "application/json",
        "x-api-key": os.getenv("TL_API_KEY"),  # Replace with your actual API key
        "Content-Type": "application/json"
    }
    response = requests.post(url, json=payload, headers=headers)
    if response.status_code == 200:
        return response.json()
    else:
        return {"error": "Request failed", "status_code": response.status_code, "message": response.text}
Cricket Video Analyzer
Summary: "Cricket Video Analyzer" combines the capabilities of the Twelve Labs Video Understanding Platform with language models to offer comprehensive insights into a batter's performance. It focuses on analyzing Glenn Maxwell of Australia as a case study.
Description: The application has been developed to meet the growing demand for sophisticated video analysis in cricket. Key features include:
- Video understanding: Uses the Twelve Labs API to analyze cricket matches, extracting detailed information about the batter's playing style and techniques.
- Data augmentation: Integrates ball-by-ball commentary data from ESPNcricinfo APIs to enhance the video analysis with textual descriptions of the gameplay.
- Vector store creation: Builds a vector store from the combined dataset of video analysis and ball-by-ball commentary, enabling efficient information retrieval.
- AI-powered query answering: Utilizes GPT-4 in a Retrieval-Augmented Generation (RAG) setup to answer complex questions about a batter's performance and style based on the analyzed data.
GitHub: Cricket video analyzer
Demo: Cricket Batsman Analyzer
The application was developed by Raghavan Muthuregunathan.
Integration with Twelve Labs
The code below automates the process of extracting detailed batting information from multiple videos, creating a dataset for further analysis. It invokes the generate.text method of the Twelve Labs Python SDK to generate ball-by-ball commentary for Maxwell's cricket innings and compiles all the generated commentaries into a single text file named "maxwell_innings_video_analysis_concat.txt":
# client is an initialized TwelveLabs instance.
video_ids = ["66370d91d1cd5a287c957d14", "66370d91d1cd5a287c957d15", "66370d95d1cd5a287c957d16", "663715d0d1cd5a287c957d17"]
summary = ""
for id in video_ids:
    print(id)
    res = client.generate.text(
        video_id=id,
        prompt="Listen to the commentary. Generate a ball by ball commentary of how Maxwell played. For every ball, generate a description of the shot that Maxwell played. If Maxwell got out, explain how he got out.",
    )
    summary += res.data
    print(len(summary))

with open("maxwell_innings_video_analysis_concat.txt", "w") as w:
    w.write(summary)
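The description above also mentions a vector store and GPT-4 in a RAG setup. The exact stack isn't shown on this page; a minimal sketch of that retrieval step, assuming LangChain with OpenAI embeddings, could be:
# Hypothetical sketch of the RAG step; library choices are assumptions.
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

with open("maxwell_innings_video_analysis_concat.txt") as f:
    commentary = f.read()

# Chunk the concatenated commentary and index it in an in-memory vector store
chunks = [commentary[i:i + 1000] for i in range(0, len(commentary), 1000)]
store = FAISS.from_texts(chunks, OpenAIEmbeddings())

def answer(question: str) -> str:
    # Retrieve the most relevant commentary chunks and let GPT-4 answer from them
    context = "\n".join(doc.page_content for doc in store.similarity_search(question, k=4))
    llm = ChatOpenAI(model="gpt-4")
    return llm.invoke(f"Using this commentary:\n{context}\n\nAnswer: {question}").content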
Ready Player N
Summary: Ready Player N is a Streamlit-based application that helps you discover unique features in hiking trails by analyzing hiking videos using the Marengo video understanding engine.
Description: Ready Player N streamlines the discovery of special attractions along hiking trails, such as waterfalls. The application indexes a set of hiking videos, allowing you to search for relevant information. In order to improve search accuracy, the queries are refined using Groq. In response, the application provides the following:
- A thumbnail of the matching video segment.
- A description of the featured attraction.
- A link to a hiking website with additional details.
- The ability to watch the matching video segment directly within the integrated player.
The application was developed by Bryce Hirschfeld, James Beasley, Matt Florence, Drakosfire Meigs, and Nooshin Hashemi.
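No integration snippet is shown for Ready Player N on this page, but the flow it describes, refining the query with Groq and then searching the index, can be sketched as follows. The model name, prompt, and helper functions are assumptions rather than details taken from the application:
# Hypothetical sketch of the Ready Player N search flow described above.
import os

from groq import Groq
from twelvelabs import TwelveLabs

groq_client = Groq(api_key=os.getenv("GROQ_API_KEY"))
tl_client = TwelveLabs(api_key=os.getenv("TL_API_KEY"))

def refine_query(raw_query: str) -> str:
    # Ask the LLM to rewrite a free-form hiking question as a concise visual search query
    completion = groq_client.chat.completions.create(
        model="llama3-8b-8192",  # assumed model choice
        messages=[{"role": "user", "content": f"Rewrite this as a short video search query: {raw_query}"}],
    )
    return completion.choices[0].message.content

def find_trail_features(raw_query: str, index_id: str):
    refined = refine_query(raw_query)
    return tl_client.search.query(index_id=index_id, query_text=refined, options=["visual"])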
SportsSnap
Summary: SportsSnap is a browser extension that enhances the sports viewing experience by providing real-time player, team, and game information overlays. It uses the Twelve Labs Video Understanding Platform for multimodal search.
Description: The application bridges the gap between video content and real-time sports data, offering instant access to detailed information while watching game highlights or replays.
The application was developed by Axel VandenHeuvel and Navan Chauhan.
Key features include:
- Real-time information overlay on sports videos
- Player and team statistics integration
- Game situation analysis
- Multimodal search capabilities
Integration with Twelve Labs
The integration with the Twelve Labs Video Understanding Platform provides the following functionalities:
- Video understanding: Extract and analyze visual and audio information from sports content, improving the accuracy of video identification and content analysis.
- Advanced search capabilities: Perform complex queries across various data types (visual, audio, and textual) to enhance the app's ability to provide relevant information to users.
- Improved video-to-game matching: Improve the accuracy of matching videos to specific games in ESPN's database, addressing one of the main challenges faced during development.
- Enhanced user experience: Provide users with more accurate and contextually relevant information, creating a more seamless and informative viewing experience.
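SportsSnap's source isn't reproduced here, but the multimodal search it relies on can be sketched with the Python SDK. The index ID, query, and result handling below are assumptions for illustration:
# Hypothetical sketch: search indexed game footage across visual and
# conversation modalities to match a highlight clip to a game moment.
import os

from twelvelabs import TwelveLabs

client = TwelveLabs(api_key=os.getenv("TL_API_KEY"))
result = client.search.query(
    index_id=os.getenv("TL_INDEX_ID"),  # hypothetical index of game footage
    query_text="Curry hits a three-pointer at the buzzer",
    options=["visual", "conversation"],
)
for clip in result.data.root:  # same result-access pattern as the snippets below
    print(clip.video_id, clip.start, clip.end, clip.score)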
SnowRivals
Summary: SnowRivals is an AI-powered application for skiers and snowboarders that analyzes trick videos, provides improvement feedback, and aligns with competition judging criteria. The application offers detailed insights and suggestions for enhancing trick execution and scoring.
Description: SnowRivals uses Twelve Labs' video summarization and embedding technology to analyze snowboarding and skiing trick videos by extracting the following structured data:
- Trick identification
- Athlete performance
- Success and failure detection
- Timing information
The application was developed by Jeremy London, Francesco Vassalli, Neil Fonseca, and Reid Rumack.
SnowRivals was one of the winners at the AI Tinkerers Multimodal Hackathon.
GitHub: multimodal-ui and MultiModal.
Integration with Twelve Labs
The generate function uses the Generate API suite to analyze a snowboarding competition video. It sends a POST request to the /summarize endpoint with the following parameters:
- video_id: Identifier of the video to analyze.
- type: Set to "summary".
- prompt: Detailed instructions for generating a table of snowboarding tricks. It includes a predefined list of trick descriptions to aid in trick identification.
- temperature: Controls randomness in the output (set to 0.5).
If successful, the function returns a summary table containing the following:
- Athlete: Identifier for the person performing each trick
- Trick: Name of the trick performed
- Result: Success or failure of the trick
- Half: Presence of a half pipe ("Yes"/"No")
- Rail: Presence of a rail ("Yes"/"No")
If the summary generation fails, the function prints an error message with the video_id and the API response.
import json
import os

import requests

# key holds the Twelve Labs API key; here it is assumed to come from the environment.
key = os.getenv("TL_API_KEY")

def generate(video_id: str) -> str:
    BASE_URL = "https://api.twelvelabs.io/v1.2"
    trick_descriptions = '''Indy: Grabbing the toe edge of the board between the bindings with the back hand.
Mute: Grabbing the toe edge of the board between the bindings with the front hand.
Stalefish: Grabbing the heel edge of the board between the bindings with the back hand.
Melon: Grabbing the heel edge of the board between the bindings with the front hand.
Tail Grab: Grabbing the tail of the board.
Nose Grab: Grabbing the nose of the board.
Method: Grabbing the heel edge of the board with the front hand while arching the back and extending the legs.
Japan: Grabbing the toe edge of the board with the front hand while tucking the knees and rotating the board.
Seatbelt: Grabbing the nose of the board with the back hand.
Truck Driver: Grabbing the nose of the board with the front hand and the tail with the back hand simultaneously.
Rodeo: A backward flip with a 180 or 540-degree spin.
Misty: A forward flip with a 180 or 540-degree spin.
Cork: An off-axis spin, where the boarder is tilted.
Double Cork: A double off-axis spin.
Wildcat: A backflip with a rotation around the snowboarder’s side.
Jumps
Ollie: Lifting the front foot, followed by the back foot, to jump.
Nollie: Lifting the back foot, followed by the front foot, to jump.
Slides and Grinds
Boardslide: Sliding with the board perpendicular to the rail.
Lipslide: Approaching from the opposite side and sliding with the board perpendicular to the rail.
50-50: Sliding straight along a rail with the board parallel.
Nose Slide: Sliding on the nose of the board.
Tail Slide: Sliding on the tail of the board.
Blunt Slide: Sliding on the tail of the board with the nose raised.
Buttering
Nose Butter: Pressing down on the nose of the board while rotating the tail.
Tail Butter: Pressing down on the tail of the board while rotating the nose.
Nose Roll: A 180-degree rotation while buttering on the nose.
Tail Roll: A 180-degree rotation while buttering on the tail.
'''
    data = {
        "video_id": video_id,
        "type": "summary",
        "prompt": f"This video has a snowboarding competition. Use expert Generate a table that records one row per trick performed. Add one column called Athlete to label which person performed each trick. If you do not know their names just refer to them as person-1 or person-2 as they appear sequentially in the video. In a second column called Trick, name the trick. In the third column called Result note if the trick was a Success or a Failure. In a fourth column called Half say Yes if the video has a half pipe and No if it does not. In a fifth column called Rail say Yes if the video has a rail and no if it does not. Keep your response brief and do not include anything aside from the table. If you are unsure of what to call a track you may reference this vocabulary list {trick_descriptions}",
        "temperature": 0.5
    }
    response = requests.post(f"{BASE_URL}/summarize", json=data, headers={"x-api-key": key})
    response_dict = json.loads(response.text)
    if 'summary' in response_dict:
        return response_dict['summary']
    else:
        print(f"Summary for {video_id} unable to be generated. Got response {response}")
Smart Sweat AI
Summary: Smart Sweat AI is a mobile application that uses artificial intelligence to provide real-time feedback on exercise form during workouts. It uses the device's camera to analyze your movements and offer personalized corrections and improvements.
Description: The application serves as a digital personal trainer, utilizing AI technology to:
- Analyze exercise form in real-time through your phone camera
- Provide instant feedback and corrections on posture and movement
- Offer a comprehensive library of exercises
- Track your progress and personalize workout experiences
Smart Sweat AI uses the Twelve Labs Generate API suite to provide advanced video analysis and personalized fitness instruction.
The application was developed by Jesse Neumann, Hannah Neumann, and Roblynn Neumann.
Smart Sweat AI was one of the winners at the AI Tinkerers Multimodal Hackathon.
Website: Smart Sweat AI
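No integration code is shown for Smart Sweat AI on this page; a minimal sketch of form feedback via the Generate API suite, with a hypothetical prompt and video ID, might look like this:
# Hypothetical sketch: request exercise-form feedback for an indexed workout video.
import os

from twelvelabs import TwelveLabs

client = TwelveLabs(api_key=os.getenv("TL_API_KEY"))
res = client.generate.text(
    video_id="66370d91d1cd5a287c957d99",  # hypothetical video ID
    prompt="Evaluate the squat form in this video. Note posture issues and suggest corrections.",
)
print(res.data)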
National Park Advisor
Summary: National Park Advisor is an application designed to assist you in researching and planning trips to National Parks.
Description: The application utilizes two distinct RAG pipelines:
- Park information pipeline: Retrieves and presents detailed information about specific National Parks. It uses a data store to search for park information efficiently. When the data store does not have sufficient data for a particular park, the application supplements it by searching the internet, ensuring comprehensive coverage.
- Trip planning pipeline: Assists in planning visits to National Parks.
Key features include:
- National Park research assistance
- Trip planning for National Parks
- Video content search and classification
- Transcript extraction from videos
- Internet search capability for supplemental park information
The application utilizes:
- Twelve Labs Python SDK: Video search, transcript extraction, and content classification.
- Pinecone: Stores data.
- Tavily: Internet search for additional content.
- Langgraph: Workflow management.
- Groq: Fast inference for the language model (LLM).
The application was developed by Sarah Kainec and Lexie Marinelli.
GitHub: ntl-park-advisor
Demo: National Park Advisor
Integration with Twelve Labs
The get_videos_from_twelve function performs search requests and retrieves transcriptions for up to four videos that match the provided query. It returns a dictionary that includes all key-value pairs from the state parameter, to which it adds the video URLs and transcription data:
import os

from dotenv import load_dotenv
from twelvelabs import TwelveLabs

# TL_INDEX_ID is defined elsewhere in the application.
def get_videos_from_twelve(state: dict) -> dict:
    print("getting videos")
    load_dotenv()
    client = TwelveLabs(api_key=os.getenv("TL_API_KEY"))
    search_results = client.search.query(
        index_id=TL_INDEX_ID,
        query_text=state["query"],
        options=["visual"]
    )
    # Keep only the first matching clip per video
    filtered_search = []
    unique_vids_id = []
    for clips in search_results.data.root:
        if clips.video_id not in unique_vids_id:
            unique_vids_id.append(clips.video_id)
            filtered_search.append(clips)
    transcript_data = []
    urls = []
    for id in unique_vids_id[0:4]:
        script = client.index.video.transcription(
            index_id=TL_INDEX_ID,
            id=id
        )
        url = client.index.video.retrieve(index_id=TL_INDEX_ID, id=id)
        urls.append(url)
        # Concatenate the transcription segments into a single string
        whole_script = ""
        for vid_value in script.root:
            whole_script = whole_script + " " + vid_value.value
        transcript_data.append({"video_id": id, "video_url": url.hls.video_url, "transcript": whole_script})
    return {
        **state,
        "video_urls": [url.hls.video_url for url in urls],
        "transcript_data": transcript_data
    }
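A brief usage sketch of the function above; the query value is hypothetical:
# Hypothetical state passed through the LangGraph workflow.
state = {"query": "waterfall hikes in Rocky Mountain National Park"}
state = get_videos_from_twelve(state)
print(state["video_urls"])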
The get_classified_videos_from_twelve function classifies videos based on predefined classes, filters the results for the specified destination, and returns an updated state dictionary with URLs for up to four matching videos:
classification = [
    {
        "name": "Rocky Mountain National Park",
        "prompts": [
            "Things to do in Rocky Mountain National Park",
            "Learn about Rocky Mountain National Park",
            "Prepare for a trip to Rocky Mountain National Park",
            "RMNP",
            "Colorado"
        ]
    },
    {
        "name": "Glacier National Park",
        "prompts": [
            "Things to do in Glacier National Park",
            "Wildlife at Glacier National Park",
            "Ecology of Glacier National Park",
            "Glacier Science",
            "Montana"
        ]
    }
]

def get_classified_videos_from_twelve(state: dict) -> dict:
    load_dotenv()
    client = TwelveLabs(api_key=os.getenv("TL_API_KEY"))
    classified_result = client.classify.index(
        index_id=TL_INDEX_ID,
        options=["visual"],
        classes=classification,
    )
    # Keep only videos whose top class matches the requested destination
    filtered_search = []
    unique_vids_id = []
    for clips in classified_result.data.root:
        if clips.video_id not in unique_vids_id and clips.classes.root[0].name == state["destination"]:
            unique_vids_id.append(clips.video_id)
            filtered_search.append(clips)
    urls = []
    for id in unique_vids_id[0:4]:
        url = client.index.video.retrieve(index_id=TL_INDEX_ID, id=id)
        urls.append(url)
    return {
        **state,
        "video_urls": [url.hls.video_url for url in urls],
    }
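And a matching usage sketch; the destination must be one of the class names defined above:
# Hypothetical state: filter classified videos to a single destination.
state = {"destination": "Glacier National Park"}
state = get_classified_videos_from_twelve(state)
print(state["video_urls"])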