Retrieve video information

This method retrieves information about the specified video.

GET https://api.twelvelabs.io/v1.3/indexes/:index-id/videos/:video-id
curl -G https://api.twelvelabs.io/v1.3/indexes/6298d673f1090f1100476d4c/videos/6298d673f1090f1100476d4c \
  -H "x-api-key: <apiKey>" \
  -d transcription=true
200 Retrieved
{
  "_id": "61e17be5777e6caec646fa07",
  "created_at": "2022-01-14T13:34:29Z",
  "updated_at": "2022-01-14T13:34:29Z",
  "indexed_at": "2022-01-14T14:05:55Z",
  "system_metadata": {
    "duration": 3747.841667,
    "filename": "IOKgzkakhlk.mp4",
    "fps": 29.97002997002997,
    "height": 360,
    "width": 482
  },
  "user_metadata": {
    "category": "recentlyAdded",
    "batchNumber": 5,
    "rating": 9.3,
    "needsReview": true
  },
  "hls": {
    "video_url": "https://d2cp8xx7n5vxnu.cloudfront.net/6298aa0b535db125bf6e1d10/64902a28fb01304dd47be3cb/stream/c924f34a-144e-41df-bf2a-c693703fa134.m3u8",
    "thumbnail_urls": [
      "https://d2cp8xx7n5vxnu.cloudfront.net/6298aa0b535db125bf6e1d10/64902a28fb01304dd47be3cb/thumbnails/c924f34a-144e-41df-bf2a-c693703fa134.0000001.jpg"
    ],
    "status": "COMPLETE",
    "updated_at": "2024-01-16T07:59:40.879Z"
  },
  "embedding": {
    "model_name": "Marengo-retrieval-2.7",
    "video_embedding": {
      "segments": [
        {
          "embedding_option": "visual-text",
          "embedding_scope": "clip",
          "end_offset_sec": 7.5666666,
          "float": [
            -0.04747168,
            0.030509098,
            0.032282468
          ],
          "start_offset_sec": 0
        }
      ]
    }
  },
  "transcription": [
    {
      "start": 0,
      "end": 10.5,
      "value": "Hello, how are you?"
    },
    {
      "start": 10.5,
      "end": 15.2,
      "value": "I'm fine, thank you."
    }
  ]
}

Authentication

x-api-key (string)
API Key authentication via header

Path parameters

index-id (string, required)
The unique identifier of the index to which the video has been uploaded.
video-id (string, required)
The unique identifier of the video to retrieve.
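
To show how the authentication header and path parameters fit together, here is a minimal Python sketch using the requests library. The index ID, video ID, and the TWELVE_LABS_API_KEY environment variable are placeholders rather than values defined by the API.

import os

import requests

API_KEY = os.environ["TWELVE_LABS_API_KEY"]  # placeholder: load your key however you prefer
INDEX_ID = "6298d673f1090f1100476d4c"  # placeholder index-id
VIDEO_ID = "6298d673f1090f1100476d4c"  # placeholder video-id

# GET /v1.3/indexes/:index-id/videos/:video-id with the x-api-key header
url = f"https://api.twelvelabs.io/v1.3/indexes/{INDEX_ID}/videos/{VIDEO_ID}"
response = requests.get(url, headers={"x-api-key": API_KEY})
response.raise_for_status()

video = response.json()
print(video["_id"], video["system_metadata"]["filename"])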

Query parameters

embedding_option (list of enums, optional)
Specifies which types of embeddings to retrieve. You can include one or more of the following values:

  • `visual-text`: Returns visual embeddings optimized for text search.
  • `audio`: Returns audio embeddings.

To retrieve embeddings for a video, it must be indexed using the Marengo video understanding model version 2.7 or later. For details on enabling this model for an index, see the [Create an index](/reference/create-index) page. The platform does not return embeddings if you don't provide this parameter.

The values you specify in `embedding_option` must be included in the `model_options` defined when the index was created. For example, if `model_options` is set to `visual` only, then you cannot set `embedding_option` to `audio` or both `visual-text` and `audio`. A request sketch combining this parameter with `transcription` appears after this parameter list.

Allowed values: `visual-text`, `audio`
transcription (boolean, optional)
The parameter indicates whether to retrieve a transcription of the spoken words for the indexed video. Note that the official SDKs will support this feature in a future release.
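
As a rough sketch of how these query parameters can be combined in a single request: the requests library serializes a list value as repeated embedding_option keys, and the example assumes the index was created with model_options that cover both visual and audio. The IDs and the environment variable are placeholders.

import os

import requests

url = (
    "https://api.twelvelabs.io/v1.3/indexes/"
    "6298d673f1090f1100476d4c/videos/6298d673f1090f1100476d4c"  # placeholder IDs
)

# Serialized as ?embedding_option=visual-text&embedding_option=audio&transcription=true
params = {
    "embedding_option": ["visual-text", "audio"],
    "transcription": "true",
}

response = requests.get(
    url,
    headers={"x-api-key": os.environ["TWELVE_LABS_API_KEY"]},  # placeholder env var
    params=params,
)
response.raise_for_status()
video = response.json()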

Response

The specified video information has successfully been retrieved.
_id (string or null)
The unique identifier of the video.
created_at (string or null)
A string indicating the date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), that the video indexing task was created.
updated_at (string or null)
A string indicating the date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), that the corresponding video indexing task was last updated. The platform updates this field every time the corresponding video indexing task transitions to a different state.
indexed_at (string or null)
A string indicating the date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), that the video indexing task was completed.
system_metadata (object or null)
System-generated metadata about the video.
user_metadata (map from strings to any, or null)
User-generated metadata about the video.
hls (object or null)
The platform returns this object only for the videos that you uploaded with the `enable_video_stream` parameter set to `true`.
embedding (object or null)
Contains the embedding and the associated information. The platform returns this field when the `embedding_option` parameter is specified in the request.
transcription (list of objects or null)
An array of objects that contains the transcription. For each time range in which the platform finds spoken words, it returns an object with the fields shown in the example response (`start`, `end`, and `value`). If the platform doesn't find any spoken words, this field is set to `null`. Note that the official SDKs will support this feature in a future release.
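
Because most of these fields can be null, a client should check for missing values before reading them. Below is a minimal parsing sketch, assuming `video` holds the parsed JSON body of a successful response, as in the sketches above.

# `video` is the parsed JSON body of a 200 response.
for segment in video.get("transcription") or []:
    print(f"{segment['start']:7.1f}s - {segment['end']:7.1f}s  {segment['value']}")

embedding = video.get("embedding")
if embedding is not None:
    for clip in embedding["video_embedding"]["segments"]:
        print(
            clip["embedding_option"],
            clip["embedding_scope"],
            clip["start_offset_sec"],
            clip["end_offset_sec"],
            len(clip["float"]),  # number of dimensions in the embedding vector
        )

hls = video.get("hls")
if hls is not None and hls.get("status") == "COMPLETE":
    print("Streaming URL:", hls["video_url"])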

Errors
