Release notes

The sections below list all new features, enhancements, and changes to the platform, in reverse chronological order.

Version 1.1.2

If you have used the 1.1.1 version of the API, please refer to the following section for important information regarding the changes.

Improvements

To further improve the usability of the /classify endpoint, the following changes have been made:

  • The endpoint now allows you to classify a set of videos. The video_id parameter has been deprecated; instead, you must pass an array of strings named video_ids, where each element is the unique identifier of a video you want to classify. For details, see the Classify a set of videos page.

  • The threshold field in the request is now an object, and you can use it to filter based on the following criteria:

    • The confidence level that a video matches the specified class.
    • The confidence level that a clip matches the specified class.
    • The duration ratio, which is the sum of the lengths of the matching video segments inside a video divided by the total length of the video.

    For details, see the Filtering > Content classification page.

  • The endpoint now supports pagination.

  • The duration-weighted score has been deprecated. When the show_detailed_score parameter is set to true, the platform now returns the maximum, average, and normalized scores.
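Taken together, a 1.1.2 /classify request body might be assembled as sketched below. The video_ids, threshold, and show_detailed_score names come from the notes above; the classes parameter comes from version 1.1.1, and the keys inside threshold (video, clip, duration_ratio) are hypothetical placeholders, not confirmed schema.

```python
# Sketch of a 1.1.2 /classify request body. The keys inside threshold
# (video, clip, duration_ratio) are illustrative assumptions.
def build_classify_body(video_ids, classes, min_video_confidence=None,
                        min_clip_confidence=None, min_duration_ratio=None,
                        show_detailed_score=False):
    """Assemble the JSON body for a hypothetical /classify call."""
    threshold = {}
    if min_video_confidence is not None:
        threshold["video"] = min_video_confidence         # per-video confidence
    if min_clip_confidence is not None:
        threshold["clip"] = min_clip_confidence           # per-clip confidence
    if min_duration_ratio is not None:
        threshold["duration_ratio"] = min_duration_ratio  # matched length / total length
    body = {
        "video_ids": list(video_ids),  # 1.1.2: array of IDs replaces video_id
        "classes": classes,
        "show_detailed_score": show_detailed_score,
    }
    if threshold:
        body["threshold"] = threshold  # 1.1.2: threshold is now an object
    return body
```

A caller would then send this body to the /classify endpoint, for example with build_classify_body(["video-id-1", "video-id-2"], ["cooking"], min_video_confidence=80).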

Version 1.1.1

If you have used the 1.1 version of the API, please refer to the following sections for important information regarding the changes.

Improvements

To further improve flexibility, usability, and clarity, the following changes have been made:

  • Combined queries:
    • You can now define global values for the search_options and conversation_option parameters for the entire request instead of on a per-query basis. For details, see the Use combined queries page.
    • The /beta/search endpoint has been renamed to /combined-search.
  • Logo detection: The logo add-on has been deprecated. To enable logo detection for an index, you must now use the logo indexing option.
  • Conversation option: The transcription conversation option has been renamed to exact_match.
  • Classifying videos:
    • The labels parameter has been renamed to classes.
    • The threshold field you can use to narrow down a response obtained from the platform is now of type int. For details, see the API Reference > Classify a video page.
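The two classification changes above can be sketched as a small migration helper for a 1.1-style request body. The helper name is hypothetical, and how thresholds were encoded before 1.1.1 is an assumption; only the labels-to-classes rename and the int threshold come from the notes above.

```python
# Hypothetical helper migrating a 1.1-style classify request body to 1.1.1:
# the labels parameter becomes classes, and threshold is of type int.
def migrate_classify_body_to_1_1_1(body):
    migrated = dict(body)                 # leave the caller's body untouched
    if "labels" in migrated:
        migrated["classes"] = migrated.pop("labels")        # 1.1.1 rename
    if "threshold" in migrated:
        migrated["threshold"] = int(migrated["threshold"])  # now type int
    return migrated
```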

Version 1.1

The introduction of new features and improvements in the 1.1 version of the Twelve Labs Video Understanding Platform has required changes to some endpoints. If you have used the 1.0 version of the API, please refer to the following sections for important information regarding the changes.

New features

Version 1.1 of the Twelve Labs Video Understanding Platform introduces the following new features:

  • Classification of content: You can now define a list of labels to classify your videos into. The new classification API endpoint returns the duration for which the specified labels appear in your videos and the confidence that each match represents the specified label. For more details, see the Classify page.
  • Combined queries: The 1.1 version of the API introduces a new format of search queries named combined queries. A combined query includes any number of subqueries linked with logical operators and is executed in a single API request.
    Combined queries support the following additional features:
    • Negating a condition: In addition to the existing AND operator, the platform now allows you to use the NOT operator to negate a condition. For example, this allows you to write a query that retrieves all the video segments in which someone is cooking but neither spaghetti nor lasagna is mentioned in the conversation.
    • The THEN operator: The platform now supports the THEN operator, which instructs the platform to return only the results in which the matching video segments appear in the same order as your queries.
    • Time-proximity search: The Twelve Labs Video Understanding API now allows you to use the proximity parameter to extend the lower and upper boundaries of each subquery. For example, this allows you to write a query that finds all car accidents that happened within a specific interval of time before someone wins a race.
      For details, see the Use combined queries page.
  • Logo detection: The platform can now detect brand logos.
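Assuming a JSON request body, the combined-query features above might fit together as in the sketch below. Only the AND, NOT, and THEN operators and the proximity parameter come from the notes; the exact nesting, the remaining field names, and the time unit are illustrative assumptions, not the documented schema.

```python
# Illustrative sketch only: this JSON nesting is an assumption about how
# subqueries, logical operators, and proximity might combine in one request.
combined_query = {
    "index_id": "<YOUR_INDEX_ID>",  # placeholder
    "query": {
        "THEN": [                   # matches must occur in this order
            {"text": "car accident"},
            {"AND": [
                {"text": "someone wins a race"},
                {"NOT": {"text": "spaghetti is mentioned"}},  # negated condition
            ]},
        ],
    },
    "proximity": 300,  # widens each subquery's time boundaries (unit assumed: seconds)
}
```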

Updates

This section lists the differences between version 1 and version 1.1 of the Twelve Labs Video Understanding API.

  • When you make an API call, make sure that you specify the 1.1 version in the URL.
    The URL should look similar to the following one: https://api.twelvelabs.io/v1.1/{resource}/{path_parameters}?{query_parameters}. For more details, see the Call an endpoint section.

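As a quick check of the URL shape above, a small helper can assemble versioned endpoint URLs. The helper itself (endpoint_url) is illustrative; only the base URL and the v1.1 version segment come from the note above.

```python
# Minimal sketch of assembling a versioned request URL as described above.
from urllib.parse import urlencode

BASE_URL = "https://api.twelvelabs.io"
API_VERSION = "v1.1"  # the version must appear in every request URL

def endpoint_url(resource, path_parameters="", **query_parameters):
    """Build {base}/{version}/{resource}/{path_parameters}?{query_parameters}."""
    url = f"{BASE_URL}/{API_VERSION}/{resource}"
    if path_parameters:
        url += f"/{path_parameters}"
    if query_parameters:
        url += "?" + urlencode(query_parameters)
    return url
```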
  • The following methods now return a 200 OK status code when the response is empty:

    • [GET] /indexes
    • [GET] /tasks
    • [GET] /indexes/{index_id}/videos
  • The /tasks endpoint is now a separate endpoint and is no longer part of the /indexes endpoint. The table below shows the changes made to each method of the /tasks endpoint:

    1.0                                  1.1
    GET /indexes/tasks                   GET /tasks
    POST /indexes/tasks                  POST /tasks
    GET /indexes/tasks/{task_id}         GET /tasks/{task_id}
    DELETE /indexes/tasks/{task_id}      DELETE /tasks/{task_id}
    POST /indexes/tasks/transfers        POST /tasks/transfers
    GET /indexes/tasks/status            GET /tasks/status
  • The /indexes/tasks/{task_id}/video_id endpoint has been deprecated. You can now retrieve the unique identifier of a video by invoking the GET method of the /tasks/{task_id} endpoint. The response will contain a field named video_id. For details, see steps six and seven on the Upload from the local file system page.
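The new flow above can be sketched as a small accessor over the GET /tasks/{task_id} response body. Only the video_id field name comes from the note; the helper name and the error behavior when indexing has not finished are assumptions.

```python
# Sketch: read video_id from a GET /tasks/{task_id} response body, which
# replaces the deprecated /indexes/tasks/{task_id}/video_id endpoint.
def video_id_from_task(task_response):
    """Return the video_id field of a task response, once it is present."""
    if "video_id" not in task_response:
        # Assumption: the field may be absent while indexing is still running.
        raise KeyError("task has no video_id yet")
    return task_response["video_id"]
```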

  • When an error occurs, the platform now follows the recommendations of the RFC 9110 standard. Instead of numeric codes, the platform now returns string values containing human-readable descriptions of the errors. The format of the error messages is as follows:

    • code: A string representing the error code.
    • message: A human-readable string describing the error, intended to be suitable for display in a user interface.
    • (Optional) docs_url: The URL of the relevant documentation page.
      For example, if you tried to list all the videos in an index and the unique identifier of the index you specified didn't exist, the 1.0 version of the API returned an error similar to the following one:
    {
      "error_code": 201,
      "message": "ID 234234 does not exist"
    }
    

    Now, when using the 1.1 version of the API, errors look similar to the following one:

    {
      "code": "parameter_not_provided",
      "message": "The index_id parameter is required but was not provided."
    }
    

    For a list of error messages, see the API Reference > Error codes page.
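Client code that handles both formats can normalize them as sketched below. The code, message, and docs_url fields (1.1) and error_code (1.0) come from the examples above; the helper itself is illustrative.

```python
# Sketch of handling both error formats: 1.0 returned a numeric error_code,
# while 1.1 returns string code and message fields plus an optional docs_url.
def describe_error(body):
    """Build a log-friendly string from a 1.0- or 1.1-style error body."""
    code = body.get("code") or body.get("error_code")  # 1.1 vs 1.0 field name
    parts = [f"[{code}] {body.get('message', '')}"]
    if body.get("docs_url"):                           # optional in 1.1
        parts.append(f"see {body['docs_url']}")
    return " - ".join(parts)
```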

  • The next_page_id and prev_page_id fields of the page_info object have been renamed to next_page_token and prev_page_token.

  • The type field has been removed from all the responses.

  • When you perform a search specifying multiple search options, the platform returns an object containing, for each type of search, the confidence level that a specific video segment matched your search terms. In version 1.0, this field was a dictionary named module_confidence. In version 1.1, the field is named module and is of type array. For details, see the Using multiple sources of information section.
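Reading the new module array can be sketched as below; the per-element keys (type, confidence) are assumptions about the 1.1 shape, and the helper name is hypothetical.

```python
# Illustrative only: each search result now carries a module array instead of
# the 1.0 module_confidence dictionary; this flattens it back into a mapping.
def module_confidences(result):
    """Map each search type to its confidence for one search result."""
    return {m["type"]: m["confidence"] for m in result.get("module", [])}
```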

  • The POST method of the /search/{page-token} endpoint has been deprecated. To retrieve the subsequent pages, you must now call the GET method of the /search/{page-token} endpoint, passing it the unique identifier of the page you want to retrieve. For details, see the Pagination > Search results page.
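The 1.1 pagination flow above (follow next_page_token via GET /search/{page-token}) can be sketched as a loop. The fetch_page argument stands in for the HTTP GET call and is injected so the loop stays self-contained; the helper name and the exact response layout beyond page_info.next_page_token are assumptions.

```python
# Sketch of the 1.1 pagination flow: read next_page_token from page_info
# and follow GET /search/{page-token} until the token is absent.
def iterate_search_pages(first_page, fetch_page):
    """Yield each page of search results, following next_page_token."""
    page = first_page
    while True:
        yield page
        token = page.get("page_info", {}).get("next_page_token")
        if not token:
            break
        page = fetch_page(token)  # e.g. GET /v1.1/search/{page-token}
```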