Streaming responses
When generating open-ended text, the platform supports streaming responses, which enable real-time processing of partial results. This gives users immediate feedback and significantly reduces perceived latency.
To enable streaming responses, set the `stream` parameter to `true` in the request.
The response consists of a stream of JSON objects, each on its own line, following the NDJSON format. Each object represents an event in the generation process, with three event types:
- `stream_start`: Indicates the beginning of the stream. When you receive this event, initialize your processing logic.
  Example: `{ "event_type": "stream_start", "metadata": { "generation_id": "2f6d0bdd-aed8-47b1-8124-3c9d8006cdc9" } }`
- `text_generation`: Contains a fragment of generated text. As `text_generation` events arrive, handle the text fragments based on your application's needs. This might involve displaying the text in real time, analyzing it, or storing it for later use. Note that these fragments vary in length and are not guaranteed to align with word or sentence boundaries.
  Example: `{ "event_type": "text_generation", "text": "Dive into the delightful world" }`
- `stream_end`: Indicates the end of the stream. When you receive this event, finalize your processing logic.
  Example: `{ "event_type": "stream_end", "metadata": { "generation_id": "2f6d0bdd-aed8-47b1-8124-3c9d8006cdc9" } }`
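Because the stream is NDJSON, each line can be parsed independently and dispatched by event type. The sketch below illustrates this with plain Python rather than the SDK (which handles parsing for you); the sample lines are hard-coded for illustration, not received over a real connection:

```python
import json

# Sample NDJSON lines as they might arrive over the wire (hard-coded here).
lines = [
    '{ "event_type": "stream_start", "metadata": { "generation_id": "2f6d0bdd-aed8-47b1-8124-3c9d8006cdc9" } }',
    '{ "event_type": "text_generation", "text": "Dive into " }',
    '{ "event_type": "text_generation", "text": "the delightful world" }',
    '{ "event_type": "stream_end", "metadata": { "generation_id": "2f6d0bdd-aed8-47b1-8124-3c9d8006cdc9" } }',
]

fragments = []
for line in lines:
    event = json.loads(line)
    if event["event_type"] == "stream_start":
        fragments = []  # initialize processing state
    elif event["event_type"] == "text_generation":
        fragments.append(event["text"])  # handle each fragment as it arrives
    elif event["event_type"] == "stream_end":
        print("".join(fragments))  # finalize: emit the accumulated text
```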
For a description of each field in the request and response, see the API Reference > Open-ended texts page.
Prerequisites
The example in this guide assumes the following:
- You’re familiar with the concepts that are described on the Platform overview page.
- You’ve already created an index and the Pegasus video understanding engine is enabled for this index.
- You've uploaded a video, and the platform has finished indexing it.
Example
To use streaming responses in your application:
- Start a stream by invoking the `textStream` method of the `generate` object with the following parameters:
  - `video_id`: A string representing the unique identifier of your video.
  - `prompt`: A string that guides the model on the desired format or content.
- Use a loop to iterate over the stream.
- Inside the loop, handle each text fragment as it arrives. This example prints each fragment to the standard output.
- (Optional) After the stream ends, use the `textStream.aggregatedText` field if you need the full generated text.
The example code below demonstrates using the SDKs to generate and process a streaming response. It starts a stream for a specified video and prompt, prints each text fragment as it arrives, and then prints the complete aggregated text. Ensure you replace the placeholders surrounded by <> with your values.
from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="<YOUR_API_KEY>")

text_stream = client.generate.text_stream(
    video_id="<YOUR_VIDEO_ID>",
    prompt="<YOUR_PROMPT>"
)

for text in text_stream:
    print(text)

print(f"Aggregated text: {text_stream.aggregated_text}")
import { TwelveLabs } from 'twelvelabs-js';

const client = new TwelveLabs({ apiKey: '<YOUR_API_KEY>' });

const textStream = await client.generate.textStream({
  videoId: '<YOUR_VIDEO_ID>',
  prompt: '<YOUR_PROMPT>',
});

for await (const text of textStream) {
  console.log(text);
}

console.log(`Aggregated text: ${textStream.aggregatedText}`);
The output should look similar to the following:
This
video charmingly captures the
whims
ical and playful nature of
cats engaging
in a variety of activities
,
from frolicking and
exploring
to moments of relaxation and
quirky
interactions with their environment.
It highlights their
curious behaviors and the
joy they bring to everyday
scenes.
Aggregated text: This video charmingly captures the whimsical and playful nature of cats engaging in a variety of activities, from frolicking and exploring to moments of relaxation and quirky interactions with their environment. It highlights their curious behaviors and the joy they bring to everyday scenes.
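Note how the fragments in the output above split mid-word ("whims" / "ical"): concatenating them without inserting separators reconstructs the full text, which is what the SDK's aggregated-text field does for you. A quick illustration, with a few fragments hard-coded from the sample output (the exact placement of spaces within each fragment is an assumption):

```python
# Fragments as received; any spacing is part of the fragment itself.
fragments = [
    "This",
    " video charmingly captures the",
    " whims",
    "ical",  # continues the previous fragment mid-word
    " and playful nature of",
    " cats",
]

# Join with no separator to reconstruct the generated text.
text = "".join(fragments)
print(text)
```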