Analyze videos

The TwelveLabs Python SDK provides methods to analyze videos and generate text from their content.

Related quickstart notebook

Titles, topics, and hashtags

This method has been flattened and is now called client.gist instead of client.generate.gist. The client.generate.gist method will remain available until July 30, 2025; after this date, it will be deprecated. Update your code to use client.gist to ensure uninterrupted service.

Description: This method analyzes a specific video and generates titles, topics, and hashtags based on its content. It uses predefined formats and doesn’t require a custom prompt, and it’s best for generating immediate and straightforward text representations without specific customization.

Function signature:

```python
def gist(
    self,
    *,
    video_id: str,
    types: typing.Sequence[GistRequestTypesItem],
    request_options: typing.Optional[RequestOptions] = None,
) -> Gist
```

Parameters:

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `video_id` | `str` | Yes | The unique identifier of the video that you want to generate a gist for. |
| `types` | `typing.Sequence[GistRequestTypesItem]` | Yes | Specifies the type of gist. Use one of the following values: `title`, `topic`, `hashtag`. |
| `request_options` | `typing.Optional[RequestOptions]` | No | Request-specific configuration. |

Return value: Returns a Gist object.

The Gist class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `id` | `Optional[str]` | Unique identifier of the response. |
| `title` | `Optional[str]` | Suggested title for the video. |
| `topics` | `Optional[List[str]]` | An array of topics that are relevant to the video. |
| `hashtags` | `Optional[List[str]]` | An array of hashtags that are relevant to the video. |
| `usage` | `Optional[TokenUsage]` | The number of tokens used in the generation. |

The TokenUsage class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `output_tokens` | `Optional[int]` | The number of tokens in the generated text. |

API Reference: Generate titles, topics, and hashtags.

Related guide: Titles, topics, and hashtags.
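As an illustrative sketch of calling the flattened method, the small wrapper below collects the `Gist` fields into a plain dictionary. The wrapper function, API key, and video ID are placeholders of ours, not part of the SDK:

```python
from typing import Any, Sequence


def fetch_gist(client: Any, video_id: str, types: Sequence[str]) -> dict:
    """Call the flattened `gist` method and gather its fields into a dict.

    `client` is expected to be a `twelvelabs.TwelveLabs` instance (or any
    object exposing the same `gist` method); this helper is illustrative.
    """
    gist = client.gist(video_id=video_id, types=list(types))
    return {
        "title": gist.title,
        "topics": gist.topics,
        "hashtags": gist.hashtags,
    }


# Typical usage (placeholder API key and video ID):
# from twelvelabs import TwelveLabs
# client = TwelveLabs(api_key="<YOUR_API_KEY>")
# print(fetch_gist(client, "<VIDEO_ID>", ["title", "topic", "hashtag"]))
```

Because the wrapper only depends on the `gist` method and the `Gist` properties listed above, it works unchanged whether you request one type or all three.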

Summaries, chapters, and highlights

This method has been flattened and is now called client.summarize instead of client.generate.summarize. The client.generate.summarize method will remain available until July 30, 2025; after this date, it will be deprecated. Update your code to use client.summarize to ensure uninterrupted service.

Description: This method analyzes a video and generates summaries, chapters, or highlights based on its content. Optionally, you can provide a prompt to customize the output.

Function signature:

```python
def summarize(
    self,
    *,
    video_id: str,
    type: str,
    prompt: typing.Optional[str] = OMIT,
    temperature: typing.Optional[float] = OMIT,
    request_options: typing.Optional[RequestOptions] = None,
) -> SummarizeResponse
```

Parameters:

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `video_id` | `str` | Yes | The unique identifier of the video that you want to summarize. |
| `type` | `str` | Yes | Specifies the type of text. Use one of the following values: `summary`, `chapter`, or `highlight`. |
| `prompt` | `Optional[str]` | No | Use this field to provide context for the summarization task, such as the target audience, style, tone of voice, and purpose. Your prompts can be instructive or descriptive, or you can phrase them as questions. The maximum length of a prompt is 2,000 tokens. |
| `temperature` | `Optional[float]` | No | Controls the randomness of the text output generated by the model. A higher value generates more creative text, while a lower value produces more deterministic text output. Default: 0.2, Min: 0, Max: 1. |
| `request_options` | `typing.Optional[RequestOptions]` | No | Request-specific configuration. |

Return value: Returns a SummarizeResponse object containing the generated content. The response type varies based on the type parameter.

When type is "summary" - Returns a SummarizeResponse_Summary object with the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `summarize_type` | `Literal["summary"]` | Indicates this is a summary response. |
| `id` | `Optional[str]` | Unique identifier of the response. |
| `summary` | `Optional[str]` | The generated summary text. |
| `usage` | `Optional[TokenUsage]` | The number of tokens used in the generation. |

When type is "chapter" - Returns a SummarizeResponse_Chapter object with the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `summarize_type` | `Literal["chapter"]` | Indicates this is a chapter response. |
| `id` | `Optional[str]` | Unique identifier of the response. |
| `chapters` | `Optional[List[SummarizeChapterResultChaptersItem]]` | An array of chapter objects. |
| `usage` | `Optional[TokenUsage]` | The number of tokens used in the generation. |

When type is "highlight" - Returns a SummarizeResponse_Highlight object with the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `summarize_type` | `Literal["highlight"]` | Indicates this is a highlight response. |
| `id` | `Optional[str]` | Unique identifier of the response. |
| `highlights` | `Optional[List[SummarizeHighlightResultHighlightsItem]]` | An array of highlight objects. |
| `usage` | `Optional[TokenUsage]` | The number of tokens used in the generation. |

The SummarizeChapterResultChaptersItem class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `chapter_number` | `Optional[int]` | The sequence number of the chapter. Note that this field starts at 0. |
| `start_sec` | `Optional[float]` | The starting time of the chapter, measured in seconds from the beginning of the video. |
| `end_sec` | `Optional[float]` | The ending time of the chapter, measured in seconds from the beginning of the video. |
| `chapter_title` | `Optional[str]` | The title of the chapter. |
| `chapter_summary` | `Optional[str]` | A brief summary describing the content of the chapter. |

The SummarizeHighlightResultHighlightsItem class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `start_sec` | `Optional[float]` | The starting time of the highlight, measured in seconds from the beginning of the video. |
| `end_sec` | `Optional[float]` | The ending time of the highlight, measured in seconds from the beginning of the video. |
| `highlight` | `Optional[str]` | The title of the highlight. |
| `highlight_summary` | `Optional[str]` | A brief description that captures the essence of this part of the video. |

The TokenUsage class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `output_tokens` | `Optional[int]` | The number of tokens in the generated text. |

API Reference: Summaries, chapters, and highlights.

Related guide: Summaries, chapters, and highlights.
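The call pattern above can be sketched with a small helper that requests a summary and returns its text. The helper, the optional prompt, and the placeholder values are illustrative assumptions, not part of the SDK:

```python
from typing import Any, Optional


def summarize_video(client: Any, video_id: str, prompt: Optional[str] = None) -> str:
    """Request a summary of a video and return the generated text.

    `client` is expected to be a `twelvelabs.TwelveLabs` instance; this
    wrapper is a sketch, not an SDK method. For type="summary", the text
    lives in the `summary` property of the response, per the table above.
    """
    kwargs = {"video_id": video_id, "type": "summary"}
    if prompt is not None:
        # The prompt is optional; omit it entirely when not provided.
        kwargs["prompt"] = prompt
    response = client.summarize(**kwargs)
    return response.summary or ""


# Typical usage (placeholder API key and video ID):
# from twelvelabs import TwelveLabs
# client = TwelveLabs(api_key="<YOUR_API_KEY>")
# print(summarize_video(client, "<VIDEO_ID>", prompt="Summarize for a general audience."))
```

For `type="chapter"` or `type="highlight"`, you would instead read the `chapters` or `highlights` list from the response, as described in the tables above.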

Open-ended analysis

Description: This method analyzes a video and generates text based on its content.

Function signature:

```python
def analyze(
    self,
    *,
    video_id: str,
    prompt: str,
    temperature: typing.Optional[float] = OMIT,
    request_options: typing.Optional[RequestOptions] = None,
) -> NonStreamAnalyzeResponse
```

Parameters:

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `video_id` | `str` | Yes | The unique identifier of the video for which you wish to generate text. |
| `prompt` | `str` | Yes | A prompt that guides the model on the desired format or content. Your prompts can be instructive or descriptive, or you can phrase them as questions. The maximum length of a prompt is 2,000 tokens. |
| `temperature` | `typing.Optional[float]` | No | Controls the randomness of the text output generated by the model. A higher value generates more creative text, while a lower value produces more deterministic text output. Default: 0.2, Min: 0, Max: 1. |
| `request_options` | `typing.Optional[RequestOptions]` | No | Request-specific configuration. |

Return value: Returns a NonStreamAnalyzeResponse object containing the generated text.

The NonStreamAnalyzeResponse class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `id` | `Optional[str]` | Unique identifier of the response. |
| `data` | `Optional[str]` | The generated text based on the prompt you provided. |
| `usage` | `Optional[TokenUsage]` | The number of tokens used in the generation. |

The TokenUsage class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `output_tokens` | `Optional[int]` | The number of tokens in the generated text. |

API Reference: Open-ended analysis.

Related guide: Open-ended analysis.
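A minimal sketch of an open-ended analysis call, reading the generated text from the `data` property described above. The wrapper function and placeholder values are ours, not part of the SDK:

```python
from typing import Any


def analyze_video(client: Any, video_id: str, prompt: str,
                  temperature: float = 0.2) -> str:
    """Run an open-ended analysis and return the generated text.

    `client` is expected to be a `twelvelabs.TwelveLabs` instance; this
    helper is illustrative. The generated text is in the `data` property
    of the `NonStreamAnalyzeResponse`, per the table above.
    """
    response = client.analyze(
        video_id=video_id,
        prompt=prompt,
        temperature=temperature,  # 0.2 is the documented default
    )
    return response.data or ""


# Typical usage (placeholder API key and video ID):
# from twelvelabs import TwelveLabs
# client = TwelveLabs(api_key="<YOUR_API_KEY>")
# print(analyze_video(client, "<VIDEO_ID>", "List the key events in this video."))
```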

Open-ended analysis with streaming responses

Description: This method analyzes a video and generates open-ended text based on its content, streaming the results as they are produced.

Function signature:

```python
def analyze_stream(
    self,
    *,
    video_id: str,
    prompt: str,
    temperature: typing.Optional[float] = OMIT,
    stream: typing.Optional[bool] = OMIT,
    request_options: typing.Optional[RequestOptions] = None,
) -> typing.Iterator[AnalyzeStreamResponse]
```

Parameters:

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `video_id` | `str` | Yes | The unique identifier of the video for which you wish to generate text. |
| `prompt` | `str` | Yes | A prompt that guides the model on the desired format or content. Your prompts can be instructive or descriptive, or you can phrase them as questions. The maximum length of a prompt is 2,000 tokens. |
| `temperature` | `Optional[float]` | No | Controls the randomness of the text output generated by the model. A higher value generates more creative text, while a lower value produces more deterministic text output. Default: 0.2, Min: 0, Max: 1. |
| `stream` | `Optional[bool]` | No | Set this parameter to `true` to enable streaming responses in the NDJSON format. Default: `true`. |
| `request_options` | `Optional[RequestOptions]` | No | Request-specific configuration. |

Return value: Returns an iterator of AnalyzeStreamResponse objects. Each response can be a StreamStartResponse, StreamTextResponse, or StreamEndResponse.

The StreamStartResponse class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `event_type` | `Optional[str]` | This field is always set to `stream_start` for this event. |
| `metadata` | `Optional[StreamStartResponseMetadata]` | An object containing metadata about the stream. |

The StreamTextResponse class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `event_type` | `Optional[str]` | This field is always set to `text_generation` for this event. |
| `text` | `Optional[str]` | A fragment of the generated text. Note that text fragments may be split at arbitrary points, not necessarily at word or sentence boundaries. |

The StreamEndResponse class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `event_type` | `Optional[str]` | This field is always set to `stream_end` for this event. |
| `metadata` | `Optional[StreamEndResponseMetadata]` | An object containing metadata about the stream. |

The StreamStartResponseMetadata class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `generation_id` | `Optional[str]` | A unique identifier for the generation session. |

The StreamEndResponseMetadata class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `generation_id` | `Optional[str]` | The same unique identifier provided in the `stream_start` event. |
| `usage` | `Optional[TokenUsage]` | The number of tokens used in the generation. |

The TokenUsage class contains the following properties:

| Name | Type | Description |
| --- | --- | --- |
| `output_tokens` | `Optional[int]` | The number of tokens in the generated text. |

API Reference: Open-ended analysis.

Related guide: Open-ended analysis.
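Since text fragments may be split at arbitrary points, a common pattern is to concatenate the `text_generation` events into one string. The helper below is an illustrative sketch built only on the event shapes documented above:

```python
from typing import Any, Iterable


def collect_stream_text(events: Iterable[Any]) -> str:
    """Concatenate the text fragments from a stream of analyze events.

    Each event carries an `event_type`; only `text_generation` events
    carry a `text` fragment, per the tables above. `stream_start` and
    `stream_end` events are skipped here (their metadata is ignored).
    """
    parts = []
    for event in events:
        if getattr(event, "event_type", None) == "text_generation":
            parts.append(event.text or "")
    return "".join(parts)


# Typical usage (placeholder API key and video ID):
# from twelvelabs import TwelveLabs
# client = TwelveLabs(api_key="<YOUR_API_KEY>")
# text = collect_stream_text(
#     client.analyze_stream(video_id="<VIDEO_ID>", prompt="Describe the video.")
# )
```

In an interactive application you would typically print each fragment as it arrives instead of buffering; the `stream_end` event's `usage.output_tokens` can then be read for accounting.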

Error codes

This section lists the most common error messages you may encounter while analyzing videos.

- `token_limit_exceeded`: Your request could not be processed due to exceeding maximum token limit. Please try with another request or another video with shorter duration.
- `index_not_supported_for_generate`: You can only summarize videos uploaded to an index with an engine from the Pegasus family enabled.
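One way to surface these errors to users is to inspect the error message. This section does not name the SDK's exception classes, so the sketch below catches a generic `Exception` and matches the documented error codes; in real code you would catch the SDK's specific exception types instead:

```python
from typing import Any


def summarize_with_hints(client: Any, video_id: str) -> str:
    """Attempt a summary, translating the documented error codes into hints.

    Catching a bare Exception is a simplification for illustration; the
    actual exception hierarchy raised by the SDK is not shown here.
    """
    try:
        response = client.summarize(video_id=video_id, type="summary")
        return response.summary or ""
    except Exception as exc:
        message = str(exc)
        if "token_limit_exceeded" in message:
            return "Request exceeds the token limit; try a shorter video."
        if "index_not_supported_for_generate" in message:
            return "Re-upload the video to an index with a Pegasus engine enabled."
        raise
```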