Embeddings for indexed videos

The platform allows you to retrieve embeddings for videos you’ve already uploaded and indexed. These embeddings are generated using video scene detection, which segments a video into semantically meaningful parts by identifying the boundaries between scenes, where a scene is a series of frames depicting a continuous action or theme. Each segment is between 2 and 10 seconds long.

Prerequisites

Your video must be indexed with the Marengo video understanding model version 2.7 or later. For details on enabling this model for an index, see the Create an index page.
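
If Marengo isn’t enabled on your index yet, the snippet below is a minimal sketch of creating a new index with it using the Python SDK. It assumes the model specification can be passed as a dictionary and that marengo2.7 is the model identifier; refer to the Create an index page for the authoritative parameters.

from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Hypothetical example: the model name and options below are assumptions;
# confirm the exact values on the Create an index page.
index = client.indexes.create(
    index_name="my-index",
    models=[
        {
            "model_name": "marengo2.7",
            "model_options": ["visual", "audio"],
        }
    ],
)
# The response is assumed to expose the new index's ID.
print(f"Created index: {index.id}")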

Complete example

from twelvelabs import TwelveLabs
from typing import List
from twelvelabs.types import VideoSegment

client = TwelveLabs(api_key="<YOUR_API_KEY>")

# 1. Retrieve the embeddings
video = client.indexes.videos.retrieve(
    index_id="<YOUR_INDEX_ID>",
    video_id="<YOUR_VIDEO_ID>",
    embedding_option=["visual-text", "audio"],
)

# 2. Process the results
def print_segments(segments: List[VideoSegment], max_elements: int = 5):
    for segment in segments:
        print(
            f"  embedding_scope={segment.embedding_scope} "
            f"embedding_option={segment.embedding_option} "
            f"start_offset_sec={segment.start_offset_sec} "
            f"end_offset_sec={segment.end_offset_sec}"
        )
        first_few = segment.float_[:max_elements]
        print(
            f"  embeddings: [{', '.join(str(x) for x in first_few)}...] "
            f"(total: {len(segment.float_)} values)"
        )

print_segments(video.embedding.video_embedding.segments)

Step-by-step guide

1. Retrieve the embedding

Function call: You call the indexes.videos.retrieve function.
Parameters:

  • index_id: The unique identifier of the index containing your video.
  • video_id: The unique identifier of your video.
  • embedding_option: An array of strings that specifies the types of embeddings the platform must return. This example uses ["visual-text", "audio"]. See the Embedding options page for details.

Return value: The response includes, among other information, an object named embedding that holds the embedding data for your video.
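
Before processing the results, you may want to guard against missing data. The check below is a minimal sketch: the attribute path follows the complete example above, but the assumption that absent embeddings surface as None attributes is hypothetical.

# Assumption: absent embedding data surfaces as None attributes.
if video.embedding and video.embedding.video_embedding:
    segments = video.embedding.video_embedding.segments
    print(f"Retrieved {len(segments)} segments")
else:
    print("No embedding data is available for this video.")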

2. Process the results

This example iterates over the results and prints the key properties and a portion of the embedding vectors for each segment.
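
A common next step is organizing the segments for downstream use. The sketch below groups each segment’s vector, together with its time range, by embedding option (visual-text or audio); it reuses the video object from the complete example and is illustrative rather than part of the SDK.

from collections import defaultdict

# Group (start, end, vector) tuples by embedding option for downstream use.
vectors_by_option = defaultdict(list)
for segment in video.embedding.video_embedding.segments:
    vectors_by_option[segment.embedding_option].append(
        (segment.start_offset_sec, segment.end_offset_sec, segment.float_)
    )

for option, entries in vectors_by_option.items():
    print(f"{option}: {len(entries)} segments")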
