Retrieve an indexed asset

This method retrieves information about an indexed asset, including its status, metadata, and optional embeddings or transcription.

**Common use cases**:

- Monitor indexing progress:
  - Call this endpoint after creating an indexed asset
  - Check the `status` field until it shows `ready`
  - Once ready, your content is available for search and analysis
- Retrieve asset metadata:
  - Retrieve system metadata (duration, resolution, filename)
  - Access user-defined metadata
- Retrieve embeddings:
  - Include the `embedding_option` parameter to retrieve video embeddings
  - Requires the Marengo video understanding model to be enabled in your index
- Retrieve transcriptions:
  - Set the `transcription` parameter to `true` to retrieve spoken words from your video
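The monitoring flow above can be sketched as a small polling loop. The base URL and exact path below are assumptions built from the path parameters on this page, and the `fetch` callable is injected so the sketch can be exercised without a live API key or network access:

```python
import time

# Assumed endpoint shape, derived from this page's path parameters;
# the base URL and path are illustrative, not confirmed here.
API_BASE = "https://api.twelvelabs.io/v1.3"

def poll_indexed_asset(index_id, asset_id, fetch, interval=5.0, max_attempts=60):
    """Poll the retrieve endpoint until the asset's `status` is `ready`.

    `fetch(url, headers)` performs the HTTP GET and returns the parsed
    JSON body; injecting it keeps the sketch testable without a network.
    """
    url = f"{API_BASE}/indexes/{index_id}/indexed-assets/{asset_id}"
    headers = {"x-api-key": "YOUR_API_KEY"}  # placeholder key
    for _ in range(max_attempts):
        body = fetch(url, headers)
        if body.get("status") == "ready":
            return body
        time.sleep(interval)
    raise TimeoutError("indexed asset did not reach 'ready' in time")
```

In production, `fetch` would wrap your HTTP client of choice; the loop returns the full response body, so the metadata and optional fields are immediately available once the asset is ready.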

Authentication

`x-api-key` string
Your API key. <Note title="Note"> You can find your API key on the <a href="https://playground.twelvelabs.io/dashboard/api-key" target="_blank">API Key</a> page. </Note>

Path Parameters

`index-id` string, Required
The unique identifier of the index to which the indexed asset has been uploaded.
`indexed-asset-id` string, Required
The unique identifier of the indexed asset to retrieve.

Query Parameters

`embedding_option` list of enums, Optional
Specifies which types of embeddings to retrieve. Values vary depending on the version of the model:

- **Marengo 3.0**: `visual`, `audio`, `transcription`
- **Marengo 2.7**: `visual-text`, `audio`

For details, see the [Embedding options](/v1.3/docs/concepts/modalities#embedding-options) section. <Note title="Note"> To retrieve embeddings for a video, it must be indexed using the Marengo video understanding model. For details on enabling this model for an index, see the [Create an index](/reference/create-index) page. </Note>
`transcription` boolean, Optional
Indicates whether to retrieve a transcription of the spoken words in the indexed asset.
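A request that asks for both embeddings and a transcription combines the two query parameters above. The serialization below — list values sent as repeated keys — is an assumption; verify how your HTTP client encodes list parameters:

```python
from urllib.parse import urlencode

# Assumed serialization: list values are sent as repeated query keys.
# Some clients use comma-separated values instead; check your client.
params = [
    ("embedding_option", "visual"),
    ("embedding_option", "audio"),
    ("transcription", "true"),
]
query = urlencode(params)
# query == "embedding_option=visual&embedding_option=audio&transcription=true"
```

The resulting query string is appended to the retrieve endpoint's URL.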

Response

Information about the specified indexed asset has been successfully retrieved.
`_id` string or null
A string representing the unique identifier of an indexed asset. The platform creates a new indexed asset object and assigns it a unique identifier when the asset is created for indexing.
`created_at` string or null

A string indicating the date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the indexing task was created.
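The timestamp fields on this page (`created_at`, `indexed_at`, `updated_at`) all use the RFC 3339 format. A minimal Python sketch for parsing them:

```python
from datetime import datetime

def parse_rfc3339(ts):
    # datetime.fromisoformat accepts a trailing "Z" only from Python 3.11,
    # so normalize it to an explicit UTC offset for older versions.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

created = parse_rfc3339("2024-05-01T12:30:00Z")  # example timestamp
```

The returned `datetime` is timezone-aware, so subtracting two parsed timestamps gives the elapsed indexing time directly.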

`embedding` object or null

Contains the embedding and the associated information. The platform returns this field only when the `embedding_option` parameter is specified in the request.

`hls` object or null

The platform returns this object only for videos uploaded with the `enable_video_stream` parameter set to `true`.

`indexed_at` string or null

A string indicating the date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the indexing task was completed.

`status` enum or null
The status of the indexing task.
`system_metadata` object or null

System-generated metadata about the indexed asset.

`transcription` list of objects or null

An array of objects containing the transcription. For each time range in which the platform finds spoken words, it returns an object that contains the fields below. If the platform doesn't find any spoken words, the `data` field is set to `null`.
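Reading the transcription back out might look like the sketch below. The segment field names (`start`, `end`, `value`) are assumptions for illustration, since this page does not list the fields of each object:

```python
# Illustrative response fragment; the segment field names (`start`,
# `end`, `value`) are assumed, not confirmed by this page.
sample_body = {
    "transcription": [
        {"start": 0.0, "end": 2.5, "value": "Hello and welcome."},
        {"start": 2.5, "end": 5.0, "value": "Let's get started."},
    ]
}

def transcript_text(body):
    # The field may be null when the platform finds no spoken words,
    # so fall back to an empty list before joining.
    segments = body.get("transcription") or []
    return " ".join(seg["value"] for seg in segments)
```

Keeping the null check in one place avoids scattering `is None` tests through code that consumes the transcript.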

`updated_at` string or null

A string indicating the date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the indexing task was last updated. The platform updates this field every time the indexing task transitions to a different state.

`user_metadata` map from strings to any, or null

User-generated metadata about the indexed asset.

Errors