AI Video Search API
What can TwelveLabs' video understanding AI do for me?
TwelveLabs provides a multimodal video understanding platform that helps you search, summarize, analyze, remix, and automate workflows across your entire video library. It sees, hears, and reasons about video content to uncover deep insights beyond simple tags, so you can understand the full story of your footage at scale. The technology supports searching across speech, text, audio, and visuals, and can be deployed on cloud, private cloud, or on‑premise.
How does TwelveLabs help me find moments in videos using natural language?
You can pinpoint exact moments in large video libraries by describing what you’re looking for in natural language. TwelveLabs’ search capabilities interpret your description to locate the relevant scenes quickly and accurately.
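As an illustrative sketch of what such a request could look like: the base URL, endpoint path, header name, and field names below are assumptions based on common REST conventions, not the confirmed TwelveLabs request format, so consult the API Docs for the authoritative details.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"      # placeholder credential
INDEX_ID = "YOUR_INDEX_ID"    # placeholder index identifier
BASE_URL = "https://api.twelvelabs.io/v1.3"  # assumed base URL; verify in the API Docs


def build_search_payload(query: str) -> dict:
    """Build a natural-language search request body (field names are assumed)."""
    return {
        "index_id": INDEX_ID,
        "query_text": query,                    # plain-English description of the moment
        "search_options": ["visual", "audio"],  # modalities to search across
    }


def search_moments(query: str) -> dict:
    """POST the search request and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/search",
        data=json.dumps(build_search_payload(query)).encode(),
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # hits would carry fields like video ID, start, end, score
```

In this sketch, each result is expected to identify a video and a time range within it, which is what lets you jump straight to the described moment rather than to a whole file.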
Can I customize or fine-tune TwelveLabs' models on my own data?
Yes. TwelveLabs’ models can be trained on your data to become domain experts. Fine-tuning is available on Developer and Enterprise plans. The Free plan does not include fine-tuning.
What deployment options are available (cloud, on‑premise, etc.)?
TwelveLabs can be deployed on cloud, private cloud, or on‑premise. The Enterprise tier provides a dedicated environment, while Free and Developer tiers use shared environments. Single sign-on options (SSO/SAML) are included on higher tiers.
What are the pricing tiers and what do they include?
- Free: For testing and building. Indexing is limited to under 10 hours. Shared environment. Org account included. SSO/SAML not included. Fine-tuning not included.
- Developer: For launching and growing. Indexing limit is unlimited. Shared environment. Org account included. SSO/SAML included. Fine-tuning included.
- Enterprise: For scaling and services. Indexing limit is unlimited. Dedicated environment. Org account included. SSO/SAML included. Fine-tuning included.
All tiers follow a pay‑as‑you‑go model, and there is a free developer trial to explore the API.
Is there a free trial or developer sandbox?
Yes. TwelveLabs offers a free developer trial to explore the API and capabilities before committing to a plan.
What APIs does TwelveLabs offer, and what do they do?
- Search API: Pinpoint exact moments in videos using natural language descriptions.
- Analyze API: Generate textual descriptions and insights about video content.
- Embed API: Produce multimodal vector embeddings to power semantic search, recommendations, and more.
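To illustrate how Embed API output could power semantic search, the toy sketch below ranks clips by cosine similarity between embedding vectors. The vectors here are made-up 3-dimensional examples, not real model output (real embeddings are much higher-dimensional), and the clip IDs are hypothetical.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def rank_clips(query_vec: list[float],
               clip_vecs: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Rank clip IDs by similarity to the query embedding, best first."""
    scored = [(cid, cosine_similarity(query_vec, v)) for cid, v in clip_vecs.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)


# Toy embeddings standing in for Embed API output.
clips = {
    "clip_a": [0.9, 0.1, 0.0],
    "clip_b": [0.0, 1.0, 0.2],
}
ranking = rank_clips([1.0, 0.0, 0.0], clips)  # clip_a ranks first
```

The same ranking pattern generalizes to recommendations: embed a reference clip instead of a query and rank the rest of the library against it.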
Where can I find developer resources like API docs and SDKs?
Developer resources are available through the Developer Hub, including API Docs, SDKs, Sample Apps, and the Playground. You can access these to build and test with TwelveLabs’ APIs.
What industries or use cases do you serve?
TwelveLabs serves a range of use cases including Media & Entertainment, Advertising, and Government & Security, helping organizations search, analyze, and act on video data across sectors.
How do I try the Playground?
The Playground is available to trial and experiment with TwelveLabs’ video understanding capabilities. You can try sample videos and prompts to see next‑level video intelligence in action.
How can I get started or talk to sales?
You can explore the platform via the Playground and use the “Talk to Sales” option to connect with the TwelveLabs team to discuss sign‑up or enterprise options.
How do you handle privacy and cookies?
This site uses cookies to enable essential functionality, analytics, personalization, and targeted advertising. You can Accept, Deny, or Manage Preferences, and view the Privacy Policy for details.
What languages are supported in the UI?
The site provides language options, including English and Korean, accessible via the language selector.
Are there any known limitations I should consider?
- Limited flexibility in defining entirely new classes or labels for some tasks.
- Complex or ambiguous queries may be challenging and may require refinement.
- Depending on use case, some results may vary in quality or accuracy.
- Some video types or domains may require additional customization or specialized approaches.
What is the underlying technology behind TwelveLabs' video understanding?
TwelveLabs combines a powerful encoder model (Marengo) with a native video‑language model (Pegasus) to achieve temporal and spatial reasoning across video content.
Do you offer sample apps, partner resources, or security information?
Yes. The site includes Sample Apps, Partners, Security information, and a Trust Center to help you evaluate and integrate TwelveLabs into your workflows.