LangChain Integration
Give your AI agents eyes, ears, and memory
The langchain-mixpeek package gives LangChain agents the ability to see video, hear audio, search images, and act on unstructured content. It includes a six-tool agent toolkit (search, ingest, process, classify, cluster, alert), a VectorStore that supports all file types, and bridge methods for converting between retrievers, tools, and toolkits.
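The retriever-to-tool bridge idea can be pictured with a minimal pure-Python stand-in. Every class and method name below (SketchRetriever, as_tool, and so on) is illustrative only and is not guaranteed to match langchain-mixpeek's actual API:

```python
# Illustrative stand-in for the retriever/tool bridge pattern.
# None of these names are langchain-mixpeek's real API.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SketchTool:
    name: str
    func: Callable[[str], str]

    def run(self, query: str) -> str:
        return self.func(query)

@dataclass
class SketchRetriever:
    # Toy key/value index standing in for multimodal vector search.
    index: dict = field(default_factory=dict)

    def retrieve(self, query: str) -> str:
        return self.index.get(query, "no match")

    def as_tool(self) -> SketchTool:
        # Bridge: expose the retriever as an agent-callable tool.
        return SketchTool(name="sketch_search", func=self.retrieve)

retriever = SketchRetriever(index={"logo": "3 image hits"})
tool = retriever.as_tool()
print(tool.run("logo"))  # → 3 image hits
```

The point of the bridge is that the same search backend can be handed to an agent as a tool, used directly as a retriever in a RAG chain, or bundled into a toolkit, without reimplementing it.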
Built for AI engineers and teams building agents that need to see, hear, or search unstructured content.
Get Started
Integrations
Works with any LangChain-compatible LLM and agent framework.
pip install langchain-mixpeek
npm install @mixpeek/langchain
Use Cases
Brand protection agents that scan marketplaces for counterfeits
Archive search agents that find faces, objects, or moments in video
Content compliance agents that flag IP/trademark risks
Multimodal RAG pipelines across text, images, video, and audio
Features
Example
Agent with full Mixpeek capabilities
from langchain_mixpeek import MixpeekToolkit
from langgraph.prebuilt import create_react_agent

toolkit = MixpeekToolkit(
    api_key="mxp_...",
    namespace="brand-protection",
    bucket_id="bkt_...",
    collection_id="col_...",
    retriever_id="ret_...",
)

agent = create_react_agent(llm, toolkit.get_tools())
result = agent.invoke({"messages": [("user", "Scan these listings for counterfeits")]})
Agent capabilities
Tools: mixpeek_search, mixpeek_ingest, mixpeek_process, mixpeek_classify, mixpeek_cluster, mixpeek_alert

The agent can now:
- Search video, images, audio, documents
- Upload and process new content
- Classify documents with taxonomies
- Group similar content into clusters
- Set up monitoring alerts (webhook, Slack, email)
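One way to picture how an agent routes between named tools like these is a toy dispatcher. This is not the actual LangGraph ReAct loop, and the tool bodies are placeholders rather than real Mixpeek calls:

```python
# Toy name-based tool dispatcher: the same routing idea a ReAct agent
# applies when it picks mixpeek_search vs. mixpeek_ingest, etc.
# Tool implementations here are placeholders, not real Mixpeek calls.

from typing import Callable, Dict

def make_registry() -> Dict[str, Callable[[str], str]]:
    return {
        "mixpeek_search": lambda q: f"searched for {q!r}",
        "mixpeek_ingest": lambda q: f"ingested {q!r}",
        "mixpeek_classify": lambda q: f"classified {q!r}",
    }

def dispatch(registry: Dict[str, Callable[[str], str]], tool: str, arg: str) -> str:
    # An agent's chosen tool name is looked up, then invoked with its argument.
    if tool not in registry:
        raise KeyError(f"unknown tool: {tool}")
    return registry[tool](arg)

registry = make_registry()
print(dispatch(registry, "mixpeek_search", "counterfeit logos"))
# → searched for 'counterfeit logos'
```

In the real toolkit, the LLM selects the tool name and argument at each step; the framework performs this lookup-and-invoke cycle and feeds the result back into the conversation.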
Ready to integrate?
Get started with LangChain Integration in minutes. Check out the documentation or explore the source code on GitHub.
