AnythingLLM Documentation
Learn about AnythingLLM's features and how to use them. Last updated on October 7, 2025.
AI Agent Usage - AnythingLLM: With AnythingLLM, any model can be used as an agent, but how well it comprehends instructions and tool-calling examples still depends on the model itself.
AI Agent Setup - AnythingLLM: For rate-limit information, see the Google Custom Search API documentation. That's it! You can now use AnythingLLM's agents. To learn how to use AI agents, read our AI Agent Guide or watch the video below.
Desktop Installation Overview - AnythingLLM: AnythingLLM Desktop is a "single-player" application you can install on any Mac, Windows, or Linux operating system to get local LLMs, RAG, and agents with little to no configuration and full privacy.
Windows Installation - AnythingLLM: For AnythingLLM to leverage your GPU (NVIDIA or AMD) or even your NPU, some extra dependencies need to be installed. This is done automatically during installation.
Overview - AnythingLLM: You can change your LLM provider, model, or any other details at any time in AnythingLLM without worry. You can connect to both local and cloud-based LLMs, even at the same time!
MCP on AnythingLLM Docker - AnythingLLM: AnythingLLM automatically starts MCP servers when you open the "Agent Skills" page in the AnythingLLM UI or invoke the @agent directive. All MCP servers are started in the background, so subsequent "boots" are much faster because the servers are already running.
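As a sketch of where those MCP servers come from: AnythingLLM reads a Claude-style MCP server definition file from its storage directory. The filename, path, and the example "filesystem" server below are assumptions for illustration, not a verified part of this page; check the MCP docs page for your install.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/app/server/storage"]
    }
  }
}
```

Each key under "mcpServers" names one server; the command and args are what AnythingLLM launches in the background when agents boot.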
AnythingLLM Default Transcription Model - AnythingLLM: Using the local Whisper model on machines with limited RAM or CPU can stall AnythingLLM when processing media files. We recommend at least 2GB of RAM and uploading files smaller than 10MB.
Quickstart - AnythingLLM: Use the Dockerized version of AnythingLLM for a much faster and more complete startup compared to running from the source code directly. Start AnythingLLM via Docker.
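A minimal sketch of that Docker startup, assuming the upstream image name mintplexlabs/anythingllm and default port 3001; the host storage path is an assumption you can change:

```shell
# Assumption: persist data in ~/anythingllm on the host
export STORAGE_LOCATION="$HOME/anythingllm"
mkdir -p "$STORAGE_LOCATION" && touch "$STORAGE_LOCATION/.env"

# Run the image in the background; the UI/API is served on http://localhost:3001
docker run -d -p 3001:3001 \
  --cap-add SYS_ADMIN \
  -v "$STORAGE_LOCATION:/app/server/storage" \
  -v "$STORAGE_LOCATION/.env:/app/server/.env" \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```

Mounting the storage folder and .env file keeps workspaces and settings across container upgrades.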