- Ollama
Get up and running with large language models. Run DeepSeek-R1, Qwen 3, Llama 3.3, Qwen 2.5-VL, Gemma 3, and other models, locally.
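Each entry below corresponds to a model tag that can be run with the Ollama CLI. A minimal sketch, assuming Ollama is installed and the tags match the library names shown on ollama.com (exact size suffixes such as `:8b` may vary by release):

```shell
# Pull a model from the Ollama library, then chat with it via a one-off prompt.
ollama pull llama3.2
ollama run llama3.2 "Summarize the benefits of running models locally."

# Run a specific distilled size of a model by its tag (tag names are assumptions here):
ollama run deepseek-r1:8b "Explain chain-of-thought reasoning in one paragraph."
```

Running `ollama run` without a prompt opens an interactive session instead.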
- llama3.2
The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.
- deepseek-r1 - ollama.com
DeepSeek-R1 has received a minor version upgrade to DeepSeek-R1-0528, covering both the 8 billion parameter distilled model and the full 671 billion parameter model. In this update, DeepSeek-R1 has significantly improved its reasoning and inference capabilities.
- qwen3 - ollama.com
Qwen 3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. The flagship model, Qwen3-235B-A22B, achieves competitive results in benchmark evaluations of coding, math, general capabilities, etc., when compared to other top-tier models such as DeepSeek-R1.
- phi4-mini - ollama.com
Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites, with a focus on high-quality, reasoning-dense data.
- gemma2 - Ollama
At 27 billion parameters, Gemma 2 delivers performance surpassing models more than twice its size in benchmarks. This breakthrough efficiency sets a new standard in the open model landscape.
- openthinker - ollama.com
A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.
- olmo2 - ollama.com
OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.