How good is Ollama on Windows? : r/ollama - Reddit
I have a 4070 Ti 16GB card, a Ryzen 5 5600X, and 32GB of RAM. I want to run Stable Diffusion (already installed and working), Ollama with some 7B models (maybe a little heavier if possible), and Open WebUI. I don't want to have to rely on WSL because it's difficult to expose that to the rest of my network. I've been searching for guides, but they all seem to either…
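On the network-exposure concern above: a native Windows install of Ollama listens on 127.0.0.1:11434 by default, and setting the OLLAMA_HOST environment variable to 0.0.0.0 before starting the server makes it reachable from the rest of the LAN without WSL. A minimal sketch of querying it from another machine, assuming that setup and a placeholder IP for the Windows box:

```python
# Minimal sketch: query a native-Windows Ollama instance from another
# machine on the LAN. Assumes the server was started with
# OLLAMA_HOST=0.0.0.0 so it listens on all interfaces, and that
# 192.168.1.50 is the Windows box (hypothetical IP).
import requests

resp = requests.post(
    "http://192.168.1.50:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```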
Local Ollama Text to Speech? : r/robotics - Reddit
Yes, I was able to run it on an RPi. Ollama works great; Mistral and some of the smaller models work. LLaVA takes a bit of time, but works. For text to speech, you'll have to run an API from ElevenLabs, for example. I haven't found a fast text-to-speech / speech-to-text setup that's fully open source yet. If you find one, please keep us in the loop.
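One fully offline option the thread doesn't mention is pyttsx3, which wraps the operating system's own speech engine. A hedged sketch piping an Ollama completion into it (not the poster's setup; ElevenLabs is a hosted API):

```python
# Sketch: speak an Ollama completion with pyttsx3, an offline
# text-to-speech library that drives the OS speech engine.
import requests
import pyttsx3

answer = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Give me a one-sentence fun fact.",
          "stream": False},
    timeout=120,
).json()["response"]

engine = pyttsx3.init()       # SAPI5 on Windows, espeak on Linux, NSSpeech on macOS
engine.say(answer)            # queue the text
engine.runAndWait()           # block until playback finishes
```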
Request for Stop command for Ollama Server : r/ollama - Reddit
OK, so ollama doesn't have a stop or exit command. We have to manually kill the process, and this is not very useful, especially because the server respawns immediately. So there should be a stop command as well. Edit: yes, I know and use these commands, but these are all system commands which vary from OS to OS. I am talking about a single command.
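Absent a built-in stop command, one cross-platform workaround is to locate and terminate the process yourself. A hedged sketch using the third-party psutil package (on a Linux systemd install, `systemctl stop ollama` is the cleaner route, since the supervisor is what respawns the process):

```python
# Workaround sketch: find and terminate running "ollama" processes.
# Requires the psutil package; will not prevent a systemd-managed
# service from being restarted by its supervisor.
import psutil

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and "ollama" in proc.info["name"].lower():
        proc.terminate()          # SIGTERM (TerminateProcess on Windows)
        try:
            proc.wait(timeout=5)  # give it a moment to exit cleanly
        except psutil.TimeoutExpired:
            proc.kill()           # escalate if it refuses
```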
How to add web search to ollama model : r/ollama - Reddit
Hello guys, does anyone know how to add an internet search option to Ollama? I was thinking of using LangChain with a search tool like DuckDuckGo. What do you think?
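Along the lines the poster suggests, a minimal "search, then answer" sketch: fetch DuckDuckGo results with LangChain's community tool and stuff them into the prompt for a local Ollama model. Assumes the langchain-community and duckduckgo-search packages are installed:

```python
# Sketch: ground an Ollama model on DuckDuckGo search results.
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.llms import Ollama

question = "What did Ollama release most recently?"
snippets = DuckDuckGoSearchRun().run(question)   # plain-text search results

llm = Ollama(model="mistral")
print(llm.invoke(
    f"Using only these search results:\n{snippets}\n\n"
    f"Answer the question: {question}"
))
```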
Training a model with my own data : r/LocalLLaMA - Reddit
I'm using Ollama to run my models. I want to use the Mistral model, but create a LoRA to act as an assistant that primarily references data I've supplied during training. This data will include things like test procedures, diagnostics help, and general process flows for what to do in different scenarios.
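A rough sketch of the first step with Hugging Face PEFT: attaching a LoRA adapter to Mistral before fine-tuning on your own documents. The rank and target modules below are common defaults, not values from the thread, and the trained adapter still has to be merged and converted to GGUF before Ollama can serve it:

```python
# Sketch: wrap Mistral-7B with a LoRA adapter for fine-tuning.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # sanity check: ~0.1% of weights train
# ...train on your test procedures / diagnostics data, then export.
```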
Ollama Server Setup Guide : r/LocalLLaMA - Reddit
I recently set up a language model server with Ollama on a box running Debian, a process that consisted of a pretty thorough crawl through many documentation sites and wiki forums.
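A quick way to verify such a setup is to hit the server's /api/tags endpoint, which lists the models it has pulled. A minimal check, assuming the default port:

```python
# Sanity check for a fresh Ollama install: list the pulled models.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=10)
for model in resp.json().get("models", []):
    print(model["name"])
```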
Completely Local RAG with Ollama Web UI, in Two Docker … - Reddit
Here's what's new in ollama-webui: 🔍 Completely Local RAG Support - Dive into rich, contextualized responses with our newly integrated Retrieval-Augmented Generation (RAG) feature, all processed locally for enhanced privacy and speed.
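To make "completely local RAG" concrete, a toy sketch: embed a few documents with Ollama's embeddings endpoint, retrieve the closest one by cosine similarity, and hand it to the model as context. The nomic-embed-text model is an assumption; any local embedding model works:

```python
# Toy local-RAG sketch: embed, retrieve by cosine similarity, generate.
import requests
import numpy as np

def embed(text):
    r = requests.post("http://localhost:11434/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return np.array(r.json()["embedding"])

docs = ["Ollama runs LLMs locally.", "RAG retrieves documents before generating."]
doc_vecs = [embed(d) for d in docs]

query = "How does RAG work?"
q = embed(query)
scores = [v @ q / (np.linalg.norm(v) * np.linalg.norm(q)) for v in doc_vecs]
best = docs[int(np.argmax(scores))]       # most similar document

answer = requests.post("http://localhost:11434/api/generate",
                       json={"model": "mistral", "stream": False,
                             "prompt": f"Context: {best}\n\nQuestion: {query}"})
print(answer.json()["response"])
```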
What's the best 7B model on Ollama right now? [Feb 2024] : r/… - Reddit
I think this question should be discussed every month. If applicable, please separate out your best models by use case. Specifically Ollama, because that's the easiest way to build with LLMs right now: best overall general use, best for coding, best for RAG, best conversational (chatbot applications), and best uncensored.
High CPU usage instead of GPU : r/ollama - Reddit
Hi there, I am running Ollama, and for some reason I think inference is done by the CPU. Generation is slow, and I think if I let it rest for more than 20 seconds the model gets offloaded and then loaded again, which takes 3 to 5 minutes because it's big.
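The reload-after-idle behaviour matches Ollama's default of unloading a model after about five minutes of inactivity. A hedged workaround: pass keep_alive on a request to pin the model in memory (on the CLI, `ollama ps` shows how much of a loaded model sits on CPU versus GPU):

```python
# Warm-up sketch: load the model once and keep it resident.
# keep_alive=-1 means "never unload"; the default is "5m".
import requests

requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "warm up", "stream": False,
          "keep_alive": -1},
    timeout=300,
)
```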