LlamaBot with Ollama on my home virtual private network
Or: Step 0 on my journey to save on LLM API call costs :)
Hello fellow datanistas!
Have you been curious about running an Ollama server on your home's private network? If so, I have some exciting news for you! I've just published a new blog post detailing how I did exactly that. In the post, I share step-by-step instructions on how to set up Tailscale, install Ollama on a GPU box, configure it for network access, and even interact with it using LlamaBot. I hope you’ll find it informative!
(As a bonus: doing so helped me breathe new life into my old GPU box, which runs an NVIDIA GTX 1080!)
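To give a flavor of what's involved, here is a minimal sketch of the "configure Ollama for network access" step. The hostname `gpu-box` is a hypothetical Tailscale machine name used for illustration; substitute your own, and see the full post for the complete walkthrough.

```shell
# On the GPU box: have Ollama listen on all interfaces (not just localhost)
# so other machines on the tailnet can reach it.
OLLAMA_HOST=0.0.0.0 ollama serve

# From any other machine on the tailnet, query Ollama's HTTP API on its
# default port (11434). "gpu-box" is a placeholder Tailscale hostname.
curl http://gpu-box:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Hello!", "stream": false}'
```

With that in place, clients like LlamaBot can be pointed at the tailnet address instead of a paid API endpoint.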
You can read the post here. If you find it helpful, I encourage you to share it with others who might also benefit from it. As always, I appreciate your support and look forward to hearing your thoughts and experiences.
Happy reading and experimenting!
Cheers,
Eric