Navigating the AI Wave: How Ollama and OpenWebUI Help Us Survive the DeepSeek R1 Boom
Open Source AI Tools Ollama and Open WebUI
At the end of January 2025, just before the Chinese New Year (the Year of the Wood Snake), the AI world was shaken by groundbreaking news: DeepSeek unveiled its latest LLM, claiming to match or surpass leading models from OpenAI (GPT), Google (Gemini), Anthropic (Claude), Mistral AI (Mistral), and xAI (Grok). This announcement sent shockwaves through the industry, sparking discussions about the future of AI dominance.
As a Java developer, your main focus is figuring out if you can seamlessly integrate this open-source LLM into your web application. Let’s explore how you can work entirely offline with a complete AI tool stack:
- Ollama: Your local provider for LLMs, running entirely on your system.
- Open WebUI: A user-friendly web interface for interacting with Ollama, giving you a smooth, intuitive experience right from your browser.
This setup gives you complete control and privacy while working with powerful AI models locally — exactly what every developer craves, especially when you’re coding on your MacBook!
First Step: Ollama — The Local Open Source LLM Provider
You start by figuring out how to download the DeepSeek R1 LLM and run it locally on your development machine, an M-series MacBook in this case. Ollama is the right tool for the job.
Ollama is a platform that allows users to run and manage large language models (LLMs) locally on their own devices. It provides an easy-to-use interface for downloading, running, and interacting with open-source AI models without relying on cloud-based services.
There are several ways to install Ollama on your laptop, such as using Docker or an installer. I chose the installer option because it allows for automatic updates to Ollama. Once installed, you’ll see the Ollama icon in your system tray.
To download an LLM, or to check which LLMs are already available on your laptop, you can use the ollama command-line tool in a terminal:

ollama help
ollama list
ollama run <model>

ollama list shows the models you have already downloaded, while ollama run <model> downloads the model on first use and then starts an interactive chat session with it. For example, ollama run deepseek-r1 pulls and runs the DeepSeek R1 model.
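Beyond the interactive CLI, Ollama exposes a local REST API on port 11434, which is how you would wire it into a Java web application. Below is a minimal sketch using the JDK's built-in HttpClient to call the /api/generate endpoint; it assumes Ollama is running locally and that you have already pulled the deepseek-r1 model, and the class and method names are illustrative, not part of any library.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: call Ollama's local REST API (default: http://localhost:11434).
// Assumes the deepseek-r1 model has already been pulled with `ollama run deepseek-r1`.
public class OllamaClient {

    // Build the JSON payload for Ollama's /api/generate endpoint.
    // "stream": false asks for a single JSON response instead of a token stream.
    static String buildPayload(String model, String prompt) {
        return """
                {"model": "%s", "prompt": "%s", "stream": false}""".formatted(model, prompt);
    }

    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        buildPayload("deepseek-r1", "Why is the sky blue?")))
                .build();

        try {
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // The reply is JSON; the generated text is in its "response" field.
            System.out.println(response.body());
        } catch (Exception e) {
            System.err.println("Could not reach Ollama. Is it running? " + e.getMessage());
        }
    }
}
```

In a real application you would use a JSON library (such as Jackson) to build and parse the payloads instead of formatting strings by hand, but this shows the core of the integration: plain HTTP against localhost, no cloud service involved.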