# Step 5: Install Ollama

Ollama is the engine that runs AI models locally. One command installs everything.
## Install Ollama

**bash — Ubuntu / other Linux**

```bash
# One command installs Ollama and sets it up as a system service
curl -fsSL https://ollama.com/install.sh | sh
```

**PowerShell — Windows**

```powershell
irm https://ollama.com/install.ps1 | iex
```

💡 **Official Download Page**

You can also download Ollama directly from https://ollama.com/download for Ubuntu, macOS, and Windows.
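Before starting the service, it is worth confirming the install actually landed. A minimal check (any platform with a POSIX shell):

```shell
# Verify the ollama binary is on the PATH after installation
OLLAMA_BIN="$(command -v ollama || true)"

if [ -n "$OLLAMA_BIN" ]; then
  echo "Found ollama at $OLLAMA_BIN"
  ollama --version   # prints the installed version
else
  echo "ollama not found - re-run the installer or check your PATH"
fi
```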
## Start & Enable Ollama

On Linux the installer registers Ollama as a systemd service; on macOS and Windows the app manages itself in the background.

```bash
# Start the service
sudo systemctl start ollama

# Enable auto-start on boot
sudo systemctl enable ollama

# Verify it's running
sudo systemctl status ollama
# Should show: Active: active (running)

# Test the API
curl http://localhost:11434
# Should respond: Ollama is running
```

## Essential Ollama Commands
| Command | What it Does |
|---|---|
| `ollama list` | List all downloaded models and their sizes |
| `ollama ps` | Show which model is currently loaded in RAM |
| `ollama pull <model>` | Download a model from the Ollama library |
| `ollama run <model>` | Run a model interactively in the terminal |
| `ollama run <model> --verbose` | Run with performance stats (tokens/sec) |
| `ollama rm <model>` | Remove a model and free up disk space |
| `ollama stop <model>` | Unload a model from memory |
| `journalctl -u ollama -f` | Watch Ollama logs in real time |
| `sudo systemctl restart ollama` | Restart the Ollama service |
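The commands above combine into a typical first-run workflow. A sketch, assuming a placeholder model tag (`llama3.2:1b` — substitute any model from the Ollama library) and guarding for the case where Ollama is not installed yet; the final `curl` calls the REST API's `/api/generate` endpoint on the default port 11434:

```shell
# llama3.2:1b is an example model tag - substitute any model you like
MODEL="llama3.2:1b"

if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL"                              # download the model
  ollama run "$MODEL" "Say hello in one sentence."  # one-shot prompt, then exit
  ollama ps                                         # model stays loaded for a while
  ollama stop "$MODEL"                              # unload it from memory now

  # Same idea through the REST API (single JSON response, non-streaming)
  curl -s http://localhost:11434/api/generate \
    -d "{\"model\": \"$MODEL\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}"
  RESULT="ran"
else
  echo "ollama is not installed - complete the install step first"
  RESULT="skipped"
fi
```

Setting `"stream": false` makes the API return one complete JSON object instead of a stream of partial chunks, which is easier to handle in shell scripts.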
ℹ️ **Video Walkthrough Available**

Watch a step-by-step video on YouTube: How to Install Ollama on Ubuntu Linux