# Troubleshooting

Common issues and how to fix them.
## OneAiMind Won't Start

Check the service logs first:

```shell
journalctl -u oneaimind -n 50 --no-pager

# Or run manually:
cd ~/OneAiMind && npm start
```

| Error | Cause | Fix |
|---|---|---|
| Connection refused (DB) | PostgreSQL stopped | `sudo systemctl start postgresql` |
| Cannot connect to Ollama | Ollama stopped | `sudo systemctl start ollama` |
| Authentication failed | Wrong password in `.env` | Check `DATABASE_URL` in `.env` |
| Port 3000 in use | Another process using the port | `sudo lsof -i :3000` |
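If the app still fails to connect, it's worth confirming that `DATABASE_URL` actually points at the host and port you expect. A minimal sketch, assuming the usual `postgres://user:password@host:port/dbname` format — the `db_hostport` helper is illustrative, not part of OneAiMind:

```shell
#!/bin/sh
# Extract "host:port" from a postgres:// URL so you can eyeball it.
# Assumed URL shape: postgres://user:password@host:port/dbname
db_hostport() {
  printf '%s\n' "$1" | sed -E 's#.*@([^/]+)/.*#\1#'
}

# Placeholder URL for illustration; in practice read it from .env:
#   url=$(grep '^DATABASE_URL=' .env | cut -d= -f2-)
url="postgres://oneai:secret@localhost:5432/oneaimind"
db_hostport "$url"   # -> localhost:5432
```

If the printed host/port doesn't match a running PostgreSQL instance, that's the connection error.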
## AI Responses Are Very Slow

Slow responses are normal on CPU-only machines with large models. Solutions:

```shell
# Switch to a smaller model in .env:
OLLAMA_MODEL=llama3.1:8b   # good for 16 GB RAM minimum

# Check what's loaded in memory:
ollama ps

# Check system resources:
htop
```

## Ubuntu Won't Boot from USB
## Can't Access from Another Device

Open the firewall port and find the server's IP:

```shell
sudo ufw allow 3000
hostname -I   # find your IP address
```

## Quick Service Commands
| Service | Start | Status | Logs |
|---|---|---|---|
| PostgreSQL | `sudo systemctl start postgresql` | `systemctl status postgresql` | `journalctl -u postgresql` |
| Ollama | `sudo systemctl start ollama` | `systemctl status ollama` | `journalctl -u ollama -f` |
| OneAiMind | `sudo systemctl start oneaimind` | `systemctl status oneaimind` | `journalctl -u oneaimind -f` |
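The table above can be folded into a one-shot health check. A sketch assuming systemd is present (the `report` helper is illustrative); it prints "unknown" where `systemctl` is unavailable:

```shell
#!/bin/sh
# Print one status line per service used by OneAiMind.
report() {
  printf '%s: %s\n' "$1" "$2"
}

for svc in postgresql ollama oneaimind; do
  if command -v systemctl >/dev/null 2>&1; then
    # is-active prints "active"/"inactive"; exits nonzero when not active
    state=$(systemctl is-active "$svc" 2>/dev/null || true)
  else
    state=""
  fi
  report "$svc" "${state:-unknown}"
done
```

Any line that isn't `active` points at the service to restart with the commands in the table.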
## Frequently Asked Questions
### Does OneAiMind work completely offline?

Yes. Once installed with models downloaded, everything runs locally. No internet needed for daily use.
### Is my data sent to the cloud?

No. All data stays on your machine. Cloud AI keys are optional — only used if you explicitly configure them.
### Can I transfer my license to another machine?

Each license key is bound to one machine. Contact support to deactivate and reactivate on a new device.
### Which AI model should I use?

For 32 GB RAM (recommended), use `qwen2.5:14b` as your primary model. Check the AI Models page for all RAM tiers.
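The RAM-to-model guidance above can be sketched as a small helper. The thresholds and the small-machine fallback here are assumptions drawn from this guide (32 GB → `qwen2.5:14b`, 16 GB → `llama3.1:8b`, ~2 GB `llama3.2` otherwise), not the full AI Models table:

```shell
#!/bin/sh
# Suggest an Ollama model for a given amount of RAM in GB.
# Thresholds sit slightly below the nominal sizes because the kernel
# reserves some memory, so a "32 GB" machine may report ~31 GB.
suggest_model() {
  ram_gb="$1"
  if [ "$ram_gb" -ge 30 ]; then
    echo "qwen2.5:14b"
  elif [ "$ram_gb" -ge 14 ]; then
    echo "llama3.1:8b"
  else
    echo "llama3.2"   # ~2 GB, fits small machines
  fi
}

# On Linux, detect installed RAM and print a suggestion:
if [ -r /proc/meminfo ]; then
  ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
  suggest_model $(( ram_kb / 1024 / 1024 ))
fi
```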
### Can I use OneAiMind with cloud AI (ChatGPT, Claude)?

Yes. Add your API keys in the `.env` file. OneAiMind can use both local and cloud models — your choice per conversation.
### How much disk space do AI models use?

From 2 GB (`llama3.2`) to 40 GB (`llama3.3:70b`). A typical 32 GB RAM setup needs about 16 GB of storage for models.
### Do I need a GPU?

No, but it helps tremendously. An 8 GB VRAM GPU is a massive upgrade (1–3 s vs 10–30 s responses). 16 GB VRAM (single or multi-card) is ideal.
### Can I access OneAiMind from my phone or tablet?

Yes. Open `http://<server-ip>:3000` from any device on the same network. The web UI is responsive.
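A small sketch for building that URL from the server's own address list. `make_url` is a hypothetical helper; port 3000 is the default used throughout this guide, and the fallback IP is a documentation placeholder:

```shell
#!/bin/sh
# Build the URL to open on a phone or tablet from the server's IP list.
make_url() {
  # hostname -I can return several addresses; take the first one
  printf 'http://%s:3000\n' "$(printf '%s\n' "$1" | awk '{print $1}')"
}

# On Linux, `hostname -I` lists the machine's addresses;
# fall back to a placeholder where the flag is unsupported.
ips=$(hostname -I 2>/dev/null || echo "192.0.2.10")
make_url "$ips"
```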
## Still Stuck?

Check the project's GitHub Issues for community support, or reach out via the Contact page on our website.