Ollama
Ollama runs LLMs locally on your VPS. Pull and serve Llama 3, Mistral, Phi, Gemma, or Qwen with a single command. It provides self-hosted AI with no API key required, plus a REST API compatible with OpenAI clients — the simplest path to private inference, used by 200,000+ developers.
Version
Latest
Operating System
Ubuntu Server 24.04 LTS
Min. RAM
8 GB
IP Types
IPv4, IPv6
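The one-command workflow and the local REST API described above can be sketched as follows; this is a minimal usage example, assuming Ollama's default port (11434) and `llama3` as the model name:

```shell
# Install Ollama on the VPS (official Linux install script)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and serve a model in one command (starts an interactive session)
ollama run llama3

# Query the local REST API from another shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

# OpenAI-compatible chat endpoint, usable by existing OpenAI clients
curl http://localhost:11434/v1/chat/completions -d '{
  "model": "llama3",
  "messages": [{"role": "user", "content": "Hello"}]
}'
```

OpenAI client libraries can point at the same server by setting the base URL to `http://localhost:11434/v1` with any placeholder API key.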