FWIW I run only very small databases, e.g. SQLite ones shipped with applications, but I haven't had any problems in about a year now, and nothing that wasn't recoverable from backup.
Correct, I run docker on a compute host that has no local storage. The host’s disks are on iSCSI LUNs.
What does high availability mean to you? The cost climbs steeply the more of it you want.
Anyway, I have a similar setup: NAS for storage and OptiPlex Proxmox nodes for compute. But I went a different route and set up an iSCSI SAN, and all the guests use block storage on the NAS. Guest backups by Proxmox go to file storage on the NAS and are then replicated to a second NAS.
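If anyone wants to replicate the block-storage side, here's roughly what the Proxmox end looks like. This is a hedged sketch, not my exact config: the storage IDs, portal IP, IQN, and volume group name are all made-up placeholders, and you'd substitute the LUN ID your NAS actually exposes.

```shell
# Register the NAS's iSCSI portal/target as a Proxmox storage
# (storage ID, portal IP, and IQN are placeholders)
pvesm add iscsi nas-iscsi \
    --portal 192.168.1.10 \
    --target iqn.2000-01.com.example:nas.target-1 \
    --content none   # don't hand guests the raw LUN directly

# Layer shared LVM on top of the exposed LUN so each guest
# gets its own logical volume (check `pvesm list nas-iscsi`
# for the actual base volume name on your setup)
pvesm add lvm nas-lvm --vgname nas_vg --base nas-iscsi:<lun-volume>
```

Same thing is doable from the GUI under Datacenter → Storage; the CLI just makes it easier to show here.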
If you do set up a RAG store, please post the tech stack you use as I’m in a similar situation. The inbuilt document store management in ollama+openwebui is a bit clunky.
I’d be interested to see how it goes. I’ve deployed Ollama plus Open WebUI on a few hosts and small models like Llama3.2 run adequately (at least as fast as I can read) on even an old i5-8500T with no GPU. Oracle Cloud free tier might work OK.
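In case it's useful, that stack is basically just two containers. A minimal sketch with Docker, assuming CPU-only inference; the container names, volume names, and host ports are my own choices, not anything mandated:

```shell
# Ollama serves models on :11434 (CPU-only works, just slowly for larger models)
docker run -d --name ollama \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    ollama/ollama

# Pull a small model that runs tolerably without a GPU
docker exec ollama ollama pull llama3.2

# Open WebUI, pointed at the Ollama container on the host
docker run -d --name open-webui \
    -p 3000:8080 \
    -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
    --add-host=host.docker.internal:host-gateway \
    -v open-webui:/app/backend/data \
    ghcr.io/open-webui/open-webui:main
```

After that the UI is on port 3000 and model management happens from inside it. On a cloud free tier you'd also need to open the port in whatever firewall/security list the provider uses.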
Running an LLM can certainly be an on-demand service. Apart from training, which I don’t think we are discussing, GPU compute is only used while responding to prompts.
Jellyfin is also available as a native DSM package through SynoCommunity, FWIW.
I put on my robe and wizard hat.