

INWX has been great so far, have been with them for about 6 years at this point. Also had good experiences with their customer support.
I run Postfix, Dovecot and rspamd on my server. The configuration is here: https://git.dblsaiko.net/systems/tree/configurations/polaris
There’s also the Simple NixOS Mailserver project which is an abstraction on top of these and has a few more things. I’ve never used it myself though.
Of course, you also have to set up all the standard email setup like DKIM, DMARC, SPF and so on here.
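For reference, the standard setup boils down to three TXT records. This is only a sketch with placeholder values — the domain, selector, and key are assumptions, and the DKIM key comes from whatever signer you run (rspamd's dkim_signing module in my case):

```
; SPF: only the domain's MX hosts may send mail for it
example.com.                 IN TXT "v=spf1 mx -all"

; DKIM: public key published under your selector (here "mail", a placeholder)
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"

; DMARC: reject mail that fails SPF/DKIM alignment, send aggregate reports
_dmarc.example.com.          IN TXT "v=DMARC1; p=reject; rua=mailto:postmaster@example.com"
```

Start with a softer DMARC policy like p=none while testing so misconfiguration doesn't bounce your own mail.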
Me too, but it’s now a subscription only. I wouldn’t recommend it to people who didn’t already buy the original.
I use borgbackup, with daily backup to borgbase.
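For the daily schedule, a systemd service + timer pair is one clean way to do it. This is a sketch — the repo URL, passphrase file, and backed-up paths are all placeholders, and the prune policy is just an example:

```
# /etc/systemd/system/borgbackup.service
[Unit]
Description=Daily borg backup to borgbase

[Service]
Type=oneshot
Environment=BORG_REPO=ssh://user@user.repo.borgbase.com/./repo
Environment=BORG_PASSCOMMAND="cat /root/borg-passphrase"
ExecStart=/usr/bin/borg create --stats --compression zstd "::{hostname}-{now}" /home /etc
ExecStartPost=/usr/bin/borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6

# /etc/systemd/system/borgbackup.timer
[Unit]
Description=Run borgbackup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Persistent=true makes the timer catch up on a missed run if the machine was off at the scheduled time.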
At some point I want to set up a distributed file system between multiple locations as both a backup target and also a network share with automatic snapshots or some other undelete mechanism, but I still need to get the hardware for that and the current setup works well
True. I knew I should have left that as “NFS 4” because someone would comment this. From what I’ve read (never used it), NFS 3 is very different to 4 and also just kind of not worth using, especially just for Windows, since it has no security at all.
Please just use Kerberos instead of fiddling with uids. It’s the only sane way to get NFS access controls and user mapping. Works on both Linux and macOS (but there’s no NFS on Windows anyway).
I’d say you can run the Kerberos KDC on the NAS but if Synology has some locked down special OS you’ll need another machine for that (edit: but you say you have other servers already so that shouldn’t be a problem).
Unfortunately SMB is so screwed that you can’t reuse ordinary Kerberos for authentication there, which is unfortunate if you want to have both that and NFS. I’ve yet to look into whether Samba AD can be used for both.
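To give an idea of what the Kerberized NFS side looks like once the KDC and keytabs exist — paths and hostnames here are made up:

```
# /etc/exports on the NFS server: sec=krb5p requires Kerberos
# authentication and encrypts traffic (krb5/krb5i are weaker variants)
/srv/share  *(rw,sec=krb5p)

# On the client: rpc.gssd must be running and the host needs an
# nfs/client.example.com principal in its keytab; then mount with
# a matching sec= option:
mount -t nfs4 -o sec=krb5p nas.example.com:/srv/share /mnt/share
```

The nice part is that access is then tied to Kerberos principals instead of whatever uid the client happens to present.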
This seems super overcomplicated. What I would do: put all the subdomains on the public DNS, let HTTP(S) through the firewall for the respective hosts, deny everything from outside your local network on the HTTP server except what's under the HTTP challenge path, and then run the HTTP challenge as you would for a public site.
Then you can get certs, everyone outside trying to access will get 403, and inside the network you can access as normal.
Of course you’ll have to trust your http server’s ACL for that, but I’m just going to assume servers like nginx (which I use) have a reliable implementation.
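In nginx that ACL is only a few lines. A minimal sketch — the LAN range, hostname, challenge root, and upstream port are all assumptions you'd adapt:

```
server {
    listen 80;
    server_name internal.example.com;

    # ACME HTTP-01 challenge stays reachable from anywhere
    location /.well-known/acme-challenge/ {
        root /var/lib/acme/challenges;
    }

    # everything else: local network only
    location / {
        allow 192.168.0.0/24;  # your LAN (placeholder range)
        deny all;              # everyone outside gets 403
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The same allow/deny pair goes in the TLS server block once the cert exists.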
Are you talking about these? They don’t look like they have a PCIe slot…
In any case, the specifications say
Form factor Low-profile 119.65 x 68.9 x 17.24 mm (Without bracket 119.65 x 68.9 x 12 mm)
It would need a PCIe slot, not a SATA connection. But I assume it doesn’t have that either then.
I have the QNAP TL-D800S. It’s an 8 bay DAS but there is also a 4 bay variant. Works well for me. It uses SFF cables to connect to the PC and comes with the appropriate PCIe card which seems more robust to me than anything USB for this.
Any registrar worth using has an API for updating DNS entries.
I just found this with a quick search: https://github.com/qdm12/ddns-updater
Yeah, when I got started I initially put everything in Docker because that’s what I was recommended to do. After a couple of years I moved everything out again because of the increased complexity, especially around networking, and because you have to deal with the way Docker does things — and I wasn’t getting anything out of it that would make up for that.
When I moved it out back then I was running Gentoo on my servers; by now it’s NixOS because of the declarative service configuration, which shines especially in a server environment. If you want easy service setup, like people usually say they like about Docker, I think it’s definitely worth a try. It can be as simple as “services.foo.enable = true”.
(To be fair NixOS has complexity too, but most of it is in learning how the configuration language which builds your operating system works, and not in the actual system itself, which is mostly standard except for the store. A NixOS service module generates a normal systemd service + potentially other files in the file system.)
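To make that concrete, here’s roughly what a declarative service looks like in configuration.nix — Grafana is just an arbitrary example, and the settings shown are assumptions:

```nix
# One line enables the service; the rest configures it declaratively.
# NixOS generates the systemd unit and config files from this.
services.grafana = {
  enable = true;
  settings.server = {
    http_addr = "127.0.0.1";
    http_port = 3000;
  };
};
```

After `nixos-rebuild switch` the service is installed, configured, and running; deleting the block and rebuilding removes it again.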
But it actually doesn’t. Most public wifis or other residential networks don’t seem to give me external access to my Nextcloud; ironically, my mobile network via phone does.
A lot of those networks are run by boomers who don’t care about IPv6 or don’t want to set it up because (insert excuse from IPv6 Bingo) or non-tech people whose router doesn’t turn it on automatically. So yeah, that is unfortunately something you have to expect and work around.
Problem 1 seems best solved by renting the cheapest VPS I can find and then…building a permanent SSH tunnel to it? Using the WireGuard VPN of my router? Some other kind of tunnel to expose a public IPv4? IIRC, VPSes are billed by throughput; I’m not sure if I might run into problems there, but the only people who use it are my gf and me, and when not at home, mostly for the CalDAV stuff.
You don’t even need a tunnel. Just a proxy on a VPS that listens on IPv4 and connects to the IPv6 upstream. Set the AAAA record to the real host and the A record to the VPS. This assumes you actually get a static prefix, which you should; some IPv4-brained ISPs hand out a rotating prefix instead, in which case it’s probably more annoying.
I do this too, mine runs on a free Oracle Cloud ARM VPS.
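For the proxy itself, nginx’s stream module on the VPS is enough. A sketch — the IPv6 address is a placeholder for your home server:

```
# nginx on the IPv4 VPS: forward TCP 443 to the IPv6-only host.
# The stream module passes the TLS connection through untouched,
# so the certificates stay on the real server.
stream {
    server {
        listen 443;
        proxy_pass [2001:db8::1]:443;
    }
}
```

One caveat: the upstream then sees the VPS as the client IP unless you add PROXY protocol support on both ends.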
Kanidm has LDAP support but it’s read-only. You should prefer OAuth though since LDAP is locked to password login.