

ZFS and BTRFS both provide that functionality. Have a look into the features.
In your scenario, I’d be looking at ZFS or BTRFS for your live data, especially with photos in the mix. They’ll self-repair files that run into bit rot, which I’ve seen a lot of with photos in every format. Since you already keep off-site backups, I’d then just keep an extra drive around that you snapshot to from time to time.
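In ZFS terms, the “extra drive you snapshot to” could look like this, a sketch assuming a pool named `tank` holding your photos and the spare drive set up as its own single-disk pool (all names, devices, and dates below are placeholders):

```shell
# One-time: turn the spare drive into its own pool.
zpool create backup /dev/sdX

# Take a point-in-time snapshot of the photos dataset and send it over.
zfs snapshot tank/photos@2024-06-01
zfs send tank/photos@2024-06-01 | zfs receive backup/photos

# Later runs only need to send the changes since the last snapshot.
zfs snapshot tank/photos@2024-07-01
zfs send -i tank/photos@2024-06-01 tank/photos@2024-07-01 | zfs receive backup/photos
```

Then you can export the pool and stick the drive in a drawer until next time.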
Don’t forward them, close the firewall ports, change configs so nothing listens on those ports, set up redirects that send all requests on those ports wherever you want… lots of options here.
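For the firewall side of that, a sketch assuming nftables with the usual `inet filter`/`nat` tables and chains already in place, and 8080 standing in for whichever port you mean:

```shell
# Option 1: drop inbound traffic to the port entirely.
nft add rule inet filter input tcp dport 8080 drop

# Option 2: redirect requests for that port to another local port (443 here).
nft add rule inet nat prerouting tcp dport 8080 redirect to :443
```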
You can use the majority of “AI” things with non-Nvidia hardware, so don’t feel boxed in by that. Some projects just skew towards the Nvidia tool chain, but there are many ways to run it on AMD if you feel the need.
The “Super” board is just an Orin Nano with the power profiles unlocked. There is literally no difference except the software, and if you bootstrap a regular Orin Nano with the latest Nvidia packages, they perform the same, at about 67 TOPS.
For your project, you could run that on pretty much any CPU or GPU. I wouldn’t pay $250 for the Super devkit when a cheaper GPU can do this, and a CPU would work too, just a bit slower.
Syncthing is wildly inefficient though. I can understand not wanting to use it.
Well, what you’re probably looking to set up is 802.11r, but I think you’re still going to run into issues because of how close together your routers are.
The issue you’re seeing is related to band steering and signal-to-noise ratio. With your current setup, your wifi client is actually the thing that’s supposed to handle the transition between access points smoothly, but it may not roam until the signal from one AP is drastically worse than the other. 802.11r helps with that. Results are hit or miss though, so don’t go buying new equipment just to try it out.
If you had two OpenWRT devices though, I would just make a mesh and skip the above.
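For reference, on OpenWRT enabling 802.11r is just a couple of uci options on an existing wifi-iface. A sketch, where the interface index and the mobility domain value are placeholders; whatever domain you pick has to be the same on both APs:

```shell
# Enable 802.11r fast transition on the first wifi-iface section.
uci set wireless.@wifi-iface[0].ieee80211r='1'
uci set wireless.@wifi-iface[0].mobility_domain='4f57'  # same value on both APs
uci set wireless.@wifi-iface[0].ft_over_ds='0'          # over-the-air FT, broadest client support
uci commit wireless
wifi reload
```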
Both running OpenWRT, or the Archer still runs stock firmware?
Need some clarification here:
So you have the Omnia as the primary routing device, and the tp-link in AP mode connected via Ethernet to the Omnia, correct?
Are both running OpenWRT then?
😂
HDDs give you way more capacity per dollar than SSDs. What are you talking about?
If you’re set on TrueNAS, then just build a box to do that.
If you want a low power solution, go with Synology or Qnap.
Well again, that’s not how the Internet works.
Well that’s how the Internet works, bud. You’re opening a port for WG to start. Either make that work and correct your routing, or find another solution.
You’re not going to be stealthy by making this overcomplicated; you’re just adding extra steps. You don’t want to use DHCP to its benefits locally, and you don’t want to open ports… what magic do you want to happen here?
So then just open the Unbound server to the internet, assign a hostname to it, and use it. Simple.
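The relevant unbound.conf bits look roughly like this. A sketch, where the subnet is a placeholder for whoever you actually want to serve, and you’d still open/forward UDP+TCP 53 on your firewall separately:

```shell
# Make Unbound listen beyond localhost, but refuse everyone except
# the addresses you actually want to serve. All values are examples.
cat >> /etc/unbound/unbound.conf <<'EOF'
server:
    interface: 0.0.0.0                    # listen on all interfaces
    access-control: 0.0.0.0/0 refuse      # default: refuse the world
    access-control: 203.0.113.0/24 allow  # your own clients only
EOF
unbound-checkconf && systemctl restart unbound
```

Leaving an open resolver reachable by everyone gets you abused for DNS amplification, hence the default-refuse line.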
Okay, let me just clarify some stuff here because your language has been confusing.
You’re using a “VPN”, but on a local network. When you say “VPN”, people assume you mean you’re using a client to connect to a remote location. That’s super confusing.
For what you’re trying to do, you don’t even need WG unless you mean to use your DNS server from elsewhere.
Please clarify these two things, but I think you’re just complicating a simple setup for an ad blocking DNS server somehow, right?
All I’m saying is that if you’re sharing files between two containers, giving them each their own volume and using the network to shuttle files between them isn’t best practice. The way to do it: one volume, two containers, both mount the same volume, and skip the network entirely.
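A minimal sketch of that pattern with the docker CLI; the volume/container names and the alpine commands are just for illustration:

```shell
# One named volume shared by two containers; no network involved.
docker volume create shared-data

# Container 1 writes into the volume.
docker run -d --name writer -v shared-data:/data alpine \
    sh -c 'while true; do date >> /data/log.txt; sleep 60; done'

# Container 2 mounts the same volume and reads the same files.
docker run -d --name reader -v shared-data:/data alpine \
    sh -c 'tail -f /data/log.txt'
```

Same idea in compose: declare one top-level volume and mount it in both services.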
To solve this, you create a user mapping in the Samba configs that says “hey, johndoe in Samba is actually the ubuntu user on the OS”, and that’s how it resolves permissions. Here’s an example issue similar to yours to give you more context. You can start reading from there to work out your specific use-case.
If you choose NOT to fix the user mapping, you’re going to have to keep coming back to this volume and chown’ing all the files and folders to make sure whichever user you’re connecting with via Samba can actually read/write them.
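The relevant smb.conf piece looks roughly like this; the usernames and paths are examples, not pulled from your setup:

```shell
# Point Samba at a username map file.
cat >> /etc/samba/smb.conf <<'EOF'
[global]
    username map = /etc/samba/usermap.txt
EOF

# Map format is: <unix user> = <samba login(s)>
# i.e. the samba login "johndoe" acts as the OS user "ubuntu".
echo 'ubuntu = johndoe' > /etc/samba/usermap.txt

systemctl restart smbd
```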
Ah, okay. If this is Android, just set up your Unbound host under ‘Private DNS’ on your phone then.
Note: this will cause issues once you leave your home network unless your WG tunnel is available from outside. Set the secondary DNS to Mullvad or another secure DNS provider if that’s the case, and you shouldn’t have issues once leaving the house.
Depending on your router, you can also just set a static DHCP reservation for your phone only that sets these DNS servers for you without affecting all other DHCP devices.
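If your router runs dnsmasq (OpenWRT does), the per-host version looks roughly like this. The MAC address and both IPs are placeholders for your phone, its reserved address, and your Unbound host:

```shell
# Tag the phone's DHCP lease, then hand only that tag a custom DNS server.
cat >> /etc/dnsmasq.conf <<'EOF'
dhcp-host=aa:bb:cc:dd:ee:ff,set:phone,192.168.1.50
dhcp-option=tag:phone,option:dns-server,192.168.1.10
EOF
systemctl restart dnsmasq
```

Every other client keeps getting the default DNS option; only the tagged MAC gets the override.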
The biggest thing I’m seeing here is the creation of a bottleneck for your network services, and the potential for catastrophic failure. Here’s where I foresee problems:
No, the “live” filesystems will repair themselves when they detect problems. They keep revisions of your data and run checksums constantly. When they find a file has changed without being accessed, they restore it. Think of it like Mac “Time Machine”, but built into the filesystem. You can restore stuff from points in time when needed.
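For the ZFS flavor of this: a scrub walks every checksum and repairs what it can from redundancy (mirror/RAIDZ), and snapshots are the points in time you restore from. A sketch, with the pool/dataset names and dates as placeholders:

```shell
# Verify every block's checksum; corrupted blocks get repaired from redundancy.
zpool scrub tank
zpool status tank     # shows scrub progress and any repaired/unrecoverable errors

# Pull a single file back out of an earlier snapshot...
cp /tank/photos/.zfs/snapshot/2024-06-01/IMG_0001.jpg /tank/photos/

# ...or roll the whole dataset back to that point in time.
zfs rollback tank/photos@2024-06-01
```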
Just read up on it.