



  • You can use the majority of “AI” things with non-Nvidia hardware, so don’t feel boxed in by that. Some projects just skew towards the Nvidia tool chain, but there are many ways to run it on AMD if you feel the need.

    The “Super” board is just an Orin Nano with the power profiles unlocked. There is literally no difference except the software, and if you bootstrap an Orin Nano with the latest Nvidia packages, they perform the same, at about 67 TOPS.

    For your project, you could run that on pretty much any kind of CPU or GPU. I wouldn’t pay $250 for the Super devkit when you can get a cheaper GPU to do this, and a CPU would work too, just a bit slower.



  • Well, what you’re probably looking to set up is 802.11r, but I think you’re still going to run into issues because of how close together your routers are.

    The issue you’re seeing is related to band shaping and signal-to-noise ratio. With your current setup, your wifi client is actually the thing that’s supposed to handle the transition between access points smoothly, but it may not work as expected unless the signal from one or the other gets drastically worse. 802.11r helps with that. Results are hit or miss though, so don’t go buying new equipment just to try it out.
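
    If your access points happen to run OpenWRT already, enabling 802.11r is just a few lines per AP in /etc/config/wireless. This is only a sketch: the interface section name, SSID, and the mobility_domain value are placeholders, and the same mobility_domain has to be set on every AP.

    ```
    # /etc/config/wireless -- add to the wifi-iface section on EACH AP.
    # Sketch only: 'default_radio0', the SSID, and '4f57' are placeholders.
    config wifi-iface 'default_radio0'
            option ssid 'MyHomeWifi'
            option encryption 'psk2'
            option ieee80211r '1'            # enable 802.11r fast transition
            option mobility_domain '4f57'    # same 4-hex-digit value on every AP
            option ft_over_ds '0'            # roam over the air, not the DS
            option ft_psk_generate_local '1' # derive keys locally, no key pushing
    ```

    With plain WPA2-PSK, `ft_psk_generate_local` keeps you from having to configure key exchange between the APs.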

    If you had two OpenWRT devices though, I would just make a mesh and skip the above.











  • Okay, let me just clarify some stuff here because your language has been confusing.

    You’re using a “VPN”, but on a local network. When you say “VPN”, people assume you mean you’re using a client to connect to a remote location. That’s super confusing.

    For what you’re trying to do, you don’t even need WireGuard unless you mean to use your DNS server from elsewhere.

    Please clarify these two things, but I think you’re just complicating a simple setup for an ad blocking DNS server somehow, right?
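
    If “use your DNS server from elsewhere” is the actual goal, that’s just the `DNS` line in a normal wg-quick client config. Sketch only: every key, hostname, and address below is a placeholder.

    ```
    # wg-quick client config (sketch -- all keys/addresses are placeholders)
    [Interface]
    PrivateKey = <phone-private-key>
    Address = 10.8.0.2/32
    DNS = 192.168.1.10          # your ad-blocking DNS server on the home LAN

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = home.example.net:51820
    AllowedIPs = 192.168.1.0/24   # or 0.0.0.0/0 to route all traffic home
    ```

    With `AllowedIPs` limited to the LAN subnet, only DNS and LAN traffic go through the tunnel; everything else stays local to wherever you are.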


    1. This is the most complex way of sharing files between containers I’ve ever heard of. That sure sounds like bad advice to me. Do you have a link to that?

    All I’m saying is that if you’re sharing files between two containers, giving them each their own volume and using the network to share those files is not best practice. The way to do it: one volume, two containers, both mounting the same volume, with no network involved.
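    As a sketch, the one-volume/two-container setup looks like this in a compose file. The service names, images, and commands are made up for illustration; the point is just that both services mount the same named volume.

    ```
    # docker-compose.yml sketch: two containers share one named volume,
    # no network share involved. Names and commands are illustrative.
    services:
      producer:
        image: alpine
        command: sh -c 'echo hello > /data/out.txt && sleep infinity'
        volumes:
          - shared:/data
      consumer:
        image: alpine
        command: sh -c 'sleep 2 && cat /data/out.txt && sleep infinity'
        volumes:
          - shared:/data   # same volume, mounted in both containers

    volumes:
      shared:
    ```

    Files written by one container show up in the other immediately, and you skip Samba/NFS entirely for container-to-container sharing.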

    2. Samba maps users in its own DB to users that exist on its host. If you’re running it in a container, it’s likely just going to default to a user with uid=1000. So if you start a brand new Samba server, you need a fresh user to get started, right? So you create a user called ‘johndoe’ with uid=1100 and give it a password. Now, that user is ONLY a samba user. It doesn’t get created as an OS user. So if your default OS user is ‘ubuntu’ with uid=1000, you’re going to have permissions issues between files created by these users, because 1100 is not equal to 1000.

    To solve this, you create a user mapping in the samba configs that says “Hey, johndoe in samba is actually the ubuntu user on the OS”, and that’s how permissions get resolved. Here’s an example issue similar to yours to give you more context. You can start reading from there to work out your specific use-case.

    If you choose NOT to fix the user mapping, you’re going to have to keep going back to this volume and chown’ing all the files and folders to make sure whichever user you’re connecting with via samba can actually read/write files.
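
    The mapping itself is two small pieces of config. Sketch only: ‘johndoe’ and ‘ubuntu’ are the example users from above, and the map file path is arbitrary.

    ```
    # /etc/samba/smb.conf (sketch)
    [global]
        username map = /etc/samba/usermap.txt

    # /etc/samba/usermap.txt
    # format: <unix user on the host> = <name(s) the samba client sends>
    ubuntu = johndoe
    ```

    After a restart of smbd, connections authenticating as ‘johndoe’ act as ‘ubuntu’ on disk, so new files get uid=1000 and the permission mismatch goes away.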


  • Ah, okay. If this is Android, just set up your Unbound host under ‘Private DNS’ on your phone then.

    Note: this will cause issues once you leave your home network unless your WG tunnel is available from outside. Set the secondary DNS to Mullvad or another secure DNS provider if that’s the case, and you shouldn’t have issues once you leave the house.

    Depending on your router, you can also just set a static DHCP reservation for your phone only that sets these DNS servers for you without affecting all other DHCP devices.
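
    On anything dnsmasq-based (OpenWRT, Pi-hole’s DHCP, etc.), that per-device override looks like this. Sketch only: the MAC address and both IPs are placeholders for your phone and your Unbound host.

    ```
    # dnsmasq sketch: hand only the phone a custom DNS server.
    # The MAC address and IPs below are placeholders.
    dhcp-host=aa:bb:cc:dd:ee:ff,set:phone,192.168.1.50
    dhcp-option=tag:phone,option:dns-server,192.168.1.10
    ```

    Every other DHCP client keeps getting the router’s default DNS; only the tagged phone is pointed at Unbound.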


  • The biggest thing I’m seeing here is the creation of a bottleneck for your network services, and the potential for catastrophic failure. Here’s where I foresee problems:

    1. Running everything from a single HDD(?) is going to throw your entire home and network into disarray if it fails. Consider at least adding a second drive for RAID1 if you can.
    2. You’re going to run into I/O issues with the imbalance of the services you’re cramming all together.
    3. You don’t mention backups. I’d definitely work that out first. Some of these services can take their own, but what about the bulk data volumes?
    4. You don’t mention the specs of the host, but I’d make sure you have swap equal to RAM here if you’re not worried about disk space. This will just prevent hard kernel I/O issues or OOM kills if it comes to that.
    5. Move network services first, storage second, nice-to-haves last.
    6. Make sure to enable any hardware offloading for network if available to you.
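
    Point 4 (swap equal to RAM) can be sketched like this. It reads the host’s RAM from /proc/meminfo and prints the root-only commands rather than running them; /swapfile and the one-to-one sizing are just the common defaults, adjust to taste.

    ```shell
    #!/bin/sh
    # Sketch: create a swapfile sized to match RAM (point 4 above).
    # Prints the commands (they need root) instead of executing them.
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    mem_mb=$(( mem_kb / 1024 ))
    echo "fallocate -l ${mem_mb}M /swapfile"
    echo "chmod 600 /swapfile"
    echo "mkswap /swapfile"
    echo "swapon /swapfile"
    echo "echo '/swapfile none swap sw 0 0' >> /etc/fstab"
    ```

    On filesystems where fallocate doesn’t work for swap (e.g. some btrfs setups), substitute a dd-based file creation.
    
    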