Though, fighting it takes time and effort. And you’d need to know exactly when to pay a lawyer, and at what point ignoring it puts you in an unfavourable situation or triggers some default ruling.
A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.
I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.
That’s kind of what happens when somebody re-uses already-assigned namespaces for a different purpose. Same with other domains, or if you mess with IP addresses or MAC addresses. The internet is filled with RFCs and old standards that need to be factored in. And I don’t really see Google at fault here. Seems they’ve implemented this to specification, so technically they’re “right”. The question is: Is the RFC any good? Or do we have other RFCs contradicting it? Usually these things are well written. If everything checks out, it’s the network administrator’s fault for configuring something wrong…

I’m not saying that’s bad… It’s just that computers and the internet are very complicated, and sometimes you’re not aware of all the consequences of the technical debt… And we have a lot of technical debt. Still, I don’t see any way around implementing a technology and an RFC to specification. We’d run into far worse issues if everyone did random things because they think they know better. It has to be predictable, and a specification has to be followed to the letter. Or the specification has to go altogether.
The issue here is that second “may” clause. That should have been prohibited from the beginning, because it causes exactly these issues. It’s kind of what Google is doing now, though. If you ask me, they probably wrote that paragraph because it’s the default behaviour anyway (looking up previously unknown TLDs via DNS) and they can’t really prevent it. But that’s what ultimately causes the issues, so they wrote that warning. The only proper solution is to be strict and break it intentionally, so no one gets the idea to re-use .local… But judging from your post, that hasn’t happened so far.
Linux, macOS etc. are also technically “right” if they choose to adopt that “may” clause. It just leads to the consequences outlined in the quoted sentence: they’re going to confuse users.
Any DNS query for a name ending with “.local.” MUST be sent to the
mDNS IPv4 link-local multicast […]
Implementers MAY choose to look up such names concurrently via other
mechanisms (e.g., Unicast DNS) and coalesce the results in some
fashion. Implementers choosing to do this should be aware of the
potential for user confusion when a given name can produce different
results depending on external network conditions […]
The RFC warns about these exact issues. You MAY do something else, but then the blame is on you…
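You can see that split-brain behaviour for yourself; a quick check, assuming Avahi is installed, with a made-up hostname and router IP:

    avahi-resolve -n mydevice.local          # multicast DNS, asks the local link
    dig +short mydevice.local @192.168.1.1   # unicast DNS, asks the router
    # if the router also hands out .local names, the two answers can differ,
    # which is exactly the user confusion the RFC warns about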
Sure, I don’t really judge. Asking tech support questions is hard. You need to find the correct place and volunteer some information, while you probably don’t even know which details are relevant… It’s rarely ill intent, even if someone doesn’t get it exactly right. And your question seemed genuine to me. I’m not a mod here, though, so I can’t really comment on whether this was the right place or not. I’d say maybe find another community for networking support questions. And if it’s just this one time, chalk it up as a mild overreaction by the mods. The lines are often a bit blurry when making those kinds of decisions. I still think you deserve an answer to your question, but then again I don’t know the details…
I can only speculate. Either you didn’t give them enough time, or you weren’t polite and they’re ignoring you, or you didn’t message the one who dealt with you. Or the mods just aren’t nice or transparent with people. Idk. I can only say your removed post looked a bit low-quality, since it included no information that would be useful to help you, and it was about networking issues, not selfhosting. This post is probably again in violation of rule 3, too, since it’s not about selfhosting in general but about your issue with the mods.
Usually, you’d write them a direct message.
But terminal access also kind of invalidates the WebUI requirement. If you have a terminal open anyway, you could just as well do eject -t && handbrake-cli ... && eject and skip all the switching to the browser and clicking on things… That’d close the tray, rip the DVD and spit it out when finished, all in one line. At least that’s what I would do.
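Spelled out a bit more (the actual binary is HandBrakeCLI; the device node, output path and preset here are just example values):

    # close the tray, rip the disc, then open the tray again when done
    eject -t /dev/sr0 \
      && HandBrakeCLI -i /dev/sr0 -o ~/rips/movie.mkv --preset "Fast 1080p30" \
      && eject /dev/sr0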
Well, the usual way I’ve seen people deal with this is to either open up the case and leave the extra drives dangling off to the side, or just lay them on the bottom of the case (or on top) and not move it anymore.
That works. Though, if you want to imitate that… pay attention to the temperature of the hard disks. There is no air circulation if you just lay them flat on the floor, and they might take damage from getting too warm.
But you can’t really beat the price of that solution. 25 bucks for a SATA card and some old shoe rack with holes in the shelves, and you’re set. Ready to accommodate 4 more hard disks.
Large drives. 1–2 TB drives are disproportionately expensive per terabyte, and you’d need an expensive mainboard to connect a bunch of them. And more drives means more failures…
About half a terabyte per month. My router doesn’t write the logs to disk so I don’t have any detailed statistics.
That is correct. Most big apps have LDAP auth, and YunoHost will have them integrated into its system. But lots of other apps don’t have that, or it’s complicated for other reasons… and those will end up not integrated and separate. They show you the level of integration somewhere.
It has user management, though. YunoHost comes with LDAP, provides email addresses to all users, and has a permission system to control which groups of users can access which services… And they integrate that into the individual services, provided there is some LDAP plugin. A decent amount of services can’t be tied into their user system, but it works flawlessly for chat, Nextcloud and the main contenders…
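You can poke at that directory yourself; a minimal sketch, assuming the stock YunoHost base DN, anonymous reads being allowed, and a made-up user “alice”:

    # list name and mail of one user straight from YunoHost's LDAP
    ldapsearch -x -H ldap://localhost -b 'ou=users,dc=yunohost,dc=org' '(uid=alice)' cn mail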
Ah, alright. I can see how these things idle at 15 W, or a comparable amount to a laptop. That might do it for OP. I don’t think that’s what I’m looking for, though. I need like 3 or 4 SATA ports, plus maybe 2 spare for future upgrades. Hence my “compact” is a small midi tower.
Idk. I’ll look it up. If that’s true, I’d be interested in replacing my trusty old Skylake-generation NAS computer. We have a nice shop selling refurbished office computers in town (afbshop.de). Last time I checked, it was difficult to get computers under 40 W idle, because that’s mainly laptops or small-form-factor computers these days, and those are kind of unsuitable for attaching several hard disks. And the Dell/HP/Fujitsu Esprimo workstations needed more power. If that’s changed in the meantime, I think I’d like to buy one.
Do you happen to know the idle power consumption of graphics cards? Because some of the computers have GPUs, and I’ve also always wanted to fit in something like a modern Radeon graphics card to do some machine learning, have Home Assistant talk to me, etc. But I’m in the same boat and can’t afford the amount of electricity the American selfhosting community uses for their projects…
But yeah, most computers I measured were kind of old. Not 25 years old, but old enough to finally be replaced at work, or to be had really cheap.
Yes, I’ve seen a lot of them idling at 45+ watts, with quite some outliers. 80 W is not uncommon from what I’ve seen. And you need to attach an energy meter to measure that. The CPUs themselves are mostly fine. It’s the mainboard chipset that might cause a lot of unnecessary power drain, plus some other components and the power brick, which only has a certain efficiency.
For an office that’s not an issue. They pay like a third for electricity, they’re making good money while the PC is in use, and maintenance etc. is the major cost factor, not the few bucks a year for electricity.
I’d recommend paying attention to power efficiency. If it’s a random office PC with 80 W idle, and OP is living in Europe and paying like 30 ct per kWh, that computer is going to cost them an extra 200 € annually in electricity.
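Back-of-the-envelope, in case anyone wants to plug in their own numbers:

    # 80 W around the clock: 0.08 kW * 24 h * 365 d = 700.8 kWh per year
    # 700.8 kWh * 0.30 €/kWh ≈ 210 € per year
    echo "0.08 * 24 * 365 * 0.30" | bc    # prints 210.24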
Good question. You could point a domain name at your internal IP in your router. That feature goes by different names; it might be a hosts file or just buried somewhere in the DNS settings of the router. Or machinename.local or .lan works. That should give you some internal domain name, valid inside of your network / wifi.
Getting proper certificates without exposing anything is tricky. If that’s really needed: you’d either generate a self-signed certificate manually and import that onto your devices, or you need to do some trickery with Let’s Encrypt’s DNS challenge. That’s not super easy, but possible: https://m.youtube.com/watch?v=qlcVx-k-02E
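For the self-signed route, something like this works (needs OpenSSL 1.1.1 or newer for -addext; the hostname nas.lan is just an example):

    # self-signed certificate for an internal name, valid one year
    openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
      -keyout nas.key -out nas.crt \
      -subj "/CN=nas.lan" -addext "subjectAltName=DNS:nas.lan"
    # then import nas.crt as trusted on every device that should accept it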
I don’t use Bitwarden. There might be a more specific solution for that.
Uh, I’m not super up to date anymore. I installed YunoHost a long time ago and it’s been running fine most of the time; I haven’t installed anything new in the last year or so. I like it. I don’t think I have any broad advice, except the usual: do your backups in case a hard disk fails, and don’t mess with the config manually (too much) or you might run into problems.
I’m mainly using it to self-host my e-mail, Matrix chat, PeerTube and Nextcloud. I have all my calendars and contacts stored there and sync them to my phone and computer. I have smaller websites running as a custom_webapp. And I use the reverse proxy to make Home Assistant and a few side projects and experiments accessible from outside.
I think I get it. I mean in that situation you’d essentially pay to get some SATA ports and the space to put the harddrives. The money doesn’t really get you anything else that’d be fundamentally different from the current setup.
Idk, I’m fine with 48 GB of RAM to run a lot of services and containers. And I don’t use a separate machine for storage; the hypervisor does that, and I either share the filesystems via NFS or pass them through into some VM. And I don’t think a fast machine with lots of RAM is needed for storage, unless you’re using ZFS.
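The NFS part is a single line per share; a sketch, with a made-up path and subnet:

    # /etc/exports on the hypervisor
    /srv/storage 192.168.1.0/24(rw,sync,no_subtree_check)
    # apply with: exportfs -ra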
Yeah, but at that point you’d have to be prepared to fight it to the end, and pay your own lawyer upfront. The final decision on who’s going to pay is settled either in arbitration or by the court, and you need to win. And trademark law isn’t super easy to understand; you definitely need to invest in a lawyer and ideally get that money back. The time and effort are kinda wasted, though. You won’t get those back.