Admin on the slrpnk.net Lemmy instance.
He/Him or whatever you feel like.
XMPP: [email protected]
Avatar is an image of a baby octopus.
I was talking about Podman Pods. Sorry for not being clear.
Yeah, inside of Pods you can just use the container name and thus avoid hard-coding any IPs.
Yeah, beginners are probably better served with Yunohost.
Well usually the opposite happens. People make many releases and outsource the testing to unsuspecting users.
This is IMHO fine if you clearly mark these releases as release candidates or such, so that people can make their own risk judgement. But usually that isn’t the case and one minor version looks like any other unless you have a closer look at the actual changes in the code.
I think the better option is to have many releases that are clearly marked as beta-test releases or release candidates for those that don’t mind testing stuff.
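To illustrate the "clearly marked" part: SemVer already reserves hyphenated pre-release identifiers like 1.4.0-rc.1 for exactly this purpose, so test releases are visually distinct from stable ones. A minimal Python sketch of telling the two apart (the regex is a simplification, not a full SemVer parser):

```python
import re

# Anything with a hyphenated suffix like "-beta.2" or "-rc.1" is a
# pre-release that users can consciously opt into. Simplified pattern,
# covering only the common alpha/beta/rc identifiers.
PRERELEASE = re.compile(r"^\d+\.\d+\.\d+-(alpha|beta|rc)(\.\d+)?$")

def is_prerelease(version: str) -> bool:
    return PRERELEASE.match(version) is not None

print(is_prerelease("1.4.0-rc.1"))  # clearly marked as a release candidate
print(is_prerelease("1.4.0"))       # looks like any other stable release
```

With a scheme like this, "one minor version looks like any other" stops being a problem: the risk judgement is right there in the version string.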
“Bigger” is a bit misleading here. Truly bigger updates obviously require a major version bump to signal to users that stability issues or breakage are to be expected.
But “bigger” in the other sense, i.e. a slower release cadence, means there was more time for adventurous people to run pre-release versions, and thus better testing.
Of course this assumes that there actually are beta testers, and that creating such beta releases makes it easy for them to test.
Yeah, running it like that here. Works fine for the most part, except that the hybrid inverter I bought advertised a “UPS” mode but doesn’t actually switch fast enough, so I had to add a proper UPS anyway (though running a UPS chained behind the inverter is another issue…).
It sounds a bit strange, as it actually runs off the battery all the time (unless below the minimum charge limit, when it seamlessly switches to grid power automatically), but due to legal requirements it needs to switch to a different supply mode when grid power fails, and that switch is not entirely seamless on my inverter.
Rebuilding my main router to work with the 10GbE fiber that recently became available here. It is a tad expensive though, so I am not actually sure yet if I will upgrade my contract.
You can use the Nextcloud app with the much simpler KaraDAV backend. Works fine for photo backups.
As usual it depends (and TDPs are highly misleading). First of all, the 6700K is a 14nm chip vs. 32nm for the E5-2620. And the 6700K is a Skylake-generation chip, compared to Sandy Bridge for the Xeon, which brings significantly better power states. On the other hand, the 6700K is clocked much higher and has Turbo Boost, with the latter being notoriously power hungry (it can be disabled in the BIOS though).
My educated guess is that the 6700K will use significantly less power if it mostly idles or only handles burst tasks, which is what most self-hosters actually have as workloads. But if you serve websites to thousands of users, resulting in a consistently high CPU load, the Xeon is probably the better chip overall, including power consumption under load.
Edit: I realized now that it is an E5-2620v2, which is Ivy Bridge and 22nm. So the difference is probably smaller, but overall the same considerations apply.
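As a rough sketch of why the duty cycle matters so much (all wattages below are made-up placeholders for illustration, not measured values for these chips — measure your own systems at the wall for real numbers):

```python
# Back-of-the-envelope comparison of daily energy use for two duty cycles.
# Wattages are hypothetical placeholders, not benchmarks of real CPUs.

def daily_kwh(idle_w: float, load_w: float, load_fraction: float) -> float:
    """Average daily energy in kWh for a given idle/load duty cycle."""
    avg_w = idle_w * (1 - load_fraction) + load_w * load_fraction
    return avg_w * 24 / 1000

# Mostly-idle self-hosting box (5% load): low idle draw dominates.
print(daily_kwh(idle_w=10, load_w=90, load_fraction=0.05))

# Consistently busy server (80% load): efficiency under load dominates.
print(daily_kwh(idle_w=50, load_w=95, load_fraction=0.80))
```

With the placeholder numbers, the idle-heavy profile averages only 14 W despite the high peak draw, which is why idle power is what matters for typical self-hosting workloads.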
Really depends on the specific workload.
Another thing to keep in mind is that the 6700K is significantly more power efficient, especially when it isn’t consistently under high load.
Also, if you do any sort of media processing, the 6700K has a GPU with Quick Sync built in that can speed these things up significantly.
Due to checksum-based auto-correction, ZFS and btrfs (in raid1) are actually less sensitive to data corruption caused by non-ECC RAM.
If Lemmy isn’t meeting your needs, why not go back to Reddit? /s
lol, what? A project migrating away from Codeberg to Github? That’s a first I think, and also stupid.
It almost certainly happened to you, but you are simply not aware of it, as filesystems like ext4 are completely oblivious to it happening, and larger video formats, for example, are relatively robust against small file corruptions.
And no, this doesn’t only happen due to random bit flips. There are many reasons for files becoming corrupted, and it often happens on older drives that are nearing the end of their life-span; good management of such errors can extend the safe use of older drives significantly. It can also mitigate the risks of non-ECC memory to some extent.
Edit: And my comment regarding mdadm RAID5 was about it requiring equal-sized drives and not being able to shrink or expand the size and number of drives on the fly, as is possible with btrfs raids.
One of the main features of filesystems like btrfs or ZFS is that they store a checksum for each file and compare it against the data when it is read, so corrupted files get noticed. With a single drive, all btrfs can do is inform you that the checksum no longer matches the file and that it is thus likely corrupted, but in a btrfs raid it can look at a still-intact duplicate and heal the file from that.
IMHO the little extra space from mdadm RAID5 is not worth the much reduced flexibility in future drive composition compared to a native btrfs raid1.
Btrfs on a single storage device can’t do auto-correction via checksums. I would get rid of the RAID5 and create a btrfs raid1 out of these devices. That also makes it easier to swap out devices or expand the raid, as btrfs supports arbitrary drive sizes.
OnlyOffice has good mobile apps.
The audio is very quiet; it’s probably a microphone or post-processing issue.
Castopod is cool, maybe you can try to figure out why it doesn’t properly federate with Lemmy and file some issues on both sides?
You could try it with KaraDAV. Much simpler and works fine with the Nextcloud apps.