

I use Fireflyiii for my money and budgeting.
I’m surprisingly level-headed for being a walking knot of anxiety.
Ask me anything.
I also develop Tesseract UI for Lemmy/Sublinks
Avatar by @[email protected]
I don’t see why not. I haven’t stood it up yet, but I’ve played with the demo. It does have a section for parts/repairs/upgrades.
Give the demo a try, and let us know.
Yeah, building a simpler version of something like that was on my ever-growing “to do” list but came across this today. Probably going to deploy it this evening or maybe this weekend (whichever day it’s supposed to rain lol).
Depends on what I’m transferring and to/from where:
- `scp` is my go-to since I'm a Linux household and have SSH keys set up (and LDAP SSO as a fallback)
- `sshfs` if I'm too lazy to connect via SMB/NFS (or I don't feel like installing the tools for them) or I'm traversing a WAN
- `rsync` for bulk transfers and backups

I've always thought the firewall color codes were arbitrary, though I might just have not paid attention all these years lol.
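The three tools above look roughly like this in practice (hostnames and paths are placeholders, not my actual setup):

```shell
# Copy a single file to a remote host over SSH (scp)
scp ./report.pdf user@fileserver:/srv/share/

# Mount a remote directory locally over SSH (sshfs), then unmount when done
sshfs user@fileserver:/srv/share /mnt/share
fusermount -u /mnt/share

# Mirror a directory for bulk transfer/backup (rsync over SSH);
# --delete makes the destination an exact mirror of the source
rsync -avz --delete /srv/share/ user@backuphost:/backups/share/
```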
Just to clarify: I meant connect your OpenWRT device to your hotspot instead of the AP you’ve been working with. Just to rule out multiple MACs being blocked on the AP.
Beyond that, I’m not really able to help troubleshoot further, but worst case and if all you need is internet, you can set your OpenWRT device up so that it just NATs your downstream connections. Double-NAT, in most cases, is fine.
Hmm. Is the upstream AP some kind of fancy deal or a run of the mill consumer router?
I’ve seen some Cisco APs configured to not allow multiple MAC addresses from the same station. Caused problems when trying to do VMs on my laptop that had the network in bridge mode.
Are you able to put your phone into hotspot, connect to that instead of the upstream AP, and see if it works?
I did that with a GL.iNet travel router after flashing stock OpenWRT, and used it as a wireless bridge for several years. It uses relayd to bridge the Wifi station interface and Ethernet. Once you have an ethernet bridge, you can connect another AP or do whatever from there.
If you create a second wifi interface in AP mode (in addition to the station/client one connected to the upstream), you should be able to add that to the LAN bridge alongside the ethernet interfaces. That bridge will then be part of the relayd bridge, and it all should just work (should, lol. I haven’t tested that config since I only needed to turn wifi into wired ethernet with this setup).
Interfaces:
LAN Bridge: Ethernet interfaces to be bridged to the wifi
I have both of its interfaces in this bridge, and it also has a static management IP (outside of the WLAN subnet). This management IP is a static out-of-band IP since the devices connected over Ethernet won't be able to access its WLAN IP (in the main LAN) to manage it. To access this IP, I just statically set an additional IP on one of the downstream Ethernet client devices.
The LAN bridge is in a firewall zone called LAN.
WWAN: Wireless station interface that’s configured as a client to the AP providing upstream access. I have this configured statically, but DHCP is fine too. Firewall zone is WLAN.
WLANBRIDGE: The relayd bridge (Protocol: relay bridge). Its interfaces are the LAN bridge and the WWAN interface.
Disregard the WGMesh parts; that’s separate and not related to the wireless bridging mode.
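For reference, a minimal `/etc/config/network` sketch of the layout above might look something like this (interface names and addresses are examples, not my exact config; check the OpenWRT relayd docs for your device):

```
config interface 'lan'
        option type 'bridge'
        option ifname 'eth0 eth1'
        option proto 'static'
        option ipaddr '192.168.100.1'      # out-of-band management IP
        option netmask '255.255.255.0'

config interface 'wwan'
        option proto 'static'
        option ipaddr '192.168.1.50'       # address in the upstream WLAN subnet
        option netmask '255.255.255.0'
        option gateway '192.168.1.1'

config interface 'wlanbridge'
        option proto 'relay'
        option network 'lan wwan'          # relayd bridges these two
```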
Nice! Yeah, I’ve been a big fan of it. Planning to eventually replace my custom Snapdrop with Pairdrop since they’ve made quite a few other improvements.
Quickly send files, paste images/text snippets between devices.
I’m using the older Snapdrop (which PD was forked from) with some patches I made to:
It has 100% replaced emailing things to myself or shuffling files to/from Nextcloud. I probably use it to send text (URLs, clipboard contents, etc) to/from my phone as much as I use it for sending files back and forth.
https://smallstep.com/docs/step-ca/index.html
There are basically two executables involved:

- `step` is the CLI app used to request certificates
- `step-ca` is the server process the `step` client connects to

I've got the CA portion bundled into Docker. It can also run as an ACME server (and is compatible with `certbot`).
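If you're trying it out, the client-side flow looks roughly like this (the URL and hostnames are placeholders; see the smallstep docs for the server setup):

```shell
# One-time: point the step CLI at your CA and trust its root
step ca bootstrap --ca-url https://ca.example.internal --fingerprint <root-fingerprint>

# Request a certificate for a host
step ca certificate myhost.example.internal myhost.crt myhost.key

# Renew it before expiry (can be cron'd or run as a daemon)
step ca renew myhost.crt myhost.key
```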
I run a custom build of Nginx with a few extra modules compiled in:
Some guidance can be found here: https://docs.nginx.com/nginx-waf/admin-guide/nginx-plus-modsecurity-waf-owasp-crs/
That guidance is for NginxPlus, but you can compile the dynamic module yourself with the community versions.
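Once the connector module is compiled, wiring it in is roughly this (paths and server name are examples):

```nginx
# nginx.conf (top level): load the dynamically compiled module
load_module modules/ngx_http_modsecurity_module.so;

http {
    server {
        listen 80;
        server_name example.internal;

        # Enable ModSecurity; main.conf pulls in the OWASP CRS rules
        modsecurity on;
        modsecurity_rules_file /etc/nginx/modsec/main.conf;
    }
}
```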
RCS is a whole can of worms. It's presented like a carrier service (and carriers are in the mix, though often just for authentication), but it's really a Google service. On Android, RCS connects directly to Google's mothership.
I believe on iOS those go to Apple's servers, which "peer" with Google. Maybe search for the RCS endpoint Apple uses and see what comes up?
This is actually one of my New Year’s resolutions lol. Right now, my backups are local and my offsites are a hodgepodge of cloud services (basically holding encrypted container blobs of my stuff). Not ideal.
I’m looking at signing up for rsync.net since a lot of my backups are done via rsync anyway. Plan is to keep my local backups as-is and rsync them to rsync.net.
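Assuming an rsync.net-style account (the username, host, and paths here are made up), the offsite push could be as simple as:

```shell
# Push the local backup tree offsite over SSH; --delete keeps the remote a mirror
rsync -az --delete -e ssh /srv/backups/ [email protected]:backups/
```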
AI bots absolutely rip through your sites like something rabid.
SemrushBot being the most rabid from my experience. Just will not take “fuck off” as an answer.
That looks pretty much like how I’m doing it, also as an include for each virtual host. The only difference is I don’t even bother with a 403. I just use Nginx’s 444 “response” to immediately close the connection.
Are you doing the IP blocks also in Nginx or lower at the firewall level? Currently I’m doing it at firewall level since many of those will also attempt SSH brute forces (good luck since I only use keys, but still…)
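For reference, the 444 approach looks roughly like this (the bot list and server name are illustrative):

```nginx
# http-level: flag unwanted crawlers by User-Agent
map $http_user_agent $bad_bot {
    default      0;
    ~*SemrushBot 1;
    ~*AhrefsBot  1;
}

server {
    listen 80;
    server_name example.internal;

    # 444 is nginx-specific: close the connection without sending any response
    if ($bad_bot) {
        return 444;
    }
}
```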
Most containers default to UTC, and depending what you’re running, that may be fine.
I only mount `/etc/timezone` / `/etc/localtime` if I'm running a container where it needs to be on the same timezone as the host (DB containers, anything where I want the logs in local time, etc). Not all containers use the `TZ` env var, so bind mounting the timezone files from the host is a guaranteed way to sync them.
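As a sketch, the bind mounts look like this in compose (the service and image are just examples; `/etc/timezone` is Debian-family specific):

```yaml
services:
  db:
    image: postgres:16
    volumes:
      # Read-only so the container can't alter the host's timezone files
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
```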
I always do some level of RAID. If for no other reason, I’m not out of commission if a disk fails. When you’re working with multi TB, restoring from a backup can take a while. If rapid recovery from a disk failure is not a high priority for you, then you could probably do without RAID.
Either way, make sure you test your backups occasionally.
Another way to put it: With RAID, a disk failure is like your Check Engine light coming on. You can still drive, but you should address the problem as soon as you can. Without RAID, it’s like your engine has seized up and you have to tow it for repair and are without your car until it’s fixed.
I like Joplin. Works offline and syncs with my Nextcloud.
How exactly are “communities offering services” a different thing than “hosted software”?
It’s a lot easier to ask Matt down the street to customize or add a feature than it is to ask Google, FB, etc.
Case in point: I’ve run my own email server since 2013 or so. I’ve got friends and family that use it. One of my friends asked if there was any way to setup rules to filter emails and such. I was like “yep” and added on Sieve to Dovecot and setup the webmail (Roundcube at the time) with the Sieve plugin.
Granted, that’s a pretty basic feature that pretty much all commercial email providers offer, but the point is someone asked for it and I made it happen for them.
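Sieve rules themselves are pretty readable. A filter like the one they asked for might look like this (addresses and folder names are made up):

```sieve
require ["fileinto"];

# File mailing-list traffic into its own folder instead of the inbox
if address :is "from" "[email protected]" {
    fileinto "Lists/Family";
}
```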
I’ve self hosted long before the privacy/subscription nightmare of modern cloud/SaaS platforms was a thing. I do it because I enjoy it (and at the time I got started, I had crap internet so having good local services like offline Wikipedia was important).
Not everyone has to self-host. I run lots of services, mostly for myself, but friends and family who don't know a kernel driver from a school bus driver also use them. So the expectation that everyone self-host is, and always has been, "pie in the sky". And that's okay.
Privacy regulations are all fine and dandy, but even with the strictest ones in place, you still do not own or control your data. You’re still subscribing to services instead of owning software. You can’t extend, modify, or customize hosted software. Self hosting FOSS applications addresses all of those.
So rather than expect everyone to self-host, we should be working towards communities offering services to one another, pooling resources, and letting those interoperate with each other.
To make fun of an old moral panic in the 90s: “It’s 11pm. Do you know where your data is?” Yep, it’s down the street in Matt’s house.
Looks like it, yeah:
The UI still shows Fuel, but it seems like you can enter the kWh and it should calculate. Maybe plug some values into the demo to be sure. If you do, let us know!