I did that with a couple of Mac G4 towers running OS X Server between our house and my wife’s office. It was fun looking at the ssh logs, watching bots try to brute-force the ssh password.
fail2ban is your friend in that situation. Monitors failed ssh attempts and blocks the IP using a firewall rule. Runs great on Linux, but I hear it may work on Mac too.
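For anyone who hasn’t set it up before, a minimal jail override is about all it takes — here’s a sketch (the thresholds are just examples, tune them to taste):

```ini
# /etc/fail2ban/jail.local — minimal sketch for the sshd jail
[sshd]
enabled  = true
port     = ssh
# this many failed attempts...
maxretry = 5
# ...within this window...
findtime = 10m
# ...gets the IP firewalled off for this long
bantime  = 1h
```

Drop it in, restart fail2ban, and `fail2ban-client status sshd` will show you the banned IPs piling up.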
+1 on fail2ban. I have it on every Linux server internally and externally. It is kind of fun to watch people get banned (in a sick sort of way).
Be careful not to set the ban timeout too high and accidentally ban yourself during an ssh login. Ask me how I know about that little issuette on a prod server.
I only use key-based authentication to prod servers. Kinda hard to screw up a password with that
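If anyone wants to go that route, the setup is a couple of commands — a sketch (the key filename and host are made up; in practice give the key a passphrase instead of `-N ""`):

```shell
# Generate a dedicated ed25519 keypair (-N "" keeps this example
# non-interactive; use a real passphrase for prod keys)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_prod -N "" -C "prod-access"

# Install the public key on the server (hypothetical host):
#   ssh-copy-id -i ~/.ssh/id_ed25519_prod.pub you@prod-server
# Then, in the server's /etc/ssh/sshd_config, turn passwords off entirely:
#   PasswordAuthentication no
#   PubkeyAuthentication yes
# and reload sshd.
```

Once `PasswordAuthentication no` is in place, those brute-force bots have nothing to guess at.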
See guys? I KNEW I was in the right place, lol.
There is a piece of software we use at work to sync all of our installer laptops called BTSync. Works OK, I guess. Just thought I’d throw that out there since it sounds just like what you are looking for.
EDIT: BTsync is now called Resilio Sync.
If I changed something and it made a change on the wife’s computer… OMG, I’d be cleaning for a week while fixing it. Her computer is all work and no play, and it tells her if I was there — telepathy of some kind. This year I was finally able to get her from Win XP to 10 and, oh wow, now things work again with the rest of the world. If she lost access to her files on the server, I would never be seen again.
You should have heard my wife when I setup PiHole servers in the house. All of a sudden the first link in Google with the ‘Ad’ symbol no longer works (imagine that). The frustration lasted a few days… It was a very quiet few days.
“Hunt the Wumpus” on thermal teletypes at Peabody College for Teachers. Summer of… '80? '81? Hooked on gaming. Spent hours hand-copying cannon-shooters in TRS-BASIC on the high school TRS-80s when I was in middle school while waiting for my mom (she worked at my school…). Played “Code of Hammurabi” on the same TRS-80s at a friend’s house (from cassette tape, of course). Thought mom was the coolest when she got the first hard drive on campus. A whopping 9MB for her TRS-80 Model III (she eventually got a second one), so she didn’t have to swap those 8" floppies any more. Or at least not as much. Yes, 8" floppies, not those weaksauce 5 1/4" floppies.
*sigh* And I just spent this morning performing an emergency install of a new router for the house. The old one went off into left field with two high schoolers doing remote learning, and me trying to finish up knowledge transfer on my last day of work.
I switched to Unifi gear a few years ago. Have had 0 issues with hardware since. I’m a firm believer that you get what you pay for when it comes to networking gear.
I really need to set up a pihole. I mentioned it to my wife and she didn’t sound very excited, but I can at least just set my DNS there.
I’m running the PiHole container on a pair of coreOS vms on my server. Having two helps with redundancy.
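For the container side, it’s basically the stock compose file — a sketch along the lines of the pihole/pihole image docs (timezone, password, and paths here are placeholders; env var names have shifted between Pi-hole versions, so check the docs for the one you run):

```yaml
# docker-compose sketch for a Pi-hole container
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"
    environment:
      TZ: "America/Chicago"
      WEBPASSWORD: "changeme"
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```

Run the same file on both VMs and point clients at the two IPs for the redundancy.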
If you set up your normal DNS (or a public resolver like 8.8.8.8) as the second DNS, wouldn’t it just fail gracefully?
DNS doesn’t work as a strict primary/fallback. Depending on the OS and resolver, requests may go out to several of the servers in your list (or get rotated between them), and whichever one returns an IP first is the one your system uses.
In the case of a PiHole, the PiHole would fail to return a usable IP, but the secondary resolver would. The whole idea of a PiHole is to refuse DNS requests for known bad-acting domains. So if you set up a secondary DNS pointing straight at a public resolver like 8.8.8.8, your PiHole could end up doing nothing.
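You can see the “first answer wins” mechanic with a toy simulation — two fake “resolvers” with different latencies racing to record an answer (the addresses and timings are made up):

```shell
# Toy race between two "resolvers": each sleeps for its latency, then tries
# to record its answer. First writer wins, like a stub resolver racing upstreams.
winner=$(mktemp -u)   # path only; the file must not exist yet

resolve() {   # resolve <latency-seconds> <answer>
  sleep "$1"
  # noclobber (set -C): the redirect fails if another resolver already answered
  ( set -C; echo "$2" > "$winner" ) 2>/dev/null
}

resolve 0.1 "0.0.0.0" &           # Pi-hole: fast, dummy address for a blocked name
resolve 0.4 "93.184.216.34" &     # public resolver: slower, real address
wait
cat "$winner"                     # → 0.0.0.0 (the faster answer won)
```

Swap the latencies and the “real” answer wins instead — which is exactly why a slow Pi-hole plus a fast public secondary can leak ads through.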
Honestly, I’ve had few issues with my PiHole. CoreOS and the pihole container are both set to auto-update on a regular basis, so having two helps: if CoreOS is rebooting for updates, or the container is restarting for some reason on one, the other will continue to resolve.
Setting up CoreOS was a different story altogether. Red Hat now owns it, and the newer implementation is easier to set up, but still a bit of a pain, as you have to create a checksummed ignition file for the VM to use when first powering on. The nice thing is that the ignition file includes your RSA key, so you don’t have to worry about using a password for initial login.
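For reference, the ignition file usually starts life as a small Butane config that you transpile — a sketch (the variant/version and key are placeholders; check the Butane spec for your CoreOS flavor):

```yaml
# config.bu — Butane sketch; transpile with `butane config.bu > config.ign`
# to get the ignition JSON the VM consumes on first boot
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...your-public-key... you@laptop
```

With the key baked in, first login is key-only and you never touch a password.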
If you’re just going to run it on a Pi, then you can just use a normal install and not worry about docker.
Well, for a ‘blocked’ name, doesn’t the pihole just return the IP address of itself to serve up a blank page so you’re not left hanging? It’d win the speed race vs. 22.214.171.124.
My unifi access point has been pretty good, with the exception that the access point seems to go down if my internet goes down, which is kind of annoying; I’d like to still talk to local devices in that event.
I figured a Netgate pfsense router would be a good, high-reliability device, but it’s been trouble: several corrupted-filesystem recoveries. Can’t believe people rely on these for businesses.
Back to the topic: I started with a Linux HTPC acting as a server. Moved off that to a 4-bay QNAP NAS and ran that for many years; QNAP still updates the software. Docker containers came out for it in the interim, and I used those heavily. Overall a pretty good device, though since I had it for so long it became long in the tooth speed-wise. I planned on building my own more powerful server (to handle video cameras and such) but never got around to it. Until… it finally died. At some point QNAP went from standard Linux ext3/4 filesystems to something proprietary, so I could not read the data by plugging the drives into another machine. Rather annoying. I had backups of the important stuff, but didn’t want to re-rip my Plex library, etc., so I ended up buying the newer 4-bay version. $ I didn’t really want to spend.
So I still have the newer QNAP but it’s now mostly backup for my server box. I somewhat followed ideas in https://blog.linuxserver.io/2019/07/16/perfect-media-server-2019/. I rather like their podcast https://selfhosted.show A lot of work to setup though.
I do have a now spare QNAP TR-004 4 drive expansion enclosure if anyone needs one.
Uh oh. That’s not fun. It wasn’t just a problem of them being in raid?
Now you have me wanting to pull the drives and see if I can read them before it takes a dump.
That would certainly complicate things, but no, I just mirror. It used to be easy to read a single drive off the mirror in that case. It depends on whether you formatted your drives under the new scheme or not. If you can do snapshots on your filesystems, and/or thin partitions, I think you have the new scheme. Supposedly they use some oddball kernel extension that normal distros don’t have to support the thin stuff and snapshots, as well as some closed-source stuff. Some people have been able to get data off, depending on what sort of partitions/RAID they used, but it’s not easy.
That experience made me do a 180 to the snapraid/mergerfs setup. I can access any individual drive if need be, with snapraid giving me some redundancy. Easy expansion too.
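For anyone curious, the snapraid side is basically one config file — a minimal sketch (all the drive names and paths here are made up for illustration):

```ini
# /etc/snapraid.conf — parity on a dedicated drive, content lists in a few places
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
exclude *.tmp
```

The pooling side is typically a single mergerfs fstab line (something like `/mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0`), and since each data drive is a plain filesystem, any one of them mounts on its own if the rest of the setup dies.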
From the CLI, it looks like mine are mounted from LVM as ext4:
/dev/mapper/cachedev1 on /share/CACHEDEV1_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,data_err=abort,delalloc,nopriv,nodiscard,noacl)
It is set up as raid5, and I am a fan of LVM, so that might have been my choice, or maybe that’s how the raid5 is done, IDK.
I just looked, and mine is a TS-451.
My TS-451+ (formerly a TS-451) shows similar. No idea what a ‘/dev/mapper/cachedevX’ is; that’s probably where the abstraction lives, enabling the snapshot/thin stuff. Normally for LVM you’ll see symbolic links from /dev/mapper/ to an LV or a partition or a UUID. IIRC, if you haven’t done snaps or thin vols, it’s possible to figure out the offset where the plain ext4 starts and mount it… maybe. I committed both of those sins.
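The offset-hunting idea boils down to scanning for ext4’s magic number (0xEF53, which sits 1080 bytes into the filesystem). Here’s a toy sketch on a fabricated image — real recovery would lean on tools like testdisk or dumpe2fs, and the offsets here are invented:

```shell
# Build a 1 MiB junk image with an "ext4" magic planted as if the
# filesystem began at the 512 KiB mark.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=1024 2>/dev/null
# 0xEF53 little-endian on disk is the bytes 53 ef
printf '\x53\xef' | dd of="$img" bs=1 seek=$((524288 + 1080)) conv=notrunc 2>/dev/null

# Scan for the magic bytes, then back out the filesystem's start offset
magic_off=$(od -A d -t x1 "$img" |
  awk '{for(i=2;i<NF;i++) if($i=="53" && $(i+1)=="ef"){print $1+(i-2); exit}}')
fs_start=$((magic_off - 1080))
echo "$fs_start"
```

With a real offset in hand, the mount attempt would be `mount -o ro,loop,offset=$fs_start /dev/sdX /mnt/recover` — read-only, just in case.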
You’re right. It does return a dummy IP. It should win a race with an upstream resolver like 8.8.8.8. I never tested.