I am currently running an Unraid server with some docker containers, here are a few of them: Plex, Radarr, Sonarr, Ombi, NZBGet, Bitwarden, Storj, Hydra, Nextcloud, NginxProxyManager, Unifi, Pihole, OpenVPN, InfluxDB, Grafana.
All web-services are reverse-proxied through traefik
At home:
loki + cadvisor + node-exporter + grafana + prometheus
syncthing
tinc vpn server
jackett + radarr + sonarr + transmission
jellyfin
samba server
calibre server
On a remote server: loki + cadvisor + node-exporter + grafana + prometheus
syncthing
tinc vpn server
dokuwiki
firefox-sync
firefox-send
vscode server
bitwarden
freshrss
znc bouncer + lounge irc client + bitlbee
an httptunnel server (like ngrok)
firefly iii
monicahq
kanboard
radicale
syncthing
wallabag
tmate-server

Home server's a Raspberry Pi 4.
The config I ended up using - https://0bin.net/paste/gnWY4+Tn-jZ2UMZm#RgQfZ3uD7MIlK7nWKLLX...
It's deployed on docker, proxied through traefik.
Does anyone have recommendations for password+sensitive-data management?
I'm currently using KeePass and git, but I have one big qualm: you're stuck version-controlling one big encrypted (un-diff-able) file.
They both store passwords/data in gpg-encrypted files in a git repo. I'm not sure what the state of GUIs/browser plugins are for it, but I'm pretty sure there are some out there.
You can also set up your git config to be able to diff encrypted .gpg files so that the files are diff-able even though they're encrypted.
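That git setup looks roughly like this (a sketch; assumes gpg is installed and can decrypt the files non-interactively, and the throwaway repo here is just for illustration):

```shell
# Sketch: make encrypted .gpg files diff-able in git.
# Assumes gpg is on PATH; run these inside the repo that holds the vault
# (a throwaway repo is created here for illustration).
cd "$(mktemp -d)" && git init -q .

# Tell git to decrypt *.gpg files before diffing them:
git config diff.gpg.textconv "gpg --quiet --batch --decrypt"

# Mark *.gpg files as using that diff driver:
echo "*.gpg diff=gpg" >> .gitattributes
```

After this, `git diff` and `git log -p` show plaintext diffs while the committed blobs stay encrypted.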
One other alternative to keepass is pass[1].
What are the issues with syncthing?
I think the problem is entirely caused by the US having absolutely abysmal private internet speeds and capacity. Since you can’t then have your own server at home, you are forced to have it elsewhere with sensible internet connections.
It’s as if, in an alternate reality, no private residences had parking space for cars; no garages, no street parking. Everyone would be forced to either use public transport, taxis and chauffeur services to get anywhere. Having a private vehicle would be an expensive hobby for the rich and/or enthusiasts, just like having a personal server is in our world.
I do everything for little to nothing in my life, and there's no reasonable default as to where the line is other than a cost/benefit comparison.
For me, for many years owning a car was far more expensive than renting or getting taxis when needed. Owning a car absolutely would have been an expensive hobby, and the same is true for many in cities.
Having a personal server is exceptionally cheap. I recently noticed a VPS I'd forgotten to cancel, which cost about 10 dollars per year. That's about one minimum wage hour where I live. If you mean literally a personal server, a Raspberry Pi can easily run a bunch of things and costs about the same as a one-off purchase.
It's time, and the upfront cost of software. If I want updates, and I do (security at least), I need some ongoing payments for those, and then I need to manage a machine. That management is better done by people other than me (even if they earned the same as me, they'd be faster and better), and they can manage more machines without a linear increase in their time.
So why self host? Sometimes it'll make sense, but the idea it should be the default to me doesn't hold. Little needs to be 100% in house, and sharing things can often be far more efficient. Software just happens to be incredibly easy to share.
You can’t outsource your privacy. Once you’ve given your information to a third party, that third party can and will probably use it as much as they can get away with. And legal protection from unreasonable search and seizure is also much weaker once you’ve already given out your information to a third party.
To generalize, and to also answer your other comments in a more general sense, you can’t outsource your freedom or civic responsibility. If you do, you turn yourself into a serf; someone with no recourse when those whom you place your trust in ultimately betray you.
(Also, just like “owning” a timeshare is not like owning your house, having a VPS is not self-hosting.)
I found one on my Linode account last weekend. It's been up since 2010 running Debian 5, no updates because the repos are archived. A couple of PHP sites on there which I don't control the domains of (but the sites were active).
The last email I have from the people there is from 2012, a backup. The company apparently is not in business anymore (I know the domains were registered on the owner's personal account; he might have auto-renew on).
Backed up everything there and shut it down.
The trend definitely traces to the advent and eventual domination of asymmetric Internet connectivity. My first DSL connection was symmetric, so peer-to-peer networking and running servers ("self-hosting") were just natural. Since then, asymmetric bandwidth has ruled the US.
It's not so much that connectivity technology in the US is strictly poor—many cities have options providing hundreds of megabits or a gigabit or more of aggregate bandwidth. It's that the capacity allocation of some shared delivery platforms (e.g., cable) is dramatically biased toward download/consumption, and against upload/share/host. And there's no way for consumers to opt for a different balance. I'd gladly take 500/500 versus 1000/50. Even business accounts, which for their greatly increased costs are a refuge of symmetric connectivity and static IPs, are more commonly asymmetric today.
I think that this capacity imbalance and bias toward consumption snowballs and reinforces the broader assumptions of consumption at the edge (why make a product you self-host when most people don't have the proper connectivity?). This in turn means more centralization of services, applications, and data.
Nevertheless, even with mediocre upload speeds (measured in mere tens of megabits), I insist on self-hosting data and applications as much as I can muster. All of my devices are on my VPN (using the original notion of "VPN," meaning quite literally a virtual private network; not the more modern use of VPN to mean "encrypted tunnel to an Internet browsing egress node located in a data center"). For example, why would I use Dropbox when I can just access my network file system from anywhere? To me, it's a matter of simplicity. Everything I use understands a simple file system.
And most people actually do outsource their jobs. They are employees rather than working for themselves…
That might be true if you are in SF, NY, Toronto, London, or some other major metropolis with a good public transportation network. However, for a large number of places in North America, including metro areas like LA, San Diego, Minneapolis, and Dallas, having a car is close to a necessity, as it is the only way to get around the city without spending half a day in public transit.
Having a car is not a hobby when you live outside of a very dense city center. That's just the tool that enables you to live.
While I know that some car owners do just have it for fun, I think a lot more are because it's useful.
(edit: forgot to state country, Slovenia)
Saudi is like this, I hear, Jakarta too. I assume there's more.
* nginx to reverse-proxy each of the services.
* NextCloud.
* Matrix Homeserver (synapse).
* My website (dumb Flask webapp).
* Tor (non-exit) relay.
* Tor onion service for my website.
* Wireguard VPN (not running in a container, obviously).
All running on an openSUSE Leap box, with ZFS as the filesystem for my drives (a simple stripe over 2-way mirrors of 4TB drives). It also acts as an NFS server for my media center (Kodi -- though I really am not a huge fan of LibreELEC) to pull videos, music, and audiobooks from. Backups are done using restic (and ZFS snapshots to ensure they're atomic) and are pushed to BackBlaze B2.
I used to run an IRC bouncer but Matrix fills that need these days. I might end up running my own Gitea (or gitweb) server one day though -- I don't really like that I host everything on GitHub. I have considered hosting my own email server, but since this is all done from a home ISP connection that probably isn't such a brilliant idea. I just use Mailbox.org.
I plan to use Wireguard too, so I shouldn't run on containers? Can you elaborate on that?
I run it on the host.
This is a bit tangential, but to clarify, do you mean that you listen to audiobooks on your TV using Kodi? Do you also have a way of syncing them to a more portable device, like your phone?
Sometimes, though not very often -- I work from home and so sometimes I'll play an audiobook in my living room and work at the dinner table rather than working from my home office.
> Do you also have a way of syncing them to a more portable device, like your phone?
Unfortunately not in an automated way (luckily I don't buy audiobooks very regularly -- I like to finish one before I get another one). I really wish that VLC on Android supported NFS, but it doesn't AFAIK (I think it requires kernel support).
I used to run Docker containers several years ago, but I found them far more frustrating to manage. It was fairly hairy to make sure --restart policies actually worked properly, the whole "link" system in Docker is pretty frustrating to use, docker-compose has a laundry list of problems, and so on. With LXD I have a fairly resilient setup that just requires a few proxy devices to link services together, and boot.autostart always works.
Personally, I also find it much simpler to manage a couple of services as full-distro containers. Having to maintain your own Dockerfiles to work around bugs (and missteps) in the "official library" Docker images also added a bunch of senseless headaches. I just have a few scripts that will auto-set up a new LXD container using my configuration -- so I can throw away and recreate any one of my LXD containers.
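As a sketch of what that looks like (the container name and ports here are made up, not the actual setup):

```shell
# Sketch: a full-distro LXD container with a proxy device and autostart.
# "web" and the port numbers are illustrative placeholders.
lxc launch images:debian/12 web

# Proxy device: forward host port 8080 to port 80 inside the container.
lxc config device add web http proxy \
    listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80

# Always start this container when the host boots.
lxc config set web boot.autostart true
```

These commands need a running LXD daemon, so treat them as illustrative rather than copy-paste.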
[Note: I do actually maintain runc -- which is the runtime underneath Docker -- and I've contributed to Docker a fair bit in the past. So all of the above is a bit more than just uneducated conjecture.]
Overleaf: https://sdan.xyz/latex
A URL Shortener: https://sdan.xyz
All my websites (https://sdan.xyz/drf, https://sdan.xyz/surya, etc.)
My blog(s) (https://sdan.xyz/blog, https://sdan.xyz/essays)
Commento commenting server (I don't like disqus)
Monitoring (https://sdan.xyz/monitoring, etc.)
Analytics (using Fathom Analytics) and some more stuff!
I wrote this to setup my web server, mail server and VPN server, and auto-generate all my VPN keys.
But at the same time, I understand the security risks. If I have to, I can just stop netdata's container and add some more security on it before turning it on again. (I'm not running some SaaS startup, so security isn't a huge concern, and I don't think you can do anything with my netdata that would affect or expose anything else that makes me prone to attack.)
I see a lot of people putting their home stuff behind CloudFlare, but when I reviewed their free tier, I didn’t actually see any security benefit to outweigh the privacy loss, and I didn’t see that covered on your blog post.
The main thing is being able to hide your origin IP address. That turns many types of DDoS attacks into CloudFlare's problem, not yours, and it doesn't matter that you're on the free tier[0]. If you firewall to only allow traffic from CF[1], then you can make your services invisible to IP-based port scans / Shodan.
CloudFlare isn't a magic-bullet for security, but, used correctly, they greatly reduce the attack surface.
Whether any of that is worth the privacy / security risk of letting CloudFlare MITM your traffic is up to you.
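A minimal sketch of the firewalling step in [1]. The two CIDRs below are an illustrative subset; in practice you'd pull the full lists from https://www.cloudflare.com/ips-v4 and ips-v6 on a schedule:

```shell
# Sketch: emit iptables rules that allow HTTPS only from Cloudflare ranges.
# cf_ranges is a placeholder subset of https://www.cloudflare.com/ips-v4.
cf_ranges="173.245.48.0/20 103.21.244.0/22"

for cidr in $cf_ranges; do
    echo "iptables -A INPUT -p tcp --dport 443 -s $cidr -j ACCEPT"
done
# Everything else gets dropped:
echo "iptables -A INPUT -p tcp --dport 443 -j DROP"
```

The rules are echoed rather than executed so you can inspect them first (and so a stale list can't lock you out unreviewed).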
1. This is hosted on GCP. Actually was thinking of using Cloudflare Argo once my GCP credits expire so that I can truly self host all this (although all I have is an old machine).
2. For me, Cloudflare makes my websites load faster. Security-wise, I have pretty much everything enabled... like always-on HTTPS, etc., and I have some strict restrictions on SSHing into my instance (also note that none of my IP addresses are exposed, thanks to Cloudflare), so really I'm not sure what security risk there may be.
3. How am I losing privacy? Just curious, not really understanding what you're saying there.
Of course, it would simplify privilege escalation if someone successfully attacked the netdata service. If you want a public dashboard, streaming is supposed to be quite safe (there's no way to send instructions to a streaming instance of netdata).
https://github.com/dantuluri/sd2/blob/master/docker-compose....
You'll need Mongo and Redis (last I remember) as well (which I believe are the two images that follow the sharelatex image).
Here’s my home lab: https://imgur.com/a/aOAmGq8
I don’t self host anything of value. It’s not cost effective and network performance isn’t the best. Google handles my mail. GitHub can’t be beat. I use Trello and Notion for tracking knowledge and work, whether personal or professional. Anything else is on AWS. I do have a VPN though so I can access all of this when I’m not home.
The NAS is for backing up critical data. The R720 was bought to experiment with Amazon Firecracker. It's usually off at this point. Was running ESXi, now running Windows Server evaluation.
The desktop on the left is the new toy. I’m learning AD and immersing myself 100% in the Microsoft stack. Currently getting an idiomatic hybrid local/azure/o365 setup going. The worst part about planning a MS deployment is having to account for software licensing that is done on a per-cpu-core basis.
The status quo is radically anti-consumer, IMO, as radical as abolition of all copyright would be.
Of all the ways to try to promote creativity in the 21st century, making information distribution illegal by default and then using force of law to restrict said distribution unless authorized is pretty wack.
> Bums me out when I see people putting so many resources into running/building elaborate piracy machines.
These two comments are rather at odds to me.
That said, IME generally the type of person who's big into self hosting isn't a Microsoft guy. I work with MS stuff at work at the moment. The entire thing is set up for Enterprise and Regulations. It's hugely overcomplicated for that specific goal only.
At home I don't care about Regulations(tm). The only reason I can see for someone to bother with it is if they want to train out of hours for a job at an MS shop.
I also have piracy.
How would _you_ suggest I handle the 2TB of public domain media I have, then?
It seems to hold rack-mounted gear quite well.
Looks like the IKEA IVAR storage system. https://www.ikea.com/kr/en/catalog/categories/departments/li...
nginx
Plex
Radarr / Sonarr / SABnzbd / qBittorrent / ZeroTier -> online.net server
FreeNAS x2
Active Directory
At home: nginx
vCenter
urbackup
UniFi SDN, Protect
Portainer / unms / Bitwarden
Wordpress (isolated)
Guacamole
PiHole
InfluxDB / grafana
Active Directory
Windows 10 VM for Java things
L2TP on my router
Everything I expose to the world goes through CloudFlare and nginx with Authenticated Origin Pulls [0], firewalled to CF's IPs [1], and forced SSL using CF's self-signed certs. I'm invisible to Shodan / port scans.

Have been meaning to move more to colo, especially my Wordpress install and some Wordpress.com-hosted sites, but inertia.
[0] https://support.cloudflare.com/hc/en-us/articles/204899617-A...
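For context, the nginx side of Authenticated Origin Pulls looks roughly like this (a sketch; the server_name and all paths are placeholders, and the CA certificate comes from the CloudFlare doc in [0]):

```nginx
# Sketch: require CloudFlare's client certificate on origin pulls.
# server_name and certificate paths are placeholders.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/cf-origin.pem;
    ssl_certificate_key /etc/nginx/ssl/cf-origin.key;

    # CA that signs CloudFlare's origin-pull client certificate;
    # requests without a valid client cert are rejected.
    ssl_client_certificate /etc/nginx/ssl/origin-pull-ca.pem;
    ssl_verify_client on;
}
```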
I've always been unable to pull this off completely as I always want a way to SSH into my home network - but maybe there is a better way I can pull off this sort of 'break glass' functionality.
I use one for the sites below. It is written in Java/Kotlin, but barely works anywhere except Windows.
...
Home: Two VMware hosts on Hyve Zeus (Supermicro, 2xE5 64GB), one on an HP Microserver Gen8 (E3-1240v2 16GB). PiHole bare metal on a recycled Datto Alto w/ SSD (some old AMD APU, boots faster than a Pi and like 4w). Cloud Key G2 Plus for UniFi / Protect.
VMware because it's what I'm used to. Hyper-V because it's not. Used to have some stuff on KVM but :shrug:
Docker running random stuff
Used to run Pihole until I got an Android and rooted it. Used to mess with WebDAV and CalDAV. Nextcloud is a mess; plain SFTP fuse mounts work better for me. My approach has gone from trying to replicate cloud services to straight-up remoting over SSH (VNC or terminal/mosh depending on connectivity) to my home computer when I want to do something. It's simple and near unexploitable.
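The fuse-mount part is a one-liner with sshfs (a sketch; the host, user, and paths are made up):

```shell
# Sketch: mount the home machine's filesystem over SFTP.
# Requires sshfs; user@homeserver and both paths are placeholders.
sshfs user@homeserver:/home/user /mnt/home \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
```

The reconnect/keepalive options help the mount survive flaky connectivity, which matters when remoting from outside the LAN.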
This is the way it should always have been done from the start of the internet. When you want to edit your calendar, for example, you should be able to do it on your phone/laptop/whatever as a proxy to your home computer, actually locking the file on your home computer. Instead we got the proliferation of cloud SaaSes to compensate for this. For every program on your computer, you now need >1 analogous but incompatible program for every other device you use. Your watch needs a different calendar program than your gaming PC than your smart fridge, but you want a calendar on all of them. M×N programs where you could have just N (those on your home computer) if you could remote easily. (Really it's one dimension more than M×N when you consider all the backend services behind every SaaS app. What a waste of human effort and compute.)
Why computer at home though? For someone who moves around a lot and doesn't invest into "a home", this would be bothersome. Not to mention it's more expensive, in terms of energy and money. I think third-party data centers are fine for self-hosting.
I guess one reason people might gravitate to home hosting is owning your own disks, the tinfoil hat perspective. You can encrypt volumes on public cloud as well, but it's still on someone else's machine. They could take a snapshot of the heap memory and know everything you are doing.
* MinIO: for access to my storage over the S3 API, I use it with restic for device backups and to share files with friends and family
* CoreDNS: DNS cache with blacklisted domains (like Pihole), gives DNS-over-TLS to the home network and to my phone when I'm outside
* A backup of my S3-hosted sites, just in case (bejarano.io, blog.bejarano.io, mta-sts.bejarano.io and prefers-color-scheme.bejarano.io)
* https://ideas.bejarano.io, a simple "pick-one-at-random" site for 20,000 startup ideas (https://news.ycombinator.com/item?id=21112345)
* MediaWiki instance for systems administration stuff
* An internal (only accessible from my home network) picture gallery for family pictures
* TeamSpeak server
* Cron jobs: dynamic DNS, updating the domain blacklist nightly, recursively checking my websites for broken links, keeping an eye on any new release of a bunch of software packages I use
* Prometheus stack + a bunch of exporters for all the stuff above
* IPsec/L2TP VPN for remote access to internal services (picture gallery and Prometheus)
* And a bunch of internal Kubernetes stuff for monitoring and such
I still have to figure out log aggregation (probably going to use fluentd), and I want to add some web-based automation framework like NodeRED or n8n.io for random stuff. I'd also like to host a password manager, but I still have to study that.
I also plan on rewriting wormhol.org to support any S3 backend, so that I can bind its storage with MinIO.
And finally, I'd like to move off single-disk storage and get a decent RAID solution to provide NFS for my cluster, as well as a couple more nodes to add redundancy and more compute.
Edit: formatting.
I would be _very_ interested in a write up/explanation of this set up
Essentially, this setup achieves 5 features I wanted my DNS to have:
- Confidentiality: from my ISP; and from anyone listening to the air for plain-text DNS questions when I'm on public WiFi. Solution: DNS-over-TLS[1]
- Integrity: of the answers I get. Solution: DNS-over-TLS authenticates the server
- Privacy: from web trackers, ads, etc. Solution: domain name blacklist
- Speed: as in, fast resolution times. Solution: caching and cache prefetching[2]
- Observability: my previous DNS was Dnsmasq[3], AFAIK Dnsmasq doesn't log requests, only gives a couple stats[4], etc. Solution: a Prometheus endpoint
CoreDNS ticks all of the above, and a couple others I found interesting to have.
To set it up, I wrote my own (better) CoreDNS Docker image[7] to run on my Kubernetes cluster; mounted my Corefile[8] and my certificates as volumes, and exposed it via a Kubernetes Service.
The Corefile[8] essentially sets up CoreDNS to:
- Log all requests and errors
- Forward DNS questions to Cloudflare's DNS-over-TLS servers
- Cache questions for min(TTL, 24h), prefetching any domains requested more than 5 times over the last 10 minutes before they expire
- If a domain resolves to more than one address, it automatically round-robins between them to distribute load
- Serve Prometheus-style metrics on 9153/TCP, and provide readiness and liveness checks for Kubernetes
- Load the /etc/hosts.blacklist hosts file (which has just short of 1M domains resolved to 0.0.0.0), reload it every hour, and skip reverse lookups for performance reasons
- Listen on 53/UDP for regular plain-text DNS questions (LAN only), and on 853/TCP for DNS-over-TLS questions, which I have NAT'd so that I can use it when I'm outside
The domain blacklist I generate nightly with a Kubernetes CronJob that runs a Bash script[9]. It essentially pulls and deduplicates the domains in the "safe to use" domain blacklists compiled by https://firebog.net/, as well as removing (whitelisting) a couple hosts at the end.
That's pretty much it. The only downside to this setup is that CoreDNS takes just short of 400MiB of memory (I guess it keeps the resolve table in memory, but 400MiB!?) and lately I'm seeing some OOM restarts by Kubernetes as it surpasses the 500MiB hard memory limit I set on it. A possible solution might be to keep the resolve table in Redis, which might take up less memory, but I have yet to try that out.
[1] Which I find MUCH superior to DNS-over-HTTPS. The latter is simply a L7 hack to speed up adoption, but the correct technical solution is DoT, and operating systems should already support it by now (AFAIK, the only OS that supports DoT natively is Android 9+).
[2] It was when I discovered CoreDNS' cache prefetching that I convinced myself to switch to CoreDNS.
[3] http://www.thekelleys.org.uk/dnsmasq/doc.html
[4] It gives you very few stats. I also had to write my own Prometheus exporter[5] because Google's[6] had a fatal flaw and no one responded to the issue. In fact, they closed the Issues tab on GitHub a couple months after my request, so fuck you, Google!
[5] https://github.com/ricardbejarano/dnsmasq_exporter
[6] https://github.com/google/dnsmasq_exporter (as you can see the Issues tab is no longer present)
[7] https://github.com/ricardbejarano/coredns, less bloat than the official image, runs as non-root user, auditable build pipeline, compiled from source during build time. These are all nice to have and to comply with my non-root PodSecurityPolicy. I also like to run my own images just so that I know what's under the hood.
[8]
local:65535 {
    ready
    health
}

(global) {
    log
    errors
    cache 86400 {
        prefetch 5 10m 10%
    }
    dnssec
    loadbalance
    prometheus :9153
}

(cloudflare) {
    forward . tls://1.1.1.1 tls://1.0.0.1 {
        tls_servername cloudflare-dns.com
    }
}

(blacklist) {
    hosts /etc/hosts.blacklist {
        reload 3600s
        no_reverse
        fallthrough
    }
}

.:53 {
    import global
    import blacklist
    import cloudflare
}

tls://.:853 {
    import global
    import blacklist
    import cloudflare
    tls /etc/tls/fullchain.pem /etc/tls/privkey.pem
}
[9]
#!/bin/bash

HOSTS_FILE="/tmp/hosts.blacklist"
HOSTS_FILES="$HOSTS_FILE.d"

mkdir -p "$HOSTS_FILES"

download() {
    echo "download($1)"
    curl \
        --location --max-redirs 3 \
        --max-time 20 --retry 3 --retry-delay 0 --retry-max-time 60 \
        "$1" > "$(mktemp "$HOSTS_FILES"/XXXXXX)"
}
# https://firebog.net/
## suspicious domains
download "https://hosts-file.net/grm.txt"
download "https://reddestdream.github.io/Projects/MinimalHosts/etc/MinimalHostsBlocker/minimalhosts"
download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/KADhosts/hosts"
download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/add.Spam/hosts"
download "https://v.firebog.net/hosts/static/w3kbl.txt"
## advertising domains
download "https://adaway.org/hosts.txt"
download "https://v.firebog.net/hosts/AdguardDNS.txt"
download "https://raw.githubusercontent.com/anudeepND/blacklist/master/adservers.txt"
download "https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt"
download "https://hosts-file.net/ad_servers.txt"
download "https://v.firebog.net/hosts/Easylist.txt"
download "https://pgl.yoyo.org/adservers/serverlist.php?hostformat=hosts;showintro=0"
download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/UncheckyAds/hosts"
download "https://www.squidblacklist.org/downloads/dg-ads.acl"
## tracking & telemetry domains
download "https://v.firebog.net/hosts/Easyprivacy.txt"
download "https://v.firebog.net/hosts/Prigent-Ads.txt"
download "https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-blocklist.txt"
download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/add.2o7Net/hosts"
download "https://raw.githubusercontent.com/crazy-max/WindowsSpyBlocker/master/data/hosts/spy.txt"
## malicious domains
download "https://s3.amazonaws.com/lists.disconnect.me/simple_malvertising.txt"
download "https://mirror1.malwaredomains.com/files/justdomains"
download "https://hosts-file.net/exp.txt"
download "https://hosts-file.net/emd.txt"
download "https://hosts-file.net/psh.txt"
download "https://mirror.cedia.org.ec/malwaredomains/immortal_domains.txt"
download "https://www.malwaredomainlist.com/hostslist/hosts.txt"
download "https://bitbucket.org/ethanr/dns-blacklists/raw/8575c9f96e5b4a1308f2f12394abd86d0927a4a0/bad_lists/Mandiant_APT1_Report_Appendix_D.txt"
download "https://v.firebog.net/hosts/Prigent-Malware.txt"
download "https://v.firebog.net/hosts/Prigent-Phishing.txt"
download "https://phishing.army/download/phishing_army_blocklist_extended.txt"
download "https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-malware.txt"
download "https://ransomwaretracker.abuse.ch/downloads/RW_DOMBL.txt"
download "https://ransomwaretracker.abuse.ch/downloads/CW_C2_DOMBL.txt"
download "https://ransomwaretracker.abuse.ch/downloads/LY_C2_DOMBL.txt"
download "https://ransomwaretracker.abuse.ch/downloads/TC_C2_DOMBL.txt"
download "https://ransomwaretracker.abuse.ch/downloads/TL_C2_DOMBL.txt"
download "https://zeustracker.abuse.ch/blocklist.php?download=domainblocklist"
download "https://v.firebog.net/hosts/Shalla-mal.txt"
download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/add.Risk/hosts"
download "https://www.squidblacklist.org/downloads/dg-malicious.acl"
cat "$HOSTS_FILES"/* | \
sed \
-e 's/0.0.0.0//g' \
-e 's/127.0.0.1//g' \
-e '/255.255.255.255/d' \
-e '/::/d' \
-e '/#/d' \
-e 's/ //g' \
-e 's/ //g' \
-e '/^$/d' \
-e 's/^/0.0.0.0 /g' | \
awk '!a[$0]++' | \
sed \
-e '/gamovideo.com/d' \
-e '/openload.co/d' > "$HOSTS_FILE"
rm -rf "$HOSTS_FILES"

I remember comparing low-power home servers, consumer NASes, and a refurb ThinkPad, and the latter won when considering price/performance and idle power consumption (<5W). You also get a built-in screen and keyboard for debugging, and an efficient DC UPS if you're brave enough to leave the batteries in. That's of course assuming you don't need multiple terabytes of storage or run programs that load the CPU 24/7, which I don't. These days an RPi 4 would probably suffice for my needs, but I still think the refurb ThinkPad is a smart idea.
I do leave the batteries in. Is it dangerous? I read some time ago that it isn't, but that the capacity of the battery drops significantly. I don't care about capacity, and safe shutdowns are important to me.
In the past I used an HP DL380 Gen. 7 (which I still own, and wouldn't mind selling as I don't use it), but I had to find a solution for the noise. And power consumption came to around 18EUR a month at my EUR/kWh rate.
Cramming what ran on 12 cores and 48GiB of RAM onto a 2-core, 4GiB machine (I only upgraded the memory 2 months ago) was a real challenge.
The ThinkPad cost me 90EUR (IBM refurbished), we bought two of them, the other one burnt. The recent upgrades (8GiB kit + Samsung Evo 1TB) cost me around 150EUR. Overall a really nice value both in compute per EUR spent and in compute per Wh spent. Really happy with it, I just feel it is not very reliable as it is old.
(blacklist) {
hosts /etc/hosts.blacklist {
reload 3600s
no_reverse
fallthrough
}
}
.:53 {
import blacklist
... (more config)
}

https://github.com/epoupon/lms for music
https://github.com/epoupon/fileshelter to share files
Everything is packaged on Debian buster (amd64 and armhf) and runs behind a reverse proxy.
One UI question: is there a reason you left off volume controls? That's something that still annoys me about Bandcamp, and I once submitted a patch to Mastodon to add a volume control to their video component.
I have around 10 desktops that run in containers in various places for various common tasks I do. Each one has a backed up homedir, and then I have a ZFS-backed fileserver for centralized data. I connect to them using chrome remote desktop or x2go. I've had my work machine die one time too many, so with these scripts I can go from a blank work machine to exactly where I left off before the old one died, in a little over an hour. None of my files are stuck to a particular machine, so I can run on a home server, and then when I need to travel, transfer the desktop to a laptop, then transfer it back again when I get home. Takes about 10 minutes to transfer it.
https://github.com/kstenerud/virtual-builders
I also run most of my server apps this way:
https://github.com/kstenerud/virtual-builders/tree/master/ma...
Incoming mail points directly to an RPi at home on dsl... Postfix + Dovecot IMAP. It's externally accessible, my dedicated server does the dynamic dns to point to the RPi; the domain MX points to that. Outgoing mail forwards through the dedicated server, which has an IP with good reputation and DKIM.
This gets me a nice result that my current and historical email is delivered directly to, and stays at, home, and my outgoing mail is still universally accepted. There's no dependency on google or github. There's no virtualization, no docker, no containers, just Linux on the server and on the rpi to keep up to date. It uses OS packages for everything so it stays up to date with security updates.
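The outbound-relay half of that can be sketched as a Postfix main.cf fragment (the relay hostname is a placeholder, and the real setup may differ in details like SASL):

```
# Sketch of /etc/postfix/main.cf on the home RPi: relay all outgoing
# mail through the dedicated server. relay.example.com is a placeholder.
relayhost = [relay.example.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
```

The brackets around the hostname tell Postfix to skip the MX lookup and connect to that host directly, which is what you want for a fixed smarthost.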
I also host Aether P2P (https://getaether.net) on a Raspberry Pi-like device, so it helps the P2P network. But I’m biased on that last one, it’s my own software.
- matrix home server
- xmpp server
- websites for wife and I (Cloudlinux, Plesk, Imunify360)
- nextcloud
- jellyfin + jackett + sonarr + radarr
- rutorrent
- CDN origin server (bunnycdn pulls files from this)
- znc bouncer
- freeipa server
- Portainer with pihole, Prometheus, grafana and some microservices on them
- Gitea server
- spare web server I use as staging environment
All of this is behind a firewall. I've been fortunate enough to have a /27 assigned to me, so more than enough IP addresses; I'm using all but about 5 or 6 of them, but plan to change that soon. I'm going to assign dedicated IPs to every site I host (3 total) and put my XMPP server on its own VM with its own IP, instead of sharing one with Matrix.

I blog about this stuff if anyone's interested: https://thegeekbin.com/
VM management: libvirt (used to host gaming PC and financial applications)
Container management: Docker (used to be k8s but gave up)
Photo gallery: Koken, Piwigo, Lychee
Media acquisition: Radarr, Sonarr, NZBGet, NZBHydra
Media access: Plex
Monitoring: InfluxDB, Grafana, cAdvisor, Piwik, SmartD, SmokePing, Prometheus
Remote data access: Nextcloud
Local data access: Samba, NFS
Data sync: Syncthing
WireGuard
Unifi server
IRC: irssi, WeeChat, Glowing Bear, Sopel (runs a few bots)
Finance: beancount-import, fava
Chat: Riot, Synapse (both Matrix)
Databases: Postgres, MariaDB, Redis
Speed test: IPerf3
I also have a seedbox for high-bandwidth applications.
You don't want a less-tested web app to expose some security hole that lets someone start snooping on your traffic toward Bitwarden after SSL termination.
If you don't want an extra box at home, you can always get a $5/mo cloud instance for public stuff, so you don't have to worry about a DDoS spiking your CPU (and your electricity bill) or choking your home network.
On the front end I have two 1Gbit circuits (AT&T and Google) going into an OPNsense instance doing load balancing and IPS, running on a Dell R320 with a 12-thread Xeon and 24GB of RAM.
Services are hosted on a Dell R520 with 48GB RAM and two 12-thread Xeons running Ubuntu and an up-to-date ZFS on Linux build.
Media storage handled by two Dell PowerVault 1200 SAS arrays.
Back-end is handled by a Cisco 5548UP and my whole apartment is plumbed for 10Gbit.
Holy hell. How did that come about?
I live in a stable first-world democracy. Or, since it seems to be getting less stable recently, maybe a better way to put it is: I participate in a stable global economy. If "the cloud" catastrophically fails to the point where I lose all of the above without warning, I will likely have bigger problems than never being able to watch a favorite tv show again.
I wonder if this exposes two kinds of people: those who value mobility, and are more comfortable limiting the things that are important to them to a laptop and a bug-out bag, and those who value stability, and are inclined to build self-sufficient infrastructure in their castles.
I don't self-host a lot of services (and the ones I do could go away tomorrow without hurting me much), but I have only one cloud resource: email. It kind of has to be that way for various reasons; I'd self-host if I could reasonably do so. I also think I value my $75/mo more than I value an endless stream of entertainment.
(edit: just wanted to say, thanks for posting this. It is a valuable discussion point.)
* It's a target for my rsync backups for all my client systems (most critical use); Docker TIG stack (Telegraf, InfluxDB, Grafana) which monitors my rackmount APC UPS, my Ubiquiti network hardware, Docker, and just general system stats; Docker Plex; Docker Transmission w/VPN; Docker Unifi; A custom network monitor I built that just pings/netcats certain internal and external hosts (not used too seriously but it comes in handy); and finally a neglected Minecraft server.
I went for low power consumption since it's an always-on device and power comes at a premium here + fanless. I highly suggest the NUC as it's a highly capable device and with plenty of power if upgraded a bit!
https://dischord.org/2019/07/23/inside-the-sausage-factory/
At home I have:
A Synology DS412+ with 4 x 4TB drives
An ancient HP Microserver N36L with 16GB RAM and 4 x 4TB drives running FreeBSD
Ubiquiti UniFi SG + CloudKey + AP
An OG Pi running PiHole
The DS412+ is my main network storage device, with various things backed up to the Microserver. Aside from the OEM services it also runs Minio (I use this for local backups from Arq), nzbget, and Syncthing in Docker containers.
FreeBSD server running various things:
* Home Assistant, Node-RED, and some other home automation utilities running in a FreeBSD Jail.
* UniFi controller in a Debian VM.
* Pi-Hole in a CentOS VM.
* StrongSwan in a FreeBSD VM.
* ElasticSearch, Kibana, Logstash, and Grafana running in a Debian VM.
* PostgreSQL on bare metal.
* Nginx on bare metal, this acts as a front-end to all of my applications.
I also have:
* Blue Iris on a dedicated Windows box. This was a refurbished business desktop and works well, but my needs are starting to outgrow it.
* A QNAP NAS for general storage needs.
Future plans are always interesting, so in that vein here are my future plans:
Short term:
* Move my home automation stuff out of the FreeBSD Jail into a Linux VM. The entire Home Assistant ecosystem is fairly Linux-centric and even though it works on FreeBSD, it's more pain than I'd really like. Managing VMs is also somewhat easier than managing Jails, though I'm sure part of this is that I'm using ezjail instead of something more modern like iocage.
* Get Mayan-EDMS up and running. I hate paper files, this will be a good way to wrangle all of them. I've used it before, but didn't get too deep into it. This time I'm going all-in.
Medium term:
* Replace my older cameras with newer models.
* Possibly upgrade my Blue Iris machine to a more powerful refurbished one.
* Create a 'container VM', which will basically be a Linux VM used for me to learn about containers.
Long term:
* Replace my FreeBSD server with new hardware running a proper hypervisor (e.g., Proxmox, VMware ESXi). This plan is nebulous as what I have meets my needs, this is more about learning new tools and ways of doing things.
• Apache: hosting a few websites and a personal (private) wiki.
• Transmission: well, as an always-on torrent client. Usually I add a torrent here, wait for it to download and then transfer it via SFTP to my laptop.
• Gitea: mostly to mirror third party repos I need or find useful.
• Wireguard: as a VPN server for all my devices and VPS, mostly so I don't need to expose SSH to the internet. Was really easy to setup and it's been painless so far.
I also used to have all my DVDs ripped onto my media server, but I never really watched any of them, so now they are just gathering digital dust on some offline disks.
The other thing that is bothering me is that songs keep disappearing from my playlists every once in a while.
People keeping their own movie library makes perfect sense, as there are still no services today, that I know of, with access to all the movies a given person might want; and when there are, the service is bastardised by some region lock.
Sure I could buy DRM-laden stuff from some online store but there's no guarantee I can access it forever. I could buy a bunch of Blu-Rays or DVDs and stick them on a shelf but that's not convenient. I could pay for a subscription service but not a single one has anything close to everything I want to watch.
- httpd
- nextcloud (mostly for android syncing, for normal file operations I prefer sftp). Nextcloud is great but the whole js/html/browser is clumsy.
- roundcube (again mostly IMAP, but just to have an alternative when my phone isn't available; I haven't used it for ages)
- postfix
- dovecot
- squid on a separate FIB with a paid VPN (MITMing all the traffic, removing all internet "junk" from my connections; all my devices, including Android, use it over an SSH tunnel).
- transmission, donating my bandwidth to some OSS projects
- gitolite, all my code goes there
I think this is it.
Everything is running on mitx board, with 16gb of ram, 3x 3tb toshiba hdds in zraid and additional 10tb hitachi disk. FreeBSD. 33 watts.
It costs about $800/month for the half cage and all the hardware in it, when you amortise it out. And there's plenty of performance overhead for when one project gets a lot of attention or I want to add something new.
Pretty much the only thing I use cloud computing for is the nightly job for S3stat, because it fits the workload pattern that EC2 was designed for. Namely, it needs to run 70-odd hours of computing every day, and gets 3 hours to do it in.
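That workload pattern is exactly what on-demand instances are for; a quick sketch of the arithmetic (the 70-hour and 3-hour figures are from the comment above, everything else is illustrative):

```python
import math

compute_hours = 70  # total nightly work for the job
window_hours = 3    # wall-clock window it must finish in

# Fan the work out across enough instances to fit the window,
# then shut them all down until tomorrow night.
instances = math.ceil(compute_hours / window_hours)
print(instances)  # 24
```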
For SaaS sized web stuff, self hosting still makes the most sense.
So I set up Yunohost [0] on a small box, and now I install self-hosted services whenever I need them. Installing a new service is a breeze, but more importantly, upgrading them is a breeze too.
For now I self host Mattermost, Nextcloud, Transmission.
Tbh I run hot and cold about self-hosting, since after work I really, really want to be able to relax at home.
Not wonder why the hell my NUC hasn't come up after a reboot, or why it's so hard to increase the disk space on my FreeNAS https://www.ixsystems.com/community/threads/upgrading-storag...
I wasn't happy with any of the free wiki hosting solutions available so I ended up self-hosting a mediawiki site. It's been...challenging...to convince my wife and family to adapt and use wiki markup.
I've been considering switching to something that uses standard markdown instead since it's easier to write with.
For me I'm just after a simple pure text knowledge-base.
Currently I use vuepress https://vuepress.vuejs.org/
The positives with vuepress for me were:
* Plain Markdown (With a little bit of metadata)
* Auto generated search (Just titles by default)
* Auto Generated sidebar menus
The negatives:
* No automatic site contents, I mostly use the search to move around docs
* Search is exact not fuzzy
* The menu settings are in a hidden folder
Active Directory (x2)
Exchange Server 2013
MS SQL
Various Single Purpose VMs providing automation
Debian for SpamAssassin
Debian for my web domains
Custom SMTP MTA that's in front of SpamAssassin and Exchange
Raspberry Pis: TVHeadEnd
Remote Cameras
Plus a Windows Server hosting all my files/media.
I used to self-host a lot more, but have been paring back recently.
Home automation/security system + 'Alexa': completely home grown using python + android + arduino + rpi + esp32
I have hosted media folders/streaming applications for friends and family, but this has been by far my most used and most useful hack.
* Unbound for dns-over-tls and single point of config hostnames for my home network
* Syncthing for file sync
* offlineimap to backup my email accounts
* Samba for a home media library
* cron jobs to backup my shares
* Unifi controller
On my todo list:
* Scheduled offsite backup (borg + rsync.net being the top contender currently)
* Something a bit more dedicated to media streaming than SMB; some clients like VLC handle it fine, others do not.
* Pull logs for my various websites locally
What do you all spend on this sort of thing? Whether hosting remotely or on local hardware, what would you say is the rough monthly/annual cost to move your Netflix/Spotify/etc equiv to a self-hosted setup (excluding own labor)?
Websites - nothing. Using a GCP free server. About to move it to Oracle's free VMs though, thanks to GCP's IPv4 shenanigans and Oracle's free offering being better (higher IO & you get two VMs).
Personally I have a home server which has minimal monthly costs. I just buy disks every now and then.
- A weather station that lives on a pole on the yard. Powered by GopherWX https://github.com/chrissnell/gopherwx
- InfluxDB for weather station
- Heatermeter Barbecue controller
- oauth2_proxy, fronted by Okta, to securely access the BBQ controller while I'm away. This proxy is something that everyone with applications hosted on their home network should look into. Combined with Okta, it's much easier than running VPN.
In the public cloud, I host nginx, which runs a gRPC proxy to the gopherwx at home. I wrote an app to stream live weather from my home station to my desktops and laptops and show it in a toolbar.
nginx in the cloud also hosts a public website displaying my live weather, pulled as JSON over HTTPS from gopherwx at home.
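The toolbar app is essentially just "fetch JSON, format one line". A minimal Python sketch of the formatting half; the field names are made up for illustration, not gopherwx's actual schema:

```python
import json

# A hypothetical payload in the shape a personal weather API might return.
payload = '{"temperature_f": 72.5, "wind_mph": 4.2, "humidity": 31}'

def format_toolbar(raw: str) -> str:
    """Render the station's JSON as a one-line toolbar string."""
    w = json.loads(raw)
    return f'{w["temperature_f"]:.0f}°F  wind {w["wind_mph"]} mph  RH {w["humidity"]}%'

print(format_toolbar(payload))
```

The real version would fetch the JSON over HTTPS on a timer and push the string to the desktop toolbar widget.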
I have a second raspberry pi running a version of Kali Linux. I only hack my own stuff for learning.
Once upon a time I ran a public facing website and quake server, and published player stats. No time these days for much play.
Man, at my last job in a large enterprise, I WISH they were running fingerd. Would have made for some pretty cool, lightweight integrations.
(I guess these may not really be “self-hosted” since I don’t make them publically accessible through ports ... just vpn in to my home network)
- my websites with nginx
- IRC (ngircd)
- ZNC
- espial for bookmarks and notes
- node-red to automate RSS -> twitter and espial -> pinboard
- transmission
- some reddit bots manager I’ve written in Haskell+Purescript.
- some private file upload system mostly to share images in IRC in our team
- goaccess to self host privacy respecting analytics
At home, Plex.
Basically all the stuff I don't want to pay a cloud provider to host.
Overall, the R720 with 48GB of RAM has been one of my best buys, hands down. Down the road I plan on grabbing a second server and a proper NAS or Unraid setup.
- docker (just a dev env with a lot of images; almost everything I can is tested in there, and maybe used there too. Only on a VM if it's a desktop gadget or app)
- generic web
- some stacks, Rails, nodejs, php.
- ...
- Calibre
- Windows Media share feature for remote videos on devices and TVs (don't really like it, it messes with subtitles, and I will look for a Docker OSS alternative)
Wish list:
- wallabag
- firefox-sync (still stuck on Chrome for now, no alternative found for this)
- email sync
It's not so great for now. Looking on this thread for contacts and calendar (currently used from the cloud classic providers)
Everything. I keep infrastructure simple, as I found that, as a developer, infrastructure configuration, dependency issues, and updates took an extraordinary amount of time while providing zero benefit for products of small to medium size. I do have a plan in place should I need to scale, but it is not worth maintaining an entirely different stack full of dependencies for the off chance I get a burst of traffic I can't handle.
nginx
mailinabox (email, nextcloud)
gogs
6 static websites
3 (dumb) little personal web-projects
selfoss
mumble
openvpn
# rpi-3 at home osmc (kodi) + 8TB of raided HDDs
nginx
chorus-2 in kodi publicly available (behind htpasswd) updated w/ dynamic DNS
a nightly cron job rsyncs from the linode instance
# another rpi-3 in garden shed 8TB of raided HDDs
nightly cron of the other rpi-3
- mail server in Docker container
- ZNC in Docker container
- Shadowsocks server
- Wekan as a Snap
- My blog, statically generated using Pelican, served from nginx
At home, I only have a Synology NAS that is exposed to the internet.
I am unhappy with the complexity of Mayan EDMS. I'm debating moving to Paperless. All I want is a digital file system that 1) looks at directories and automatically handles files 2) has user permissions/personal files so I can let my family use it 3) has a web form for uploads.
I am planning to change gitea to sourcehut- the git service as well as builds.
Any ideas for things a raspberry pi 3 & 4 could be useful for?
I use NFS on the NAS for the storage unit. It's the only thing I need to backup.
Relying on streaming providers, cloud email services, etc., has left me in a very foul mood lately, and I feel like I need to take back control. My biggest trigger was when I purchased an actual physical audio CD (this year; because NONE of the popular streaming providers offer the album), ripped it to FLAC, and then realized I had no reliable/convenient way to expose it to my personal devices. I used to have a very elaborate setup with Subsonic doing music-hosting duty, and all of my personal devices were looped in on it. This was vastly superior to Spotify et al., but the time it took to maintain the collection and services was perceived to be not worth it. From where I'm sitting now, it's looking like it's worth it again.
How long until media we used to enjoy is squeezed completely out of existence because a handful of incumbent providers feel it's no longer "appropriate", for whatever money-grabbing reasons?
* Pleroma/Mastodon - I had been using Pleroma, but I'm not happy about a few things, so I bit the bullet to upgrade to a t3.small and am now running Mastodon. I love all the concepts of the fediverse, though the social norms are still being ironed out.
* Write Freely (https://writefreely.org/) at https://lesser.occult.institute for my blog (right now mostly holds hidden drafts)
* Matrix (Synapse) and the Riot.im frontend for a group chat. I'm a little conflicted, because right now the experience around enabling E2EE is very alarming for low-tech users and a pain for anyone who signs in from many places, and if it isn't enabled I have better security just messaging my friends with LINE. That said, I really want to write some bots for it. Group chats are the future of social networking, they all say...
Surprisingly (at least to me), there are some really big companies like Microsoft, IBM/RedHat, and others pushing this workflow. The editor is supposed to basically be VSCode in browser and compatible with most extensions.
I'm using my RPi as a jump box and have some commands to turn on my home desktop + mount the file system and that kind of stuff when connecting. I've used it in the past and it's worked nicely.
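Turning on the desktop from a jump box is typically done with Wake-on-LAN. A small Python sketch of the standard magic packet (the MAC address is hypothetical, and the desktop's BIOS/NIC must have WoL enabled):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6x 0xFF, then the MAC repeated 16 times."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(raw) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + raw * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the packet as a UDP broadcast on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")  # hypothetical MAC of the home desktop
```

After the desktop answers pings, an SSH session can mount its filesystem (e.g. over sshfs) as the commenter describes.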
I got k8s running but got blocked by some bugs when installing Che. Looks neat though. It would be cool to have a 2007 macbook with the computing power of a 2990WX workstation :).
The orchestrator can now deploy itself! All declarative service configuration with autoscaling etc. It manages the infra and service deployment for me. Thinking about open sourcing.
Nginx/nchan, NodeJS, static sites (vanilla/angular/react deployments), nfs, MongoDB, Redis
I still have the email domain, because it's easier to run it forever than to migrate all the things you signed up for. But actually running my own email is too much of an obligation, what with needing to keep up on all the anti-spam measures.
VMware ESXi, with VMs for Squid, DNS, MySQL, Nginx, Apache, a basic file server, GitLab, and one that's basically for irssi.
Strongly considering just moving everything to Debian with containers for everything; easier to manage than VMs.
On colo’d hardware:
- off-site backup server (Borg backup on top of zfs) - this is a dedicated box
- a mix of VMs and docker containers - mostly custom web apps
- email (it’s easier than you think)
At home:
- file server using zfs
- Nextcloud
- more custom web apps
- tvheadend
- VPN for remote access (IKEv2)
- gitlab
- gitlab ci
Also run an IPSec mesh between sites for secure remote access to servers etc
While my workplace uses AWS a massive amount, I still prefer to run my own hardware and software. Cloud services are not for me.
* Nextcloud - your own Dropbox! Amazing stuff.
* VPN - simple Docker service that is super reliable and easy to set up (docker-ipsec-vpn-server)
* Ghost - a very nice lean and mean blogging CMS
* MQTT broker for temperature sensors
* Samba server
* Deluge - Torrent client for local use
* Sabnzbd - NZB client
* Gitea - my own Git server
* Mail forwarder - very handy if you just want to be able to receive email on certain addresses without setting up a mailbox
* Pihole - DNS ad-blocking
* Jellyfin - self-hosted Netflix
It's become sort of my hobby to self-host these kind of things. I use all of these services almost daily and it's very rewarding to be able to fully self-host it. I also really love Docker, self-hosting truly entered a new era thanks to readily avaibable Docker images that make it very easy to experiment and run things in production without having to worry about breaking stuff.
Of course you can't even tell macOS not to suspend Wi-Fi (or whatever) if you close the lid while on battery, so now I'm trying to move it to a Raspberry Pi 4, but I've got an obscure SSL error with OTP 22 on it while querying an API, so I'm trying to debug that instead... oh the joy.
All my side projects and some clients are hosted old-school style on a dedicated server. I do overpay, because it's the same price and machine since 2013, and yet it's still way cheaper than any cloud offering, especially because of hosted database pricing.
TT-RSS + mercury-parser + rss-bridge + Wallabag to replace Feedly and Pocket.
Syncthing + restic + rclone and some home grown scripting for backups.
Motion + MotionEye for home security.
Deluge + flexget + OpenVPN + Transdroid.
Huginn + Gotify for automation and push notifications.
Apache for hosting content and reverse proxying.
Running on a NUC using a mix of qemu/kvm and docker containers.
- Nginx
- Nextcloud (with Calendar/Contacts on it)
- IRC client (thelounge)
- IRC server
- DLNA server
- Ampache server
- video and photo library thru NFS (locally only)
- OpenVPN
- Shiori for bookmarks
- Gitea for private projects
- Syncthing (to keep a folder synchronized across my devices)
- Jenkins
What do you feed into Grafana?
I have a home server + some raspberry pis lying around that I want to start using.
* Email (postfix + dovecot)
* XMPP (prosody + biboumi for IRC gateway)
* Static websites
* Mercurial code hosting (mercurial-server + hgweb)
* File storage (sftp, mostly accessed via sshfs)
Some on an HP microserver somewhere, some on a VPS.
The only things I host are either just hobbies or non-essentials:
At home:
- Node-RED for home automation
- PiHole for ad filtering on the local network
- Plex on my NAS for videos
- A Raspi for reading my Ruuvitags and pushing the info to MQTT
On Upcloud and DigitalOcean and a third place:
- Unifi NVR (remote storage for security cameras)
- Flexget + Deluge for torrents
- InfluxDB + Grafana for visualizing all kinds of stuff I measure
- Mosquitto for MQTT
- Nextcloud
- Mailu.io
- Huginn
- Gotify
- Airsonic
- Gitea
All on a dedicated box. Planning to add password sync, wallabag, syncthing, a VPN and a few other features. Other boxes I have run various things, from DNS to backup MXes and a WriteFreely instance on OpenBSD.
Internally I host a ton of stuff, mostly linked to a Plex instance.
I notice I was a lot more keen on hosting a bunch of crap myself before I knew how to do it "right", and before devops, orchestration ("you mean running scripts in remote shells?"), cloud, or containers or any of that were things. And yet it all worked just fine back then—time spent fixing problems from my naïve "apt-get install" or "emerge" set-up process wasn't actually that bad, compared with the up-front cost of doing it all "right" these days. A couple lightly-customized "pet" servers were fine, in practice. Hm.
I've mostly fallen into it at my job because the alternative to me pushing dev services to SAAS offerings and maintaining the glue myself is a pile of poorly-maintained IT-provided Server 2008 R2 boxes.
4 Ubuntu 16.04 servers:
- Nginx/PHP for WordPress
- MySQL
- Redis
- Mail
Planning to expand the Nginx/PHP servers to at least two, and add load balancers. All certs are provided by an Ansible script using Let's Encrypt (yuck).
At home:
Proxmox running on two homebuilt AMD FX 8320 servers with 32GB each, with drives provided by FreeNAS on a homebuilt Supermicro server with about 10TB of usable space (on both HDDs and SSDs)
Ubuntu 16.04 Servers:
- 2x DNS
- 2x DHCP
- GitLab
- Nagios
- Grafana
- InfluxDB
- Redmine
- Reposado
- MySQL
Other:
- Sipecs
All set up via Ansible.
Next will set up a Kubernetes cluster (probably as far as I’ll get with containers).
> Resilio Sync for iPhone pictures backups and "drop box" file access
> Transmission server
> SMB share of NAS to supply OSMC boxes on every TV
> Nighthawk N7000 running dd-wrt with a 500gb flash drive attached as storage for my Amcrest wifi cameras
> Edgerouter Lite running VPN server
> Hassbian for my zwave home automation stuff
> A pi with cheap speakers that I can log into and play a phone ringing sound so my wife will look at her phone!
Appveyor
Gitea
Graylog + Elastic Search
Minecraft/Pixelmon
Nodered
ruTorrent
Taiga
Tiny Tiny RSS
Ubooquity*
WikiJS
Zulip (chat/IM)
*I hate it, but haven't found something better.
Also, kudos to those brave souls who are running Tor exit nodes!
Edit: Forgot a bunch
- Traefik (reverse proxy)
- Git Annex
- Gitea
- Drone (CI)
- Docker Registry
- Clair (security scanning for docker images)
- Selfoss (RSS reader)
- Grafana / Prometheus / Alertmanager (overkill really)
- A few custom applications...
Turris Omnia running Transmission under LXC.
Now I only host my own project: http://billion.dev.losttech.software:2095/
Also regular Windows file sharing which I use for media server and backups.
Though I'd like to expand that. Maybe a hosted GitLab.
Also, I use it to find flats when I need to.
- Mail server (OpenSMTPD)
- IMAP (Dovecot)
- CVS server for my projects.
- httpd(8) for my website.
I still need to add rspamd for spam checking. But so far, I've received just one spam e-mail.
Out of curiosity, do you genuinely prefer CVS or just haven't migrated from a historical repo?
Also NextCloud (files, contacts and calendar), few WordPress websites and Fathom for website analytics.
Unifi controller
Miniflux
CouchPotato
DSMR Reader (software that logs smart electricity meter data)
Gitea
Deluge
MySQL
PostgreSQL
Cloud Storage mirror (for Google Drive backup)
Intel NUC: Full Bitcoin node
Bitcoin lightning node
Remote (Digital Ocean): Trading Software
Various PHP websites
-- cloud (time4vps 1TB storage node): borg, calibre, AdGuard
-- home server: data drive rsyncs to an internal data drive (XFS to btrfs); the btrfs drive takes a snapshot and unmounts when not in use, then important stuff is rsynced to my VPS
--- home drives backed up with borg for encryption
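As a crontab-style sketch of that chain (paths, mountpoint, and the VPS hostname are placeholders, and the times just assume each step finishes before the next starts):

```
# /etc/cron.d/backup-chain -- all names are placeholders
55 1 * * * root mount /mnt/btrfs-backup
0  2 * * * root rsync -a --delete /srv/data/ /mnt/btrfs-backup/current/
30 2 * * * root btrfs subvolume snapshot -r /mnt/btrfs-backup/current /mnt/btrfs-backup/snap-$(date +\%F)
40 2 * * * root umount /mnt/btrfs-backup
45 2 * * * root rsync -a /srv/data/important/ vps.example.com:/backups/important/
```

Note the `\%` escape: percent signs have special meaning in crontab command fields.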
I keep looking at hosting my own mail server, but get scared off by tales of config/maintenance dramas.
syncthing
nfs server
UPnP server, connected to my media NAS
gitea server, for my personal projects
droneci, linked to my gitea server, for building websites and releases I publish
A few locally hosted services, such as DevDocs, draw.io or Asciiflow, for convenience.
- postfix/dovecot for mailing
- searx instance
- synapse for matrix
- unbound for DoT
- nginx for my blog
- gophernicus for old times sake
At home: - nextcloud
- monero full node
- unbound backup instance
- fhem for home automation
- restic for backup
All my business backups go to the same box. I have a Pi and an encrypted USB drive copying my backups from my house to my shed.
PiHole, HRCloud2, HRScan2, HRConvert2, my WordPress blog, a KB, and a few other knick-knacks. Currently working on a NoSQL share tool (for auth-less large file sharing), and then maybe this idea that's been floating around my head for a Linux update server - like WSUS for Linux.
All on a few Vultr + DigitalOcean droplets, 2 RasPis + 1 Atomic Pi, a couple of HP i5 mini desktop machines, and a Dell R610 rack server with 24 cores and 48GB of RAM (with about 36TB of assorted shucked and unshucked USB hard drives attached in a few GlusterFS / ZFS pools). I have a home-built UPS with about 1.5kWh worth of lead-acid batteries powering everything, and it's on cheap Montreal power anyway, so I only pay $0.06/kWh + $80/mo for gigabit fiber. It's a mix of stuff for work and personal use, because I'm CTO at our ~9-person startup and I enjoy tinkering with devops setups to learn what works.
All organized neatly in this type of structure: https://docs.sweeting.me/s/an-intro-to-the-opt-directory
Some examples: https://github.com/Monadical-SAS/zervice.elk https://github.com/Monadical-SAS/zervice.minecraft https://github.com/Monadical-SAS/ubuntu.autossh
Ingress is all via CloudFlare Argo tunnels or nginx + wireguard via bastion host, and it's all managed via SSH, bash, docker-compose, and supervisord right now.
It's all built on a few well-designed "LEGO block" components that I've grown to trust deeply over time: ZFS for local storage, GlusterFS for distributed storage, WireGuard for networking, Nginx & CloudFlare for ingress, Supervisord for process management, and Docker-Compose for container orchestration. It's allowed me to be able to quickly set up, test, reconfigure, backup, and teardown complex services in hours instead of days, and has allowed me to try out hundreds of different pieces of self-hosted software over the last ~8 years. It's not perfect, and who knows, maybe I'll throw it all away in favor of Kubernetes some day, but for now it works really well for me and has been surprisingly reliable given how much I poke around with stuff.
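As one concrete example of the docker-compose + supervisord pairing, a service entry can be as small as this (the service name and paths are made up, not taken from the linked repos):

```
; /etc/supervisor/conf.d/gitea.conf -- illustrative only
[program:gitea]
directory=/opt/gitea
command=docker-compose up --no-color
autorestart=true
stopsignal=INT
stopwaitsecs=30
```

Running compose in the foreground under supervisord gives you restart-on-crash and centralized logs without a heavier orchestrator.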
TODOs: find a good solution for centralized config/secrets management that's less excruciatingly painful than running Vault+Consul or using Kubernetes secrets.
* My Website
* Seafile
* FreshRSS
* RSSBridge for making rss feed for websites that don't have one
* Dokuwiki
* A Proxy
* Multiple Telegram and Reddit bots
What might be the easiest way to achieve this? Running a Kube cluster is insane for my needs; I imagine I'd be perfectly happy with a few Pis running various Docker containers. However, I'm unsure what the easiest way to manage this semi-cloud environment is.
edit: Oh yea, forgot Docker Compose existed. That may be the easiest way to manage this, though I've never used it.
1) Do you identify the reverse proxy by host or by path?
e.g. <service>.yourdomain.com or yourdomain.com/<service>
2) Do you still run everything over a VPN?
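On question 1, the two styles differ mainly in the nginx (or traefik) config shape; a hypothetical sketch of each, with made-up hostnames and ports:

```
# host-based: <service>.example.com, one server block per service
server {
    server_name gitea.example.com;
    location / { proxy_pass http://127.0.0.1:3000; }
}

# path-based: example.com/<service>, one location per service
server {
    server_name example.com;
    location /gitea/ { proxy_pass http://127.0.0.1:3000/; }
}
```

Host-based routing avoids path-rewriting problems in apps that generate absolute URLs, which is why many self-hosters prefer it despite needing a wildcard cert or per-service DNS entries.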
External services I need are directly accessible via a local reverse proxy that's publicly visible over IPv6.
For IPv4-only scenarios I proxy through a linode instance (that also hosts a few things, including my blog) which sends the traffic in over v6.
Obviously this is all fronted by a traditional firewall.
And before you ask: it's surprising how often v6 connectivity is available these days. Mobile phone providers have moved to v6 en masse, and even terrestrial internet providers are starting to get religion.
It's still not available in my workplace (surprise surprise), but other than that, much to my surprise, v6 is my primary mode of connectivity.
2) No - but I do use Cloudflare to proxy inbound traffic
- Hand-rolled Go reverse proxy with TLS from LE.
- Several Pg DBs for development.
- VPN server.
- Chisel for hosting things "from home" while running on my laptop remotely.
- Etcd
- Jenkins
- Gitea
- Pi-hole
- A few different development projects
So, mail, DNS, and a few web sites. I’ve been running something like this for more than 15 years now.
And SyncThing, https://syncthing.net/
It all started with hosting subsonic
- Ampache
- Shaarli
- Dokuwiki
- Deluge
- Hugo blog
Everything running on a cheap server from kimsufi.
* Gogs
* WordPress
* Wallabag
* Ghost
* Minio
* Email (yes, this is my primarily and only email)
* TinyTinyRSS
* NextCloud
* Meemo
* MediaWiki