Technological freedom

you can skip the intro and get right to the solution

I’m a big fan of self-hosting stuff, and have posted about it before. I run over 20 services from a server at home, and it’s super handy. Here’s a sample list:

And over a dozen more! I used to manage all of these “by hand”, and back then I could only bear to run a couple of them. But with the advent of Docker, Docker Compose and sites like linuxserver.io, hosting stuff is borderline trivial these days, and that has taken me from a few services to almost two dozen.

I run these on a fairly low-power, always-on CPU in a custom-built mini-ITX server, in a case with room for 8 drives. Managing it comes down to running apt upgrade once a week, and deleting some files that I er… download 😅, because the storage fills up every 3-4 months. It’s boring these days, and that’s fine by me. A lot of my “devops” knowledge comes from trying to automate more of it from time to time, and maybe one day I’ll migrate it all to (gasp) Kubernetes. Maybe.
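That routine is simple enough to sketch as a script. This is a hypothetical version, with an illustrative mount point and threshold rather than my actual setup:

```shell
#!/bin/sh
# Hypothetical weekly maintenance sketch - mount point and
# threshold are illustrative, not my actual setup.
set -eu

# check_disk MOUNT THRESHOLD: warn when usage crosses the threshold
check_disk() {
  usage=$(df -P "$1" | awk 'NR==2 { sub("%", "", $5); print $5 }')
  if [ "$usage" -ge "$2" ]; then
    echo "WARN: $1 is at ${usage}% - time to prune those downloads"
  else
    echo "OK: $1 is at ${usage}%"
  fi
}

# Weekly package upgrades, then a storage check
# apt update && apt upgrade -y   # uncomment on the actual server (needs root)
check_disk / 90
```

A weekly cron entry takes care of running it; the apt line is commented out here since it needs root.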

Freedom but, like, not a free-for-all

All of these services sit behind a single Nginx instance serving as a proxy, and they were all visible on the internets and accessible from everywhere. All of them have logins, of course, but in the back of my head it always felt kinda iffy to have them reachable from anywhere. It’s handy, but it also makes you vulnerable to issues these projects might have. And it’s too much of a pain to keep track of CVEs for 20 or so projects.

The solution could be to only allow access to them from my home network, which is as simple as adding:

allow 192.168.1.0/24;
deny all;

to every Nginx server I have configured. And, to be honest, it’s very rare to need to access 95% of my services from outside of my home network. But those 5% (like music streaming when I’m out and about on 4G, or committing code from a coffee shop) make a one-size-fits-all approach impossible, and I like to have those.

So the solution, at least for me, was to use a VPN. And that led me down a deep rabbit hole, and made me think that maybe security is so bad these days precisely because securing networks is still so clunky.

I already use a VPN, so this should be simple

I already use a VPN for when I’m out and about and log in to public Wi-Fi services, and so should you because they’re a security nightmare. But re-using it for my NAS proved to be beyond my abilities.

Here’s a summary of my network with the VPN I use in public:

LAN plus WG setup

I run a pretty standard LAN with a bridged modem from my ISP forwarding all traffic to an OPNSense router box, on which I define port-forwarding rules. The NAS is behind that router, along with every device at my house.

To run my safe-to-surf-outside-on-public-Wi-Fi VPN, I use wg-access-server on a VPS I rent from Vultr, which also runs some other stuff that is not relevant to this post.

This works out fine, since it’s based on WireGuard, the new VPN solution that is much easier to set up than OpenVPN and the like, and there are Android and macOS apps available. I just have to log in to the server, add a new device, point the app at a QR code or download a config file, and that’s it. I’ve been using it for around 2 years now, and it works fine.
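For reference, the config file such a client downloads is tiny. Here is a hypothetical wg0.conf with placeholder keys, addresses and endpoint, not my real one:

```ini
# Hypothetical WireGuard client config (wg0.conf) - all values are placeholders
[Interface]
PrivateKey = <client-private-key>
Address = 10.44.0.2/24
DNS = 10.44.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0    # tunnel all traffic - what you want on public Wi-Fi
PersistentKeepalive = 25
```

The QR code the mobile apps scan encodes exactly this file.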

So it should be easy to just extend the VPN to my NAS and protect everything I’m running on it like that, right? Well, that was the plan. I would install WireGuard on the NAS, add allow 10.44.0.0/24; to my Nginx config, and all would be good - right? Well, it didn’t work! I’m not sure if it was my inability to add proper firewall rules on OPNSense, but try as I might (and I did try, for 2 days) I couldn’t get it to work… 😑

So OK, new plan - I’ll keep the VPN setup on the Vultr server I use now for out-and-about surfing, and just use something else for my personal NAS services. Sounds convoluted, but I already have something that works for one use case, I just need something for another use case. I went shopping for another VPN, and my requirements were a bit steep:

  • OPNSense friendly, ideally in the form of a plugin I could configure on its UI
  • Able to work on multiple devices, with a minimal set consisting of Linux, macOS and Android
  • Open source, so that it’s been vetted by the community somewhat
  • Ideally free, affordable also OK

By the way, having a smartphone with mobile data is great for testing this stuff. I tethered mine to my laptop to check pings and service access, and it worked really well.

The candidates

OpenVPN

OpenVPN has been around since 2001, and it’s widely used. However, it is notoriously difficult to work with, perhaps as a result of its age. Compared to WireGuard’s ~4,000 lines of code, OpenVPN sports ~70,000, which usually means more potential for bugs and slower iteration on new releases. OpenVPN is also not as fast as WG on the same hardware, and it supports some outdated crypto that WG, being much more recent, refuses to work with.

For what it’s worth, I used to run OpenVPN as my out-and-about VPN server before switching to WG. The mobile apps (Android, iOS) are OK, but WG’s speed and ease of use won me over.

OPNSense can act as an OpenVPN server, but this too proved too complex to set up - nothing worked properly, and I couldn’t ping my services from a 4G connection.

Other possible OPNSense VPN solutions

I ran through all options supported by OPNSense:

  • IPSec is supported out-of-the-box, but that seemed like an even bigger hassle to work with
  • L2TP and PPTP are legacy technologies, and not recommended
  • stunnel doesn’t have a macOS app
  • tinc’s documentation was scary
  • ZeroTier showed promise, but I couldn’t get it to work either

So by this point I was a few days into utter frustration at my inability to get any of these to work, which felt off, since I’ve been working with servers and networks since the early 2000s. I vented on Twitter, and I’m glad I did, because the solution came from a reply to that tweet by the great Ben Pickles.

The solution was Tailscale

I had heard of Tailscale before, mostly as a replacement for Hamachi, which I remember using to play LAN-only games like Neverwinter Nights online with friends! Since there was no plugin for OPNSense, I thought it would be a no-go, but searching for it in their docs revealed this page that gave me all I needed to get started, and it was actually straightforward from that point on.

Step by step

Without OPNSense

If you don’t use OPNSense, you can skip all the bits below that mention it: install Tailscale on your NAS only and point the .xyz records (described further down) at it, which should just work since Tailscale punches holes through NAT. I just wanted to incorporate it into my router properly.

With OPNSense

First step is to create a Tailscale account. Free accounts can only have one user, but that’s fine for me - I use this “services” VPN all by myself, and our household’s “out-and-about” VPN for mobile connections has no user limitations.

As mentioned above, the docs page gave me what I needed to start. I logged in to my OPNSense box, and installed Tailscale with:

# opnsense-code ports
# cd /usr/ports/security/tailscale
# make install
# service tailscaled enable
# service tailscaled start

This took a while, since my router is a low-power x86 box from PC Engines. But once it’s installed, started and enabled, you can run:

# tailscale up

which will display a URL to associate this new device with your Tailscale account. Once done, the OPNSense box got a Tailscale IP in the 100.xxx.xxx.xxx range.

Then, you can access the OPNSense UI, and in the “Interfaces > Assignments” page there should be a new interface in the “New interface:” dropdown. Click the + button to add it, and fill in the details:

Adding the Tailscale interface

Now you can create new rules to forward certain protocols from your router’s Tailscale IP. Here are mine as an example:

Firewall rules

As you can see, all services except HTTP and HTTPS are still allowed from the internet. These include:

  • SSH, since it’s pretty secure anyway and I don’t want to get locked out for not being on the VPN
  • Telnet, since it’s isolated to my Amiga BBS and it’s supposed to be public anyway1
  • Zeronet, Syncthing and BitTorrent ports, since they need to be open for these services to work at all, or to work more efficiently

It’s really the web apps I want secured that get conditional access. You probably noticed that there’s still WAN access to HTTP and HTTPS in the rules above; that’s because there are levels of access for the web services in my network. They are:

  • clear (WAN interface) webapps, accessible from everywhere, like Airsonic to stream music on 4G out and about
  • home (LAN interface) webapps, that are accessible only from my house
  • VPN (Tailscale interface) webapps, accessible from my house and from devices on my Tailscale VPN

I could make all services either home or VPN, since there’s a Tailscale app for Android, but being on a VPN all the time places a heavier burden on a phone’s battery, so a few webapps are still unrestricted. There’s also Syncthing, for example, which is in the clear so I can sync folders with friends who aren’t on my VPN.

So since all of this traffic is still allowed through, the final piece of the puzzle is to configure my Nginx servers to filter by IP.

Here’s an example of a clear server:

server {
  server_name airsonic.qoob.domain.tld;

  listen 443 ssl http2;
  add_header Strict-Transport-Security "max-age=31536000";

  # "listen 443 ssl" already enables TLS; the deprecated "ssl on;" is not needed
  ssl_certificate /etc/letsencrypt/live/qoob.domain.tld/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/qoob.domain.tld/privkey.pem;

  ssl_protocols  TLSv1.2 TLSv1.3;
  ssl_ciphers HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers on;
  keepalive_timeout    60;
  ssl_session_cache    shared:SSL:10m;
  ssl_session_timeout  10m;

  location / {
    proxy_pass http://localhost:10040;

    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_set_header        Accept-Encoding    "";
    proxy_set_header        Host               $host;
    proxy_set_header        X-Real-IP          $remote_addr;
    proxy_set_header        X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto  $scheme;
    add_header              Front-End-Https    on;
    add_header              Permissions-Policy interest-cohort=();
    proxy_redirect     off;
  }
}

Pretty straightforward setup - the Docker container listens on port 10040 (an arbitrary number I chose) and Nginx proxies requests to it, with no filtering. I have an SSL certificate from Let’s Encrypt, which is super handy to avoid naggy browser prompts, and also because it’s 2021 and they’re free. I make a donation every year and so should you.
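As an aside, binding the container’s port to localhost only is a nice belt-and-braces touch: the app is then never reachable except through Nginx. A hypothetical Compose service along those lines (image name and ports are illustrative, not my actual file):

```yaml
# Hypothetical docker-compose.yml snippet - image and ports are illustrative
services:
  airsonic:
    image: lscr.io/linuxserver/airsonic-advanced
    ports:
      - "127.0.0.1:10040:4040"   # host port 10040, only reachable from localhost
    restart: unless-stopped
```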

For webapps in the home category, we have:

server {
  server_name gitea.qoob.domain.tld;

  listen 443 ssl http2;
  add_header Strict-Transport-Security "max-age=31536000";

  ssl_certificate /etc/letsencrypt/live/qoob.domain.tld/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/qoob.domain.tld/privkey.pem;

  ssl_protocols  TLSv1.2 TLSv1.3;
  ssl_ciphers HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers on;
  keepalive_timeout    60;
  ssl_session_cache    shared:SSL:10m;
  ssl_session_timeout  10m;

  allow 192.168.1.0/24;
  allow 127.0.0.1;
  deny all;

  client_max_body_size 128M;

  proxy_connect_timeout       600;
  proxy_send_timeout          600;
  proxy_read_timeout          600;
  send_timeout                600;

  location / {
    proxy_pass http://localhost:10080;

    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_set_header        Accept-Encoding    "";
    proxy_set_header        Host               $host;
    proxy_set_header        X-Real-IP          $remote_addr;
    proxy_set_header        X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto  $scheme;
    add_header              Front-End-Https    on;
    add_header              Permissions-Policy interest-cohort=();
    proxy_redirect     off;
  }
}

As you can see, it’s virtually identical except for the filtering block:

allow 192.168.1.0/24;
allow 127.0.0.1;
deny all;

This allows all LAN traffic to access it, as well as localhost for some edge-case setups that involve websockets. All other IPs trying to access these domain names (which are still resolvable online) will get:

Nginx says no

Finally, for webapps accessible only from the VPN, we have:

server {
  server_name bookstack.qoob.domain.xyz;

  listen 443 ssl http2;
  add_header Strict-Transport-Security "max-age=31536000";

  ssl_certificate /etc/letsencrypt/live/qoob.domain.xyz/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/qoob.domain.xyz/privkey.pem;

  ssl_protocols  TLSv1.2 TLSv1.3;
  ssl_ciphers HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers on;
  keepalive_timeout    60;
  ssl_session_cache    shared:SSL:10m;
  ssl_session_timeout  10m;

  location / {
    proxy_pass http://localhost:6875;

    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_set_header        Accept-Encoding    "";
    proxy_set_header        Host               $host;
    proxy_set_header        X-Real-IP          $remote_addr;
    proxy_set_header        X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto  $scheme;
    add_header              Front-End-Https    on;
    add_header              Permissions-Policy interest-cohort=();
    proxy_redirect     off;
  }
}

Pro-tip: you probably noticed that the proxy and SSL blocks are very similar across all servers. To avoid typing these over and over, you can use Nginx includes. So you could save this:

proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_set_header        Accept-Encoding    "";
proxy_set_header        Host               $host;
proxy_set_header        X-Real-IP          $remote_addr;
proxy_set_header        X-Forwarded-For    $proxy_add_x_forwarded_for;
proxy_set_header        X-Forwarded-Proto  $scheme;
add_header              Front-End-Https    on;
add_header              Permissions-Policy interest-cohort=();
proxy_redirect     off;

to /etc/nginx/includes/proxy and then replace the config above with:

server {
  server_name bookstack.qoob.domain.xyz;

  listen 443 ssl http2;
  add_header Strict-Transport-Security "max-age=31536000";

  ssl_certificate /etc/letsencrypt/live/qoob.domain.xyz/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/qoob.domain.xyz/privkey.pem;

  ssl_protocols  TLSv1.2 TLSv1.3;
  ssl_ciphers HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers on;
  keepalive_timeout    60;
  ssl_session_cache    shared:SSL:10m;
  ssl_session_timeout  10m;

  location / {
    proxy_pass http://localhost:6875;

    include includes/proxy; # <<< here
  }
}

Do the same for the repeated SSL blocks and the IP allow blocks. It saves typing, and you can change one file to update every server that includes it.
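For instance, the home filtering block could live in a hypothetical /etc/nginx/includes/lan_only file (the name is mine, pick whatever you like):

```nginx
# /etc/nginx/includes/lan_only - allow LAN and localhost, deny everyone else
allow 192.168.1.0/24;
allow 127.0.0.1;
deny all;
```

Each home server block then just gains an include includes/lan_only; line.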

Anyway, notice how there’s no IP filtering in the server configs of VPN-only services - that’s because the only IPs reaching these services come from inside the VPN. To achieve this, I registered a .xyz counterpart to my usual domain name, and on that domain all the A records are set to Tailscale IPs.
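In zone-file terms it ends up looking something like this - a hypothetical sketch, with a placeholder Tailscale IP rather than my real one:

```zone
; Hypothetical zone records for the .xyz domain - the IP is a placeholder
bookstack.qoob   IN  A  100.64.0.5   ; the NAS's Tailscale IP
gitea.qoob       IN  A  100.64.0.5
```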

XYZ domain records

Why do this? Surely from inside the VPN you can still access the normal .com domain, and you could add 100.0.0.0/8 IP clearance to the home webapps. Well, adding 100.0.0.0/8 clears all Tailscale IPs and that doesn’t sound right. I could still add all of my assigned Tailscale IPs one by one to Nginx to clear, but that sounds laborious (even with an include file) and I’m lazy.
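For the record, Tailscale actually assigns addresses from the 100.64.0.0/10 CGNAT block - 100.64.x.x through 100.127.x.x, much narrower than the whole of 100.0.0.0/8. A small shell check makes the boundary concrete:

```shell
# is_tailscale_ip IP: succeeds if IP is inside 100.64.0.0/10,
# the CGNAT block Tailscale assigns addresses from.
is_tailscale_ip() {
  IFS=. read -r o1 o2 _rest <<EOF
$1
EOF
  # a /10 on 100.64.0.0 fixes the first octet and the top two bits
  # of the second, so the second octet ranges from 64 to 127
  [ "$o1" -eq 100 ] && [ "$o2" -ge 64 ] && [ "$o2" -le 127 ]
}

is_tailscale_ip 100.101.102.103 && echo "100.101.102.103 is a Tailscale IP"
is_tailscale_ip 100.1.2.3 || echo "100.1.2.3 is in 100.0.0.0/8 but not Tailscale's range"
```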

Also, even if you’re tethering your laptop on 4G and connected to Tailscale, traffic doesn’t go 100% through Tailscale unless you have an exit node, and that sounded like more of a hassle to set up and get going properly. This way, all .xyz services resolve to Tailscale IPs, and your operating system knows that to reach them it has to use Tailscale as a gateway. It’s good for performance too, since you don’t have to route all traffic through Tailscale. I call that a win-win, and another domain cost me less than 20€/year and adds to my collection of domains, or as I call them, my internet pokemans. 😜

The only downside is that some services I use don’t cope well with loading from a different domain name (like Bookstack), so I had to move them 100% into the vpn category, which is no biggie. Also, I have two sets of cookies for the services on .com and .xyz, so sometimes I have to log in again when I move from one to the other - again, no biggie.

You could also set up your own internal domain with dnsmasq or something similar, but then you wouldn’t easily get an SSL certificate to go with it - Let’s Encrypt’s usual HTTP challenge needs to reach your server to issue one (the DNS challenge is an alternative, but it’s more involved). So a better approach was to use a real .xyz domain: point it at a normal IP first, get the certificate, then switch the records over to a Tailscale IP.

Further work

As mentioned, Tailscale supports exit nodes for tunneling all your traffic (tailscale up --advertise-exit-node), so check that out if that’s your bag. There’s also the concept of subnet routers (formerly relay nodes) to share whole subnets (tailscale up --advertise-routes=…), but when I tried the option on OPNSense it warned that the feature is only mature on Linux, not FreeBSD, so I didn’t dwell on it. And with OPNSense already forwarding traffic, there’s not much use for it in my case - but you might find it interesting.

Also - do you know Tailscale’s secret for all this ease of use? It uses WireGuard! So I know for sure my dream setup could be totally self-hosted, but I’ll leave that fight for another day…

Conclusion

So that’s it - I finally have a setup I’m happy with in terms of security. Whether at home or out and about, I can still use all of my services, plus I’m certain no one else can.

Moral of the story: modern VPN solutions are complex and hard to deploy, Tailscale is a godsend, and venting on Twitter can sometimes point you in the right direction. Thanks again, Ben! 🙌


Feel free to reply with comments to this tweet


  1. Down for maintenance at the moment [return]