
A self-hosted alternative to Ngrok

I am currently working on a SaaS app that lets end users add custom domains to their accounts. Of course I need to offer TLS encryption (HTTPS) for these domains, but because these domains are out of my control I cannot use Let’s Encrypt’s DNS challenge to verify them when requesting TLS certificates. In this case, the HTTP challenge method can be used instead.
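
With the HTTP challenge (HTTP-01), Let’s Encrypt’s validation servers fetch a token from the domain over plain HTTP on port 80, at a URL along these lines (domain and token are illustrative):

http://customdomain.example.com/.well-known/acme-challenge/<token>

This is why the app serving the domain has to be reachable from the Internet for the verification to succeed.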

This works fine with web servers that are exposed to the Internet, but the HTTP verification doesn’t work if the app is not reachable by Let’s Encrypt’s servers - for example, an app running locally on your dev machine. I was reminded of this today when I tried to simulate a production environment for my SaaS with a Kubernetes cluster running on my Mac. Because I use cert-manager with Kubernetes to manage TLS certificates, I could just use an issuer for self-signed certificates in development, but I also wanted to test locally the actual workflow of a user adding a custom domain with HTTP verification, like in production.

There are some services that let you expose an app running on your computer to the Internet, so you don’t have to deploy it to a server just to work around the HTTP verification issue. One popular service is Ngrok, which is pretty easy to use but requires a paid plan if you want to use custom domains.

Since I’m cheap (:D), I was looking for free alternatives and came across this blog post by Jacob Errington, which shows how to set up a simple alternative using Nginx as a proxy running on a server exposed to the Internet, plus an SSH reverse tunnel to forward HTTP requests to the app running locally. This does require a server, but it can be a very cheap one (surely cheaper than Ngrok) since it only has to proxy requests and nothing else. Also, I agree with Jacob that devs usually have one or more servers already.
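
The core of that approach can be sketched with a single reverse tunnel command (host and ports here are just placeholders):

ssh -N -R 3000:localhost:3000 user@your-server.example.com

Nginx on the server proxies public traffic to port 3000 on localhost, and SSH forwards whatever arrives on that port back to port 3000 on the dev machine, where the app is listening.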

The method he describes is pretty simple, so I implemented a solution inspired by it and based on Docker. Have a look at the repo, here I will briefly explain how it works.

Two containers are required to implement a tunnel similar to what Ngrok does. The first container must run on a server exposed to the Internet that acts as a proxy server. To start it, run:

docker run --name tunnel-proxy --env PORTS="80:3000,443:3001" -itd --net=host vitobotta/docker-tunnel:0.30.0 proxy

The meaning of the PORTS environment variable is explained in a moment.

On the proxy server, an Nginx instance listens to the ports that you want to expose and accepts HTTP requests. Here’s the Nginx configuration template:

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;

events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    #include /etc/nginx/conf.d/*.conf;
}

stream {
    ${TUNNELS}
}

It’s the basic configuration that comes with Nginx out of the box, apart from the stream block; this block is used for TCP load balancing, which is required for TLS passthrough so that TLS termination happens on the app’s side, not on the proxy server. It references an environment variable - TUNNELS - that is replaced at startup, when running the proxy, with one or more server blocks like this:

server {
    listen <port A>;

    proxy_pass 127.0.0.1:<port B>;
    proxy_responses 0;
}

Here port A is the port exposed to the Internet, and port B is a port that will be used by an SSH connection initiated by the dev machine to forward the requests to the app.
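
For example, with PORTS="80:3000,443:3001" as in the docker run command above, the block generated for HTTP traffic would look like this:

server {
    listen 80;

    proxy_pass 127.0.0.1:3000;
    proxy_responses 0;
}

Requests arriving on port 80 of the proxy server are handed to 127.0.0.1:3000, which is the local end of the SSH tunnel.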

As you can see from the Dockerfile,

FROM nginx:alpine

RUN apk add --no-cache bash autossh

ADD nginx.conf.template /
ADD start.sh /

RUN chmod +x /start.sh

ENV PORTS "80:3000,443:3001"
ENV PROXY_HOST "1.2.3.4"
ENV PROXY_SSH_PORT "22"
ENV PROXY_SSH_USER "user"

ENTRYPOINT ["/start.sh"]

when the container starts, it runs the start.sh script, which accepts one argument: proxy, to start the container as the proxy server, or app, to initiate the SSH connection from the dev machine to the proxy server.
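
The dispatch itself boils down to something like this (a simplified sketch, not the actual script):

#!/bin/bash

case "$1" in
  proxy)
    # generate the Nginx server blocks from PORTS and start Nginx
    ;;
  app)
    # build the -R options from PORTS and start autossh
    ;;
esac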

When the container is running in “proxy” mode, the environment variable PORTS associates each port to expose to the Internet with a port that will be used by the SSH tunnel to forward requests to the app. With the ports specified, the script generates the server blocks,

TUNNELS=""

for MAPPINGS in `echo ${PORTS} | awk -F, '{for (i=1;i<=NF;i++)print $i}'`; do
  IFS=':' read -r -a MAPPING <<< "$MAPPINGS"; unset IFS

  read -r -d '' TUNNELS <<-EOS
${TUNNELS}

server {
  listen ${MAPPING[0]};

  proxy_pass 127.0.0.1:${MAPPING[1]};
  proxy_responses 0;
}
EOS
done

then it replaces the placeholder in the Nginx config template with these server blocks, and starts Nginx:

export TUNNELS

# substitute only ${TUNNELS}, so that Nginx variables such as $remote_addr in the template are left intact
bash -c "envsubst '\${TUNNELS}' < /nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
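
If you want to sanity-check the generated configuration, something like this should work from the host running the proxy container:

docker exec tunnel-proxy nginx -t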

When the container is running in “app” mode, PORTS associates each port that the app is listening on locally with the port that the SSH tunnel will use to forward requests to it. The tunnel port here must match the one specified for the same service on the proxy side: with PORTS="80:3000,443:3001" on both sides, a request hitting port 80 on the proxy is passed by Nginx to 127.0.0.1:3000, travels back through the SSH tunnel, and reaches port 80 of the app on the dev machine. To start the container in “app” mode, run:

docker run --name tunnel-app --env PORTS="80:3000,443:3001" --env PROXY_HOST="1.2.3.4" --env PROXY_SSH_PORT="22" --env PROXY_SSH_USER="${USER}" -v "${HOME}/.ssh/id_rsa:/ssh.key" -itd vitobotta/docker-tunnel:0.30.0 app

In “app” mode, the script first finds the IP of the Docker host in a way that works on Mac/Windows/Linux:

# Docker Desktop on Mac/Windows resolves host.docker.internal to the host
DOCKER_HOST="$(getent hosts host.docker.internal | cut -d' ' -f1)"

# on Linux, fall back to the container's default gateway, which is the Docker host
if [ -z "${DOCKER_HOST}" ]; then
  DOCKER_HOST=$(ip -4 route show default | cut -d' ' -f3)
fi

then generates and runs the autossh command:

TUNNELS=" "

for MAPPINGS in `echo ${PORTS} | awk -F, '{for (i=1;i<=NF;i++)print $i}'`; do
  IFS=':' read -r -a MAPPING <<< "$MAPPINGS"; unset IFS
  TUNNELS="${TUNNELS} -R ${MAPPING[1]}:${DOCKER_HOST}:${MAPPING[0]} "
done

autossh -M 0 -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "StrictHostKeyChecking=no" -o "ServerAliveInterval=5" -o "ServerAliveCountMax 3" -i /ssh.key ${TUNNELS} ${PROXY_SSH_USER}@${PROXY_HOST} -p ${PROXY_SSH_PORT}
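
With the example PORTS value, and assuming the detected Docker host IP is 172.17.0.1 (a typical value on Linux; Docker Desktop resolves host.docker.internal to something else), the relevant part of the generated command looks like this:

-R 3000:172.17.0.1:80 -R 3001:172.17.0.1:443 user@1.2.3.4 -p 22

Each -R option makes the SSH daemon on the proxy listen on the given port (3000 or 3001) - the same ports Nginx’s proxy_pass directives point to - and forward incoming connections back through the tunnel to ports 80 and 443 of the app on the dev machine.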

autossh starts the SSH connection and monitors it, restarting it automatically when needed; with -M 0 it doesn’t use its own monitoring port and instead relies on the ServerAliveInterval/ServerAliveCountMax options, which make ssh exit when the connection is dead so that autossh can restart it.

The code

while true; do
  sleep 1 &
  wait $!
done

blocks the script so as to prevent the container from exiting; running sleep in the background and waiting on it, rather than just sleeping, lets the trap below run as soon as the container receives a stop signal. When the container is stopped, the autossh connection is terminated:

close_connection() {
  pkill -3 autossh
  exit 0
}

trap close_connection TERM
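
Once both containers are running and your domain (or a test hostname) points at the proxy server, a quick end-to-end check of the plain HTTP path could look like this (hostname and IP are the illustrative values used above):

curl -i -H "Host: mydomain.test" http://1.2.3.4/

The request should hit Nginx on the proxy, go through the SSH tunnel, and be answered by the app running on your dev machine.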

The method just described seems to work well, but please keep in mind that if someone decides to flood your proxy server with requests, this may cause problems for your home/work Internet connection too… so because of this, and for security concerns in general, it’s best to put the proxy server behind Cloudflare. If you use Cloudflare and are going to use the method described in this post with Let’s Encrypt’s HTTP verification, please read this as well. Hope this helps.

© Vito Botta