FreeBSD: Simple Hosting
The following blog post is an attempt to give an overview (or “Getting Started” guide) of hosting network services with FreeBSD. The goal of this post is to provide an easy entry point. The information shown in this guide is not sufficient for any serious production environment or mission-critical system. This post mainly targets hobbyists who’d like to start using FreeBSD for their personal hosting purposes.
The focus lies on setting up a simple yet scalable reverse proxy which terminates TLS connections in one central location.
Prerequisites
The scenario covered in this guide assumes a single physical host running FreeBSD and a working internet connection to that host.
Furthermore, this post assumes that the reader is familiar with:
- FreeBSD as an OS
- Working with jails
- General network knowledge (IP addresses, netmasks, routing, …)
The following is outside the scope of this post:
- Security related aspects including firewalling
- Load balancing
- Failover
- Proper use & configuration of HAproxy or NGINX
This is not a hand-holding step-by-step guide. Do not follow this guide blindly. This is meant to serve as an entry point or reference. Do your own research and read the documentation properly. No warranty is provided whatsoever. Proceed at your own risk.
Scenario
The following scenario is what we’re after: a single HAproxy instance acting as the central entry point for all incoming connections, terminating TLS and reverse proxying to the services behind it.
This is a common scenario as it covers these situations:
- A single FreeBSD server somewhere at home or at your SME office
- A rented FreeBSD server (e.g. a VPS or a single physical host)
- A home-grade lab/research environment
SSL/TLS termination
Most services you’re going to host on your server will most likely need to communicate over TLS-encrypted connections (e.g. HTTPS, FTPS, …). We’re going to use HAproxy as both a reverse proxy and TLS terminator.
Using a centralized TLS terminator has a couple of benefits:
- One central location to handle all TLS certificate workflows (acquisition, renewal and revocation).
- It’s easier to scale up in the future, e.g. by moving to a dedicated host which runs only HAproxy on hardware purposefully chosen for this task.
- Other dedicated hosts (physical machines) can be served by the same proxy.
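Before diving into the configuration: if HAproxy isn’t present yet, it can be installed and enabled with a few commands. A minimal sketch using binary packages, run inside the jail (or on the host) that will act as the proxy:
# Install HAproxy from packages
pkg install haproxy
# Enable it in rc.conf and start it
sysrc haproxy_enable=YES
service haproxy start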
Setting up HAproxy as a TLS terminator is fairly easy. We start by defining a frontend listening on port 443 for incoming TCP connection requests:
frontend http_in
    bind *:443
    mode tcp

    # Redirect HTTP requests to HTTPS
    redirect scheme https if !{ ssl_fc }

    # Hand everything else off to the TLS terminator
    default_backend tls_terminator
Here, we’re also redirecting any HTTP requests to HTTPS. Note that this is purely a scheme upgrade: we’re not actually listening on port 80 (the standard port for HTTP connections) in this frontend. This catches cases where clients attempt to open a plain HTTP connection on port 443.
The reason for having a separate frontend is scalability: In the future we might run into situations where we’d like to forward TLS connections to a host without actually terminating the TLS connection.
The tls_terminator backend is straightforward as well:
backend tls_terminator
    mode tcp
    server localsslterminate 127.0.0.1:9999 check
We’re simply forwarding all incoming connections to 127.0.0.1:9999 without any further processing. This gives us the ability to create an HTTP frontend which listens for plain HTTP connections on port 80 as well as for TLS connections on port 9999 that still need to be terminated:
frontend http
    bind *:80
    bind 127.0.0.1:9999 ssl crt-list /usr/local/etc/haproxy/certs_list
    mode http

    # Populate X-FORWARDED-FOR etc. properly
    option forwardfor

    # Redirect HTTP requests to HTTPS
    redirect scheme https if !{ ssl_fc }

    # Upgrade all insecure requests
    http-response set-header Content-Security-Policy upgrade-insecure-requests

    # Certbot
    acl certbot_challenge path_beg /.well-known/acme-challenge/
    use_backend certbot if certbot_challenge

    # Routing
    use_backend engineer.insane.blog if { hdr(host) -i blog.insane.engineer }
    use_backend ... # Add your other backends here
This is where the actual magic happens:
- We’re accepting plain HTTP connections on port 80 and TLS connections on 127.0.0.1:9999. crt-list points to a file on the filesystem containing a list of all of our certificates (a short example follows below); we’ll automate the creation & maintenance of this file using some shell scripts later on.
- All HTTP requests are redirected to HTTPS. This is not strictly necessary, but it’s what I do on my setups as I have no need to serve plain HTTP connections.
- The certbot backend will be used to forward ACME challenges to certbot for certificate renewal (in the case of Let’s Encrypt).
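For reference, the crt-list file itself is nothing fancy: one certificate path per line. A minimal sketch of what ours might look like (the second entry is a purely hypothetical extra domain):
/usr/local/etc/haproxy/certs/blog.insane.engineer.pem
/usr/local/etc/haproxy/certs/www.insane.engineer.pem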
Other backends and redirects can be added to this frontend like you’d do in any scenario involving HAproxy.
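As an illustration, the engineer.insane.blog backend referenced above could look something like the following sketch. The jail’s address and port are assumptions and depend entirely on your own setup:
backend engineer.insane.blog
    mode http
    # Jail running the actual blog; it receives plain HTTP since TLS was already terminated
    server blogjail 192.168.7.10:80 check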
Certificates
On this particular setup, I’m exclusively using Let’s Encrypt’s certificate services. The specific setup/configuration I’m using here is most likely neither the only nor the most correct one. The important thing to me is that certbot can acquire, renew and revoke certificates without taking any of the services behind this proxy offline in the process.
certbot is used to handle certificate acquisition and renewal. We’re simply forwarding all requests where the URL path begins with /.well-known/acme-challenge/ to the certbot backend. The backend is straightforward again:
backend certbot
    server localhost localhost:8542 check
All connections are simply forwarded to localhost:8542. What is listening there? An instance of NGINX running in the same jail as HAproxy itself:
user www;
worker_processes 1;

error_log /var/log/nginx_error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx_access.log;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    server {
        listen 8542;

        root /usr/local/www/certbot_challenge;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }
    }
}
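Two small housekeeping steps are implied here: the webroot directory has to exist, and NGINX has to be enabled and started. A minimal sketch, assuming NGINX was installed from packages into the same jail as HAproxy:
# Create the webroot used for ACME challenges (certbot will place challenge files here)
mkdir -p /usr/local/www/certbot_challenge
# Enable NGINX in rc.conf and start it
sysrc nginx_enable=YES
service nginx start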
Automation
There are typically three workflows when dealing with TLS certificates:
- Acquiring a new certificate
- Renewing all existing certificates
- Revoking (and removing) an existing certificate
For each of these tasks, I have created a script which simply takes the corresponding domain as a parameter. Each of these scripts invokes the corresponding certbot command, manipulates the certs as needed for HAproxy and then reloads HAproxy.
ssl_acquire_cert.sh:
#!/bin/sh
# $1 is domain (e.g. blog.insane.engineer)
# Configuration
BIN_CERTBOT=/usr/local/bin/certbot
BIN_CAT=/bin/cat
PATH_WEBROOT=/usr/local/www/certbot_challenge
PATH_CERTS=/usr/local/etc/letsencrypt/live
PATH_CERT_DESTINATION=/usr/local/etc/haproxy/certs
PATH_HAPROXY_CERTSLIST=/usr/local/etc/haproxy/certs_list
####################################
# Don't change anything below this #
####################################
printUsage() {
    echo "Usage: $0 <domain>"
}

# Sanity checks
if [ "$#" != 1 ] ; then
    printUsage
    exit 1
fi
# Acquire the new certificate
echo "Acquiring new cert..."
if ! $BIN_CERTBOT certonly --webroot -w "$PATH_WEBROOT" -d "$1"; then
    echo "Acquiring new cert failed."
    exit 1
fi

# Concatenate full chain and private key into a single PEM as expected by HAproxy
echo "Concatenating cert..."
$BIN_CAT "$PATH_CERTS/$1/fullchain.pem" "$PATH_CERTS/$1/privkey.pem" > "$PATH_CERT_DESTINATION/$1.pem"

# Add to haproxy certs list
echo "Adding cert to HAproxy certs list..."
echo "$PATH_CERT_DESTINATION/$1.pem" >> "$PATH_HAPROXY_CERTSLIST"
# Reload haproxy
echo "Reloading HAproxy..."
service haproxy reload
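Acquiring a certificate for a new domain then boils down to a single call (using this blog’s domain as an example):
./ssl_acquire_cert.sh blog.insane.engineer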
ssl_revoke_cert.sh:
#!/bin/sh
# $1 is domain (e.g. blog.insane.engineer)
# Configuration
BIN_RM=/bin/rm
BIN_SED=/usr/bin/sed
BIN_CERTBOT=/usr/local/bin/certbot
PATH_WEBROOT=/usr/local/www/certbot_challenge
PATH_CERTS=/usr/local/etc/letsencrypt/live
PATH_CERT_DESTINATION=/usr/local/etc/haproxy/certs
PATH_HAPROXY_CERTSLIST=/usr/local/etc/haproxy/certs_list
####################################
# Don't change anything below this #
####################################
printUsage() {
    echo "Usage: $0 <domain>"
}

# Sanity checks
if [ "$#" != 1 ] ; then
    printUsage
    exit 1
fi
# Revoke cert
echo "Revoking cert..."
$BIN_CERTBOT revoke --webroot -w "$PATH_WEBROOT" --cert-path "$PATH_CERTS/$1/cert.pem" --key-path "$PATH_CERTS/$1/privkey.pem"

# Remove from $PATH_CERT_DESTINATION
echo "Cleaning up stage 1..."
$BIN_RM "$PATH_CERT_DESTINATION/$1.pem"

# Remove from $PATH_HAPROXY_CERTSLIST
echo "Cleaning up stage 2..."
FILENAME="$1.pem"
FILENAME_ESCAPED=$(echo "$FILENAME" | $BIN_SED "s#\.#\\\.#g") # Escape dots
$BIN_SED -i '' "s#$PATH_CERT_DESTINATION/$FILENAME_ESCAPED##g" "$PATH_HAPROXY_CERTSLIST" # Remove the path (FreeBSD's sed expects a backup suffix after -i; an empty one means "no backup")
$BIN_SED -i '' '/^[[:space:]]*$/d' "$PATH_HAPROXY_CERTSLIST" # Remove all empty lines. Deleting the line directly with the command above doesn't seem to work; when replacing we end up with an empty line instead
# Reload haproxy
echo "Reloading HAproxy..."
service haproxy reload
ssl_renew_certs.sh:
#!/bin/sh
# Configuration
BIN_CERTBOT=/usr/local/bin/certbot
PATH_WEBROOT=/usr/local/www/certbot_challenge
####################################
# Don't change anything below this #
####################################
# Renew certs
echo "Renewing certs"
$BIN_CERTBOT renew --webroot -w $PATH_WEBROOT --renew-hook "/root/scripts/ssl_renew_hook.sh"
# Reload haproxy
echo "Reloading HAproxy..."
service haproxy reload
The renewal script is a bit “special” in that it supplies certbot with a hook that gets invoked for every certificate that actually gets renewed. The corresponding ssl_renew_hook.sh script looks like this:
#!/bin/sh
###
# This is a post hook script for letsencrypt cert renewal.
# This script will concatenate the private key with the newly created
# full chain.
###
# Configuration
BIN_CAT=/bin/cat
PATH_CERT_DESTINATION=/usr/local/etc/haproxy/certs
####################################
# Don't change anything below this #
####################################
# Determine file name
PEM_NAME=$(basename "$RENEWED_LINEAGE")
PEM_NAME="$PEM_NAME.pem"

# Concatenate
echo "Concatenating cert..."
$BIN_CAT "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem" > "$PATH_CERT_DESTINATION/$PEM_NAME"
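Renewal lends itself to being scheduled via cron(8). A minimal sketch of a root crontab entry, assuming the scripts live in /root/scripts and a weekly run is frequent enough for you:
# Attempt certificate renewal every Monday at 03:00
0 3 * * 1 /root/scripts/ssl_renew_certs.sh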
SNI-based TLS termination
Sometimes we’d like to reverse proxy a host which handles its own TLS termination. The common approach is to simply not route those connections through the previously set up HAproxy instance at all. However, the configuration outlined in this post has been purposefully designed to allow for this scenario. After all, we want to have ONE single entry point for incoming external connections: our HAproxy instance.
Assuming that the corresponding URL would be foo.insane.engineer, we can extend our http_in frontend like this:
frontend http_in
    bind *:443
    mode tcp

    # Redirect HTTP requests to HTTPS
    redirect scheme https if !{ ssl_fc }

    # Inspect SNI to selectively terminate TLS
    tcp-request inspect-delay 2s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend engineer.insane.foo if { req.ssl_sni -m dom foo.insane.engineer }

    # Hand everything else off to the TLS terminator
    default_backend tls_terminator
Here, we’re using SNI inspection to check the hostname of the connection. If it matches foo.insane.engineer, we immediately invoke the corresponding engineer.insane.foo backend:
backend engineer.insane.foo
    mode tcp
    server pve01n02 192.168.7.15:8006 check
After these changes, every connection to foo.insane.engineer will be forwarded to 192.168.7.15:8006 without terminating the TLS connection. Therefore, host 192.168.7.15 can implement its own TLS termination. Note that the backend explicitly sets the mode to tcp as we did not terminate the TLS connection (mode http will not work).
Summary
At this point you should have a working jail running HAproxy for reverse proxying & TLS termination, as well as an NGINX instance for transparent certificate handling. From here, you can spawn any number of jails/VMs hosting your actual web services and route them through HAproxy without having to deal with TLS certificates in any of them, as they receive plain HTTP connections from the HAproxy reverse proxy jail. Furthermore, hosts which require or prefer to terminate the TLS connection themselves can be proxied through the same HAproxy instance.
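If you need a starting point for such a service jail, a minimal /etc/jail.conf entry could look roughly like the sketch below. The jail name, path and IP address are assumptions; adapt them to your own network layout:
blogjail {
    host.hostname = "blogjail";
    path = "/usr/local/jails/blogjail";
    # Internal address only reachable by the proxy (hypothetical)
    ip4.addr = "192.168.7.10";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}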
Good luck.