Building an OpenVPN Cluster, Zalando-Style
A Zalando DevOps Engineer describes how we did it.
Since Zalando’s earliest days, members of our technology team have been able to use OpenVPN to work anywhere, anytime. Back then (the late 2000s), we had only a single (though state-of-the-art) instance to work with, and a lot of manual maintenance to perform. Last year, after a years-long period of explosive growth, our team realized that we needed to build something scalable, fully redundant, and easier to maintain for hundreds of users. In other words, our own OpenVPN cluster!
The diagram above illustrates the structure of our new cluster, which I built using typical network models as guides. Greater reliability was the primary goal behind this design, which makes it easy to scale up and add as many VPN servers (and IP addresses) as we need. Our team decided to use six servers in order to achieve reliability and redundancy at each site. For better scaling, we put the servers in the external datacenters (DC1 + DC2) behind a load balancer. The internal servers (DC3) are not load-balanced because they run on high-availability hosts. The client config looks like this:
client
dev tun
proto udp
hand-window 10
# internal servers (DC3)
remote 10.1.1.1 1194
remote 10.1.1.2 1194
# load-balanced external servers (DC1 + DC2)
remote 1.2.3.4 1194
remote 5.6.7.8 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca trusted_chain.pem
cert user.crt
key user.key
comp-lzo
verb 3
route-method exe # Windows clients only: add routes via route.exe
route-delay 2
On the server side, the per-user config is very simple: we only assign the user's IP addresses:
vpnXX:/etc/openvpn/ccd# cat username
ifconfig-push 192.168.178.10 192.168.178.9
ifconfig-ipv6-push fd7a:6ca6:e640:8000::192.168.178.10
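With topology net30, each client occupies its own /30, which is why two addresses are pushed: the client IP and its server-side peer endpoint. For the user above, the block works out like this (just spelling out what the config implies):
# 192.168.178.8/30, the /30 carved out for this user:
#   192.168.178.8   network address
#   192.168.178.9   server-side peer  (second ifconfig-push argument)
#   192.168.178.10  client address    (first ifconfig-push argument)
#   192.168.178.11  broadcast address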
The rest of the config is similar for all the servers in the cluster:
mode server
tls-server
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA:TLS-DHE-DSS-WITH-AES-256-CBC-SHA
# some encryption hardening
push "topology net30"
topology net30
port XXXXX # choose your port
proto udp
dev vpninterface
dev-type tun
ca keys/trusted_chain.pem
cert keys/XXX.crt
key keys/XXX.key # This file should be kept secret
dh keys/XXX.pem
crl-verify keys/XXX.pem
ifconfig 192.168.178.1 192.168.178.2
script-security 2
learn-address /etc/openvpn/route_add.sh
client-disconnect /etc/openvpn/route_delete.sh
keepalive 3 10
comp-lzo
persist-key
persist-tun
status /logs/openvpn-status.log
verb 3
client-config-dir ccd
ccd-exclusive
## duplicate-cn
log-append /logs/openvpn.log
management x.x.x.x port
## routes
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DOMAIN "
push "route 8.8.8.8 255.255.255.255"
push "route X.X.X.X Y.Y.Y.Y"
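As a side note, the management line above enables OpenVPN's plaintext management interface, which is handy for inspecting connected clients at runtime (using whichever address and port you configured):
telnet x.x.x.x port   # connect to the management interface
status                # lists connected clients and their virtual addresses
quit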
Our configuration offers the same IP to the user every time, which greatly simplifies rule setting. However, a new problem arises with this cluster design: the datacenter network has to know which server the user is currently logged on to. To handle this, we've created two simple scripts so that the cluster can learn on its own where the user is logged in: route_add.sh adds a static route for the logged-in user on the VPN server, and route_delete.sh removes it after logout. Two examples:
route_add.sh
#!/bin/bash
# learn-address hook: adds host routes for the client's VPN addresses.
# $ifconfig_pool_remote_ip and $dev are provided by OpenVPN (script-security 2).
if [ -n "$ifconfig_pool_remote_ip" ]; then
    ip route add "$ifconfig_pool_remote_ip" dev "$dev"
    # IPv6 host route, matching the prefix pushed via ccd
    ip -6 route add "fd7a:6ca6:e640:8000::$ifconfig_pool_remote_ip" dev "$dev"
fi
exit 0
route_delete.sh
#!/bin/bash
# client-disconnect hook: removes the routes added at login
ip route del "$ifconfig_pool_remote_ip"
ip -6 route del "fd7a:6ca6:e640:8000::$ifconfig_pool_remote_ip"
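You can dry-run these hooks on a VPN server outside of OpenVPN by exporting the environment variables the daemon would normally set (hypothetical values below):
# simulate a login for a test client
export ifconfig_pool_remote_ip=192.168.178.10
export dev=vpninterface
/etc/openvpn/route_add.sh
ip route | grep 192.168.178.10   # the host route should now exist
/etc/openvpn/route_delete.sh     # and this removes it again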
After adding the route to the server's routing table, the only thing left to do is announce it. We use a dynamic routing protocol in the backend of our datacenter, and Quagga on every VPN server to announce the routes. Depending on your infrastructure, you can use your preferred routing protocol: Routing Information Protocol (RIP), Border Gateway Protocol (BGP), or Open Shortest Path First (OSPF).
It’s important to activate IP forwarding on the VPN server. You can do this by typing:
echo 1 > /proc/sys/net/ipv4/ip_forward
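This setting does not survive a reboot; a common way to make it persistent (our own addition here, not part of the original commands) is via sysctl:
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1   # needed as well, since we route IPv6 too

# apply without rebooting
sysctl -p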
The relevant part of our Quagga OSPF configuration (ospfd.conf) looks like this:
router ospf
redistribute kernel
redistribute connected # this is the important line
network X.X.X.X area 0.0.0.0
area 0.0.0.0 authentication message-digest
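To check that the host routes are actually picked up and announced, you can query Quagga's vtysh shell (a verification step we suggest; adjust the prefix to a logged-in user's IP):
# show OSPF adjacencies to the datacenter routers
vtysh -c "show ip ospf neighbor"
# confirm a client's host route is known to the routing daemon
vtysh -c "show ip route 192.168.178.10/32"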
Users can log on to and off of any server, and their IP address will be announced to the entire network.
For security and access control, we chose Firewall Builder, an open-source iptables manager that allows us to build a ruleset and install it on all servers in the cluster simultaneously. Since improving our user management system earlier this year, we've reduced our iptables complexity from hundreds of rules (around 10,000 lines) to about 50 rules, with a much more readable interface.
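Conceptually, the generated ruleset boils down to per-group forwarding policies on the VPN interface. A minimal hand-written sketch of the idea (hypothetical networks; Firewall Builder generates the real rules):
# allow one user group's VPN range to reach a backend network, drop everything else
iptables -A FORWARD -i vpninterface -s 192.168.178.0/24 -d 10.20.0.0/16 -j ACCEPT
iptables -A FORWARD -i vpninterface -j DROP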
With this setup, we support about 800 users (up to 150 in parallel) and haven't faced any performance issues at any time, from anywhere!
We're hiring! Do you like working in an ever-evolving organization such as Zalando? Consider joining our teams as a Software Engineer!