I don’t think it is necessary to explain what Varnish Cache is and how it affects a site’s load speed. At least not in this post. If you are here, I assume you want to know how to set up SSL in front of Varnish.
In fact, the developer of Varnish considers built-in SSL support a bad idea and has declined to implement it. Still, there are several ways to tackle the issue.
Option 1
First, figure out whether another service can terminate SSL for you, so that traffic reaches the server running Varnish over plain HTTP on its default port.
Example: DigitalOcean offers fully managed SSL certificates on its Load Balancers. In other words, you can create a Load Balancer and configure SSL on it.
Then create a droplet on the internal network (with no access from outside), set up the environment (Varnish and the rest of the stack), and attach the droplet to the Load Balancer.
But before you start, check whether your hosting provider (or a service such as Cloudflare) already offers a solution like this.
Option 2
Let’s imagine we need to set up an environment for a monolithic API built on the Laravel framework, and we have our own VPS with root access.
The process looks like this:
- Nginx listens on port 443, serves static assets, and proxies all other requests to Varnish on port 6081.
- Varnish checks its cache; on a miss it proxies the request to the backend (Nginx on port 81; why Nginx and not PHP directly, I explain below), gets the result, caches it, and returns it to Nginx.
- Nginx on port 81 handles the request and passes it to PHP-FPM on port 9000 (or a socket).
- PHP-FPM launches Laravel… and from there it’s familiar territory.
So the scheme in short:
Nginx: 443 -> Varnish: 6081 -> Nginx: 81 -> PHP: 9000
Why doesn’t Varnish talk to PHP directly? Because PHP-FPM speaks FastCGI rather than HTTP, so it does not understand Varnish’s requests, and you will most likely get a 503 error.
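To make that concrete: PHP-FPM expects the binary FastCGI protocol, while Varnish sends plain-text HTTP. Here is a small Python sketch of the 8-byte FastCGI record header from the FastCGI spec (the constant and helper names are mine, for illustration only):

```python
import struct

FCGI_VERSION = 1
FCGI_BEGIN_REQUEST = 1

def fcgi_header(rec_type: int, request_id: int, content_length: int) -> bytes:
    """Pack the 8-byte FastCGI record header:
    version, type, requestId, contentLength, paddingLength, reserved."""
    return struct.pack(">BBHHBB", FCGI_VERSION, rec_type, request_id,
                       content_length, 0, 0)

# Every message to PHP-FPM starts with a binary record like this...
print(fcgi_header(FCGI_BEGIN_REQUEST, 1, 8).hex())  # 0101000100080000
# ...whereas Varnish opens with a plain-text HTTP request line:
print(b"GET / HTTP/1.1\r\n".hex()[:8])              # 47455420 ("GET ")
```

A request line like `GET / HTTP/1.1` is meaningless to a FastCGI parser, which is why something (here, Nginx on port 81) must translate between the two.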
Laravel, by the way, is a good solution since they “play nice together.”
I will skip the boring LEMP installation guides and focus on the configs.
Virtual host for Nginx: /etc/nginx/conf.d/api.myserver.com.conf
server {
    listen 443 ssl;
    server_name www.api.myserver.com api.myserver.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    location / {
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header HTTPS "on";
    }

    location ~ /\.ht {
        deny all;
    }

    location ~ /\.well-known {
        allow all;
    }

    # I used the letsencrypt service )
    ssl_certificate /etc/letsencrypt/live/myserver.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myserver.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
    listen 80;
    server_name www.api.myserver.com api.myserver.com;
    return 301 https://$host$request_uri;
}
server {
    listen 81;
    server_name api.myserver.com www.api.myserver.com;

    root /var/www/api/public;
    index index.php;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_param HTTPS on;
        fastcgi_pass 127.0.0.1:9000; # or unix:/var/run/php7.4-fpm.sock;
    }
}
For the Varnish config I used a ready-made template; you only have to change the backend port.
# /etc/varnish/default.vcl
...
backend server1 {               # define one backend
    .host = "127.0.0.1";        # IP or hostname of the backend
    .port = "81";               # port Nginx (or whatever) is listening on
    .max_connections = 300;     # that's it
...
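One caveat worth a sketch: by default Varnish refuses to cache requests that carry cookies, and Laravel attaches a session cookie to most responses, so an API behind this setup may never actually hit the cache. A minimal `vcl_recv` fragment (VCL 4 syntax; this is my own assumption about your routes, not part of the template above, so adapt the condition before using it):

```vcl
# Sketch only: strip cookies on idempotent requests so Varnish can cache them.
sub vcl_recv {
    if (req.method == "GET" || req.method == "HEAD") {
        unset req.http.Cookie;
    }
}
```

If some GET endpoints are user-specific, exclude their paths from this condition instead of stripping cookies globally.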
/etc/default/varnish
...
# -a: listen address, -T: admin interface, -f: VCL file,
# -S: secret file, -s: cache storage
DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"
...
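Note that on systemd-based distros, recent Varnish packages ignore /etc/default/varnish and define the daemon options in the systemd unit instead. If your changes have no effect, check the shipped unit with `systemctl cat varnish` and override it; a sketch of such an override (keep your distro's original flags, such as `-F` for foreground mode, and change only the ports):

```
# /etc/systemd/system/varnish.service.d/override.conf
# (this is the path `systemctl edit varnish` creates for you)
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -F -a :6081 -T localhost:6082 \
    -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
```

Run `sudo systemctl daemon-reload && sudo systemctl restart varnish` afterwards.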
That’s all. These simple maneuvers can significantly accelerate a project. I hope this little guide saves you some time and suffering.