DevOps Bird's Eye View on Plone 6

published Oct 13, 2022

Talk by Jens Klein at the Plone Conference 2022 in Namur.

pip is the tool almost every Pythonista learns to use early. Plone 6 installs simply with "pip install -c ... Plone".
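A minimal sketch of such an install, assuming the constraints file published for the 6.0 release series (the exact URL may differ for your target version):

```shell
# Create an isolated virtualenv and install Plone 6, pinned by the
# release constraints file (URL is an assumption; adjust to your release).
python3 -m venv venv
./venv/bin/pip install -c https://dist.plone.org/release/6.0-latest/constraints.txt Plone
```

The constraints file pins all dependency versions to a tested set, which replaces the version pinning Buildout used to provide.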

But it needs more: Zope configuration, adding existing add-ons or developing your own, using newer package versions, configuring RelStorage, CI/CD integration, building Docker images, deployment on Swarm or Kubernetes, an ingress, some load balancing, caching, ...

The talk is not a tutorial but gives a 3000-foot view and acts as a starter to dig deeper.

In the past we only had Plone as the backend. Now there is also a frontend, running as a separate process on a different port. The backend talks to the database.

The Plone frontend is a Node.js project. It pre-renders pages on the first request, and for this it talks to the backend.

The Plone backend is a WSGI application based on Zope, with a complete REST API for the CMS. The backend still has all the Classic UI features in it, even though these are not used with the new default frontend.

You would use pip to install the backend: pip install Plone. But pip alone is not perfect, so I created mxdev to override constraints. Previously Buildout was used to generate a directory structure and configuration files; currently you use cookiecutter-zope-instance for this. And then there is a WSGI server to start Zope; by default this is waitress.
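A sketch of the mxdev part of that workflow, assuming an `mx.ini` with one version override and one source checkout (the add-on name and file names are placeholders following mxdev's conventions):

```ini
; mx.ini - input for mxdev
[settings]
requirements-in = requirements.txt
requirements-out = requirements-mx.txt
constraints-out = constraints-mx.txt
; override a pin from the official Plone constraints
version-overrides =
    plone.restapi==8.32.0

; check out an add-on from source for development
[collective.someaddon]
url = https://github.com/collective/collective.someaddon.git
branch = main
```

Then run `mxdev -c mx.ini` to generate the new requirements and constraints files, and `pip install -r requirements-mx.txt` to apply them. The instance configuration is generated with cookiecutter from the cookiecutter-zope-instance template.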

A web request comes in at the web server (nginx, Traefik). Most requests go to the frontend, API requests go to the backend, and the frontend talks to the backend for the first request. You can scale the backend horizontally by adding more backends (ZEO clients). The backends talk to the ZEO database server, usually with a shared filesystem for the blobs (binary large objects). You could scale the frontend as well, if that turns out to be the bottleneck.

If you use multiple backends or frontends, you will need to put a load balancer in between.

If your sysadmins don't like this ZEO setup, you can use a relational database instead, via the RelStorage package. PostgreSQL would be the best choice, but MySQL should work as well. The blobs are stored as large objects in the database, which performs better within a transaction.
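A sketch of the corresponding storage section in `zope.conf`, assuming a local PostgreSQL; the DSN values are placeholders:

```
%import relstorage
<zodb_db main>
    <relstorage>
        # keep blobs in the database, with a local cache directory
        blob-dir ./var/blobcache
        shared-blob-dir false
        <postgresql>
            dsn dbname='plone' host='localhost' user='plone' password='secret'
        </postgresql>
    </relstorage>
    mount-point /
</zodb_db>
```

With cookiecutter-zope-instance you would express the same settings as answers in its configuration file rather than writing `zope.conf` by hand.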

At some point you need some kind of caching. You can go to a cloud provider for a cache, but you can also do it yourself with a Varnish cache. You put it between the web server and the backend, and/or between the web server and the frontend, and you need to configure it. Under very high load you could even say: cache this request for three seconds.
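A minimal VCL sketch of that "cache everything briefly under load" idea (the backend name, host, and port are assumptions):

```
vcl 4.1;

backend plone {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # Under very high load: cache every response for a few seconds,
    # even responses the application did not mark as cacheable.
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 3s;
        set beresp.uncacheable = false;
    }
}
```

Even a three-second TTL means that a page hammered by thousands of requests per second hits the backend only once every three seconds.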

Now on to hosting. Let's focus on Docker.

Disclaimer: Docker Swarm could die in the long term. But it is nice to start with, and it gives you knowledge that is also useful for Kubernetes.

You get images from Docker Hub or other container registries. Usually you will build your own frontend, so you need to create a custom image for it; you can do the same for the backend. You could build an image on your laptop and upload it, but it is better to build and store images in CI/CD, to avoid big uploads over slow connections.
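A sketch of a custom frontend image as a multi-stage Dockerfile, assuming a Volto-style project with yarn scripts (base image tag and script names are assumptions; check your generated project):

```
# Build stage: install dependencies and compile the frontend bundle
FROM node:16-slim AS build
WORKDIR /app
COPY . .
RUN yarn install --frozen-lockfile && yarn build

# Runtime stage: only the built app, serving the pre-rendering Node server
FROM node:16-slim
WORKDIR /app
COPY --from=build /app .
EXPOSE 3000
CMD ["yarn", "start:prod"]
```

The multi-stage build keeps build-time tooling out of the final image, which keeps uploads to the registry smaller.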

Now it is time to deploy it. Basics: there is Docker Compose, but you should use Docker Swarm with 1 to n nodes, or the same with Kubernetes. You need storage for your database; this could be a managed database from your provider. And you need fixed IPs.

A simple deployment for one site would be a single-node Docker Swarm: one Docker stack with everything included, all in one YAML file. 1 Traefik, 2 frontends, 2 backends, 1 Postgres. You run 2 frontends and 2 backends so that you can upgrade one, bring it up, then upgrade the second and bring it up, so there is no downtime.
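A condensed sketch of such a stack file (image names are placeholders; networks, labels, and secrets are omitted for brevity):

```yaml
version: "3.8"
services:
  traefik:
    image: traefik:v2.9
    ports: ["80:80", "443:443"]
  frontend:
    image: myorg/project-frontend:latest
    deploy:
      replicas: 2                 # two replicas allow rolling upgrades
      update_config:
        order: start-first        # start the new task before stopping the old
  backend:
    image: myorg/project-backend:latest
    deploy:
      replicas: 2
      update_config:
        order: start-first
  db:
    image: postgres:14
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Deployed with `docker stack deploy -c stack.yml mysite`, Swarm replaces the replicas one at a time during an update, which is what gives you the zero-downtime upgrade.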

You can add Varnish in here. Traefik sends all requests there. Varnish either returns a cached response or adds a header; based on that header, Traefik sends the request to the frontend or backend.

Now some tools. With the Traefik dashboard you can inspect what is configured or what is wrong. Traefik has Let's Encrypt built in. Portainer is a UI and management interface for Docker Swarm and Kubernetes; you can add it with an image. You can inspect the state of the cluster, stop and start services, view logs, and open a web console.

You really need a CI/CD system, otherwise this gets nasty with images. You need workflows and a container registry; GitLab and GitHub both have these.
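A sketch of a GitHub Actions workflow that builds and pushes a backend image to the GitHub container registry (the build context directory and image tag are assumptions for illustration):

```yaml
name: build-backend-image
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      packages: write
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v3
        with:
          context: backend        # assumed directory with the Dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}-backend:latest
```

The runner then does the heavy build and upload on a fast connection, and your Swarm or Kubernetes nodes pull the finished image from the registry.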

Want to play with this? See the deployment training.