As part of my private migration project to a new Kubernetes cluster, I decided to write about some of the steps, like the Traefik Proxy and Nextcloud setup, that can also benefit others as a reference, or myself in the future. My target is a self-hosted RKE Kubernetes cluster (v1.19.7) running on VMware vSphere, managed by Rancher. I use it for testing purposes in addition to some private home tools. It also hosts this blog site for now.
What are Traefik Proxy, Cert-Manager and Nextcloud
- Traefik Proxy is an HTTP reverse proxy and load balancer that handles the edge/ingress traffic in Kubernetes and routes paths and sub-paths to a micro-service/container. Traefik Proxy also provides TLS offloading with Let's Encrypt support. Together with Cert-Manager, Traefik makes the TLS setup very easy (see the minimal sketch after this list).
- Cert-Manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources. In my environment I have already set up Cert-Manager with the ACME DNS challenge towards Cloudflare and GoDaddy. I will come back to this part in a later blog post.
- Nextcloud is a self-hosted file storage and synchronization service, comparable to Dropbox and other managed file storage services.
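To illustrate how these pieces fit together, here is a minimal sketch of an Ingress that Traefik would pick up and that cert-manager would issue a certificate for. The names (my-app, my-app-tls, app.example.com, letsencrypt-prod) are placeholders for this illustration, not resources created in this post.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # tells cert-manager which (Cluster)Issuer should issue the TLS secret referenced below
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
  tls:
  - hosts:
    - app.example.com
    secretName: my-app-tls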
Deploy using Helm
Let's start off by installing Traefik with Helm. I assume you have Helm 3 installed on your client machine, pointing to your Kubernetes cluster.
Add Traefik's chart repository to Helm and install Traefik:
helm repo add traefik https://helm.traefik.io/traefik
helm install traefik traefik/traefik --set service.spec.externalTrafficPolicy=Local
To get the real client IP address, set externalTrafficPolicy=Local. There are some downsides to this that you can read about in the MetalLB documentation.
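To verify that Traefik is running and has been assigned an external IP, something like this should do (assuming the chart was installed into the current namespace with its default labels):
kubectl get pods,svc -l app.kubernetes.io/name=traefik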
Cert-Manager by Jetstack
We install cert-manager using Helm here as well.
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace \
--set installCRDs=true
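Before moving on, it is worth checking that the cert-manager pods came up:
kubectl get pods -n cert-manager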
Next, we need to define our ClusterIssuer to use with the ACME DNS challenge. You can also use an Issuer bound to a single namespace, if the whole cluster itself is out of scope. In my environment I use a ClusterIssuer with ACME and DNS names as the selector. It will be used by multiple services later.
As in the example in the Jetstack docs, I use Cloudflare as the DNS provider.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    ...
    solvers:
    - dns01:
        cloudflare:
          email: user@example.com
          apiKeySecretRef:
            name: cloudflare-apikey-secret
            key: api-key
      selector:
        dnsNames:
        - 'example.com'
        - '*.example.com'
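Save and apply the manifest (the filename is arbitrary); note that the issuer will only become Ready once the Cloudflare API key secret from the next step exists:
kubectl apply -f clusterissuer-letsencrypt-staging.yaml
kubectl get clusterissuer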
With Cloudflare you need an API token or API key that cert-manager can use to solve the ACME DNS challenge. I ended up using the Global API key, which might be risky in the long run since it grants full account access. Apply the secret manifest below to the same namespace as cert-manager.
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-apikey-secret
  namespace: cert-manager
type: Opaque
data:
  api-key: Z2xvYmFsLWFwaS10b2tlbg==
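If you prefer not to base64-encode the value by hand, the same secret can be created directly with kubectl (the key value here is of course a placeholder):
kubectl create secret generic cloudflare-apikey-secret \
  --namespace cert-manager \
  --from-literal=api-key='global-api-token'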
To test that it's working, you can request a certificate like this:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test.example.com-cert
spec:
  secretName: test.example.com-tls
  renewBefore: 240h
  dnsNames:
  - test.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
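Apply it and watch the certificate until it reports Ready (the filename is again just an example):
kubectl apply -f test-certificate.yaml
kubectl get certificate test.example.com-cert -w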
To verify that it's working, describe the certificate:
kubectl describe certificate test.example.com-cert
Name:         test.example.com-cert
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:
Spec:
  Dns Names:
    test.example.com
  Issuer Ref:
    Kind:        ClusterIssuer
    Name:        letsencrypt-prod
  Renew Before:  240h0m0s
  Secret Name:   test.example.com-tls
Status:
  Conditions:
    Last Transition Time:  2021-04-07T08:10:06Z
    Message:               Certificate is up to date and has not expired
    Observed Generation:   3
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2021-07-06T07:10:05Z
  Not Before:              2021-04-07T07:10:05Z
  Renewal Time:            2021-06-26T07:10:05Z
  Revision:                1
Events:                    <none>
You will now have a new secret containing a signed certificate in the namespace you used (default here).
kubectl describe secrets test.example.com-tls
Name:         test.example.com-tls
Namespace:    default
Labels:       <none>
Annotations:  cert-manager.io/alt-names: *.example.com
              cert-manager.io/certificate-name: example.com
              cert-manager.io/common-name: test.example.com
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group:
              cert-manager.io/issuer-kind: ClusterIssuer
              cert-manager.io/issuer-name: letsencrypt-prod
              cert-manager.io/uri-sans:
Type:  kubernetes.io/tls
Data
====
tls.crt:  3420 bytes
tls.key:  1679 bytes
Deployment of Nextcloud
In my cluster I use nfs-client-provisioner to dynamically provision Kubernetes volumes. nfs-client-provisioner uses an NFS-exported volume on my TrueNAS box. It creates sub-folders named after the namespace and PersistentVolumeClaim.
FYI: nfs-client-provisioner is now deprecated, and I will look into nfs-subdir-external-provisioner as an alternative, but for now I stick to what I have.
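For reference, switching to the newer provisioner would look something like this; the NFS server address and export path are placeholders for my TrueNAS export:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.10 \
  --set nfs.path=/mnt/tank/k8s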
helm repo add nextcloud https://nextcloud.github.io/helm/
helm install nextcloud-example-com nextcloud/nextcloud \
  --set ingress.enabled=true \
  --set ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-prod \
  --set nextcloud.host=nextcloud.example.com
You should also have a look at the many database options (the internal SQLite database is the default), persistence, cron jobs and other important settings; a sketch of a values file covering these follows below.
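As a rough illustration (not my exact configuration), a values file enabling the bundled MariaDB, persistence on my NFS storage class and the cron sidecar might look like this; the storage class name, size and password are placeholders, and the exact keys can differ between chart versions, so check the chart's values.yaml.
# values-nextcloud.yaml (example only)
nextcloud:
  host: nextcloud.example.com
internalDatabase:
  enabled: false        # disable the default SQLite database
mariadb:
  enabled: true
  auth:
    database: nextcloud
    username: nextcloud
    password: change-me
persistence:
  enabled: true
  storageClass: nfs-client
  size: 50Gi
cronjob:
  enabled: true
It would then be applied with helm upgrade nextcloud-example-com nextcloud/nextcloud -f values-nextcloud.yaml.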
Nextcloud Security
Nextcloud will now start and probably work, but it's not secure yet. Nextcloud has an internal security scanner/status page; you can find it under Settings (admin page), Overview. I find this feature very useful. You can also read about hardening at nextcloud.com. In addition I recommend the scan.nextcloud.com tool.
By default, the internal security scanner gives some warnings after installation:
* The reverse proxy header configuration is incorrect, or you are accessing Nextcloud from a trusted proxy. If not, this is a security issue and can allow an attacker to spoof their IP address as visible to the Nextcloud. Further information can be found in the documentation.
* The "Strict-Transport-Security" HTTP header is not set to at least "15552000" seconds. For enhanced security, it is recommended to enable HSTS as described in the security tips.
* Your web server is not properly set up to resolve "/.well-known/caldav". Further information can be found in the documentation.
* Your web server is not properly set up to resolve "/.well-known/carddav". Further information can be found in the documentation.
The first thing to correct is the trusted reverse proxy. In my environment that is Traefik, running in the traefik namespace.
helm upgrade nextcloud-example-com nextcloud/nextcloud \
  --set nextcloud.extraEnv[0].name=TRUSTED_PROXIES \
  --set nextcloud.extraEnv[0].value=traefik.traefik
Note that helm upgrade resets values you don't pass again to the chart defaults, so either repeat the values from the install or add --reuse-values.
To correct the HSTS and some other headers managed by Traefik, let's deploy a Traefik Middleware. A Middleware is attached to a Router and modifies the request before it is sent to the service (container), or the response before it goes back to the client (browser).
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: nextcloud-headers
  namespace: nextcloud
spec:
  headers:
    frameDeny: true
    sslRedirect: true
    browserXssFilter: true
    # Instructs some browsers to not sniff the mimetype of files. This is used for
    # example to prevent browsers from interpreting text files as JavaScript.
    contentTypeNosniff: true
    # HSTS
    stsIncludeSubdomains: true
    stsPreload: true
    stsSeconds: 31536000
    # X-Frame-Options: prevents embedding of the instance within an iframe on other
    # domains, to prevent clickjacking and similar attacks.
    customFrameOptionsValue: SAMEORIGIN
    referrerPolicy: "no-referrer"
The second Middleware, below, addresses the faulty /.well-known/carddav and /.well-known/caldav resolution.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: nextcloud-redirectregex
  namespace: nextcloud
spec:
  redirectRegex:
    regex: "https://(.*)/.well-known/(card|cal)dav"
    replacement: "https://${1}/remote.php/dav/"
Apply them using kubectl.
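Assuming the two manifests above are saved as nextcloud-headers.yaml and nextcloud-redirectregex.yaml (the filenames are up to you), that would be:
kubectl apply -f nextcloud-headers.yaml -f nextcloud-redirectregex.yaml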
Be aware of how they are addressed later in Kubernetes when defined through the CRD: <middleware-namespace>-<middleware-name>@kubernetescrd
Let's attach them to the router, using annotations on the Nextcloud ingress.
helm upgrade nextcloud-example-com nextcloud/nextcloud \
--set ingress.annotations."traefik\.ingress\.kubernetes\.io/router\.middlewares"=nextcloud-nextcloud-redirectregex@kubernetescrd,nextcloud-nextcloud-headers@kubernetescrd
And that's it. All warnings should now be addressed, and you should be ready to start using Nextcloud.
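As a quick sanity check from outside the cluster, you can verify that the headers Middleware is active; the hostname matches the nextcloud.host value used above:
curl -sI https://nextcloud.example.com | grep -i strict-transport-security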