
Kubernetes

This guide assumes that all prerequisites have been met. Please visit the corresponding Prerequisites page for your infrastructure provider.

Note

You may also use this guide for deployments to other cloud platforms (e.g. Oracle Kubernetes Engine); however, it is your responsibility to satisfy any prerequisites for those platforms. Use at your own risk.

Deploy Tower#

Create a namespace#

Create a namespace to group the Tower resources within your K8s cluster.

  1. Create the namespace (e.g. tower-nf):

    kubectl create namespace tower-nf
    
  2. Switch to the namespace:

    kubectl config set-context --current --namespace=tower-nf
    
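As a quick sanity check (a sketch, not part of the official steps), you can confirm that your current context now points at the new namespace:

```shell
# Sketch: print the namespace of the current kubectl context.
# After the steps above, this should print tower-nf.
current_namespace() {
  kubectl config view --minify --output 'jsonpath={..namespace}'
}
```

Run `current_namespace` before applying any manifests.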

Configure container registry credentials#

Nextflow Tower is distributed as a collection of Docker containers available through the Seqera Labs container registry cr.seqera.io. Contact support to get your container access credentials. Once you have received your credentials, grant your cluster access to the registry using these steps:

  1. Retrieve the name and secret values from the JSON file you received from Seqera Labs support.

  2. Create a Kubernetes Secret, using the name and secret retrieved in step 1, with this command:

    kubectl create secret docker-registry cr.seqera.io \
      --docker-server=cr.seqera.io \
      --docker-username='<YOUR NAME>' \
      --docker-password='<YOUR SECRET>'
    

    Note: The credential name contains a dollar sign ($). Wrap the value in single quotes to prevent the Linux shell from interpreting it as an environment variable.
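If you want to confirm the credentials were stored with the `$` intact, one option (a sketch; `$example-user` and `example-secret` are stand-ins for your real credentials) is to reproduce the base64 `auth` string that kubectl derives from `username:password` and stores inside the Secret's `.dockerconfigjson`:

```shell
# Sketch: rebuild the auth value locally. The single quotes keep the
# leading $ from being expanded by the shell.
user='$example-user'
pass='example-secret'
auth=$(printf '%s:%s' "$user" "$pass" | base64)
echo "$auth"
# Compare against the value stored in the cluster:
#   kubectl get secret cr.seqera.io -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```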

  3. The following snippet configures the Tower cron service and the Tower frontend and backend to use the Secret created in step 2 (see tower-cron.yml and tower-svc.yml):

imagePullSecrets:
        - name: "cr.seqera.io"

This parameter is already included in the templates linked above. If you use a name other than cr.seqera.io for the Kubernetes Secret, update this value accordingly in the configuration files.

Tower ConfigMap#

configmap.yml
 kind: ConfigMap
 apiVersion: v1
 metadata:
   name: tower-backend-cfg
   labels:
     app: backend-cfg
 data:
   TOWER_SERVER_URL: "https://<YOUR PUBLIC TOWER HOST NAME>"
   TOWER_CONTACT_EMAIL: "support@tower.nf"
   TOWER_JWT_SECRET: "ReplaceThisWithALongSecretString"
   TOWER_DB_URL: "jdbc:mysql://<YOUR DB HOST NAME AND PORT>/tower"
   TOWER_DB_DRIVER: "org.mariadb.jdbc.Driver"
   TOWER_DB_USER: "tower"
   TOWER_DB_PASSWORD: "<YOUR DB PASSWORD>"
   TOWER_DB_DIALECT: "io.seqera.util.MySQL55DialectCollateBin"
   TOWER_DB_MIN_POOL_SIZE: "2"
   TOWER_DB_MAX_POOL_SIZE: "10"
   TOWER_DB_MAX_LIFETIME: "180000"
   TOWER_SMTP_HOST: "<YOUR SMTP SERVER HOST NAME>"
   TOWER_SMTP_USER: "<YOUR SMTP USER NAME>"
   TOWER_SMTP_PASSWORD: "<YOUR SMTP USER PASSWORD>"
   TOWER_CRYPTO_SECRETKEY: "<YOUR CRYPTO SECRET>"
   TOWER_LICENSE: "<YOUR TOWER LICENSE KEY>"
   TOWER_ENABLE_PLATFORMS: "awsbatch-platform,gls-platform,azbatch-platform,slurm-platform"
   FLYWAY_LOCATIONS: "classpath:db-schema/mysql"
   TOWER_REDIS_URL: "redis://<YOUR REDIS IP>:6379"
 ---
 kind: ConfigMap
 apiVersion: v1
 metadata:
   name: tower-yml
   labels:
     app: backend-cfg
 data:
   tower.yml: |
     mail:
       smtp:
         auth: true
         # FIXME `starttls` should be enabled with a production SMTP host
         starttls:
           enable: false
           required: false

     # Uncomment to specify the duration of Tower sign-in email link validity
     auth:
       mail:
         duration: 30m

  1. Download and configure configmap.yml as per the configuration page.

  2. Deploy the configmap to your cluster:

    kubectl apply -f configmap.yml
    

Where is my tower.yml?

The configmap.yml manifest includes both tower.env and tower.yml. These files are made available to the other containers through volume mounts.
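Before applying, it can help to verify that every `<YOUR ...>` placeholder in the manifest has been replaced with a real value. A minimal sketch (the helper name is illustrative, not part of Tower):

```shell
# Sketch: fail if a manifest still contains "<YOUR" placeholders.
check_placeholders() {
  if grep -q '<YOUR' "$1"; then
    echo "unresolved placeholders in $1"
    return 1
  fi
  echo "ok"
}
```

For example, run `check_placeholders configmap.yml` before `kubectl apply`.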

Redis#

redis.aks.yml
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
   name: standard
   labels:
     app: redis
   annotations:
     storageclass.beta.kubernetes.io/is-default-class: "true"
 provisioner: kubernetes.io/disk.csi.azure.com
 parameters:
   kind: Managed
   storageaccounttype: Premium_LRS
 allowVolumeExpansion: true
 reclaimPolicy: Retain
 ---
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: redis-data
   labels:
     app: redis
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 10Gi
   storageClassName: standard
 ---
 apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: redis
   labels:
     app: redis
 spec:
   selector:
     matchLabels:
       app: redis
   serviceName: redis
   template:
     metadata:
       labels:
         app: redis
     spec:
       initContainers:
         - name: init-sysctl
           image: busybox
           command:
             - /bin/sh
             - -c
             - |
               sysctl -w net.core.somaxconn=1024;
               echo never > /sys/kernel/mm/transparent_hugepage/enabled
           securityContext:
             privileged: true
           volumeMounts:
             - name: host-sys
               mountPath: /sys
       containers:
         - image: cr.seqera.io/public/redis:5.0.8
           name: redis
           args:
             - --appendonly yes
           ports:
             - containerPort: 6379
           volumeMounts:
             - mountPath: "/data"
               name: "vol-data"
       volumes:
         - name: vol-data
           persistentVolumeClaim:
             claimName: redis-data
         - name: host-sys
           hostPath:
             path: /sys
       restartPolicy: Always
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: redis
   labels:
     app: redis
 spec:
   ports:
     - port: 6379
       targetPort: 6379
   selector:
     app: redis
redis.eks.yml
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
   name: standard
   labels:
     app: redis
   annotations:
     storageclass.beta.kubernetes.io/is-default-class: "true"
 provisioner: kubernetes.io/aws-ebs
 parameters:
   type: gp2
   fsType: ext4
 allowVolumeExpansion: true
 reclaimPolicy: Retain
 ---
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: redis-data
   labels:
     app: redis
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 10Gi
   storageClassName: standard
 ---
 apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: redis
   labels:
     app: redis
 spec:
   selector:
     matchLabels:
       app: redis
   serviceName: redis
   template:
     metadata:
       labels:
         app: redis
     spec:
       initContainers:
         - name: init-sysctl
           image: busybox
           command:
             - /bin/sh
             - -c
             - |
               sysctl -w net.core.somaxconn=1024;
               echo never > /sys/kernel/mm/transparent_hugepage/enabled
           securityContext:
             privileged: true
           volumeMounts:
             - name: host-sys
               mountPath: /sys
       containers:
         - image: cr.seqera.io/public/redis:5.0.8
           name: redis
           args:
             - --appendonly yes
           ports:
             - containerPort: 6379
           volumeMounts:
             - mountPath: "/data"
               name: "vol-data"
       volumes:
         - name: vol-data
           persistentVolumeClaim:
             claimName: redis-data
         - name: host-sys
           hostPath:
             path: /sys
       restartPolicy: Always
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: redis
   labels:
     app: redis
 spec:
   ports:
     - port: 6379
       targetPort: 6379
   selector:
     app: redis
redis.gke.yml
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: redis-data
   labels:
     app: redis
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 10Gi
   storageClassName: standard
 ---
 apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: redis
   labels:
     app: redis
 spec:
   selector:
     matchLabels:
       app: redis
   serviceName: redis
   template:
     metadata:
       labels:
         app: redis
     spec:
       initContainers:
         - name: init-sysctl
           image: busybox
           command:
             - /bin/sh
             - -c
             - |
               sysctl -w net.core.somaxconn=1024;
               echo never > /sys/kernel/mm/transparent_hugepage/enabled
           securityContext:
             privileged: true
           volumeMounts:
             - name: host-sys
               mountPath: /sys
       containers:
         - image: cr.seqera.io/public/redis:5.0.8
           name: redis
           args:
             - --appendonly yes
           ports:
             - containerPort: 6379
           volumeMounts:
             - mountPath: "/data"
               name: "vol-data"
       volumes:
         - name: vol-data
           persistentVolumeClaim:
             claimName: redis-data
         - name: host-sys
           hostPath:
             path: /sys
       restartPolicy: Always
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: redis
   labels:
     app: redis
 spec:
   ports:
     - port: 6379
       targetPort: 6379
   selector:
     app: redis

Download the appropriate manifest for your infrastructure (redis.aks.yml, redis.eks.yml, or redis.gke.yml), then deploy it to your cluster:

kubectl apply -f redis.*.yml

Note

You may also be able to use a managed Redis service such as Amazon ElastiCache or Google Cloud Memorystore; however, we do not explicitly support these services, and Tower is not guaranteed to work with them. Use at your own risk.

If you do use an externally managed Redis service, make sure to update configmap.yml accordingly:

TOWER_REDIS_URL: redis://<redis private IP>:6379
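A quick format check before editing configmap.yml can catch a missing scheme or port early. A sketch (the helper is illustrative, not part of Tower):

```shell
# Sketch: accept only URLs of the form redis://<host>:<port>.
check_redis_url() {
  case "$1" in
    redis://*:[0-9]*) echo "ok" ;;
    *) echo "invalid: expected redis://<host>:<port>" ;;
  esac
}
```

For example, `check_redis_url 'redis://10.0.0.5:6379'` prints `ok`.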

Tower cron service#

tower-cron.yml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: cron
   labels:
     app: cron
 spec:
   selector:
     matchLabels:
       app: cron
   template:
     metadata:
       labels:
         app: cron
     spec:
       imagePullSecrets:
         - name: "cr.seqera.io"
       volumes:
         - name: config-volume
           configMap:
             name: tower-yml
       initContainers:
         - name: migrate-db
           image: cr.seqera.io/private/nf-tower-enterprise/backend:v23.1.0
           command: ["sh", "-c", "/migrate-db.sh"]
           envFrom:
             - configMapRef:
                 name: tower-backend-cfg
           volumeMounts:
             - name: config-volume
               mountPath: /tower.yml
               subPath: tower.yml
       containers:
         - name: backend
           image: cr.seqera.io/private/nf-tower-enterprise/backend:v23.1.0
           envFrom:
             - configMapRef:
                 name: tower-backend-cfg
           volumeMounts:
             - name: config-volume
               mountPath: /tower.yml
               subPath: tower.yml
           env:
             - name: MICRONAUT_ENVIRONMENTS
               value: "prod,redis,cron"
           ports:
             - containerPort: 8080
           readinessProbe:
             httpGet:
               path: /health
               port: 8080
             initialDelaySeconds: 5
             timeoutSeconds: 3
           livenessProbe:
             httpGet:
               path: /health
               port: 8080
             initialDelaySeconds: 5
             timeoutSeconds: 3
             failureThreshold: 10

  1. Download the tower-cron.yml manifest.

  2. Deploy to your cluster:

kubectl apply -f tower-cron.yml

Wait for completion

This container will create the required database schema the first time it is instantiated. This process can take a few minutes to complete and must be finished before you instantiate the Tower backend. Make sure this container is in the READY state before proceeding to the next step.
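One way to block until the migration has finished (a sketch; `deployment/cron` matches the name used in tower-cron.yml) is to wait on the rollout:

```shell
# Sketch: wait up to 10 minutes for the cron deployment, including its
# migrate-db init container, to report a successful rollout.
wait_for_cron() {
  kubectl rollout status deployment/cron --timeout=600s
}
```

Run `wait_for_cron` before applying tower-svc.yml.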

Tower frontend and backend#

tower-svc.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  selector:
    matchLabels:
      app: backend
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      imagePullSecrets:
        - name: "cr.seqera.io"
      volumes:
        - name: config-volume
          configMap:
            name: tower-yml
      containers:
        - name: backend
          image: cr.seqera.io/private/nf-tower-enterprise/backend:v23.1.0
          envFrom:
            - configMapRef:
                name: tower-backend-cfg
          env:
            - name: MICRONAUT_ENVIRONMENTS
              value: "prod,redis,ha"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-volume
              mountPath: /tower.yml
              subPath: tower.yml
          resources:
            requests:
              cpu: "1"
              memory: "1200Mi"
            limits:
              memory: "4200Mi"
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            failureThreshold: 10
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      imagePullSecrets:
        - name: "cr.seqera.io"
      containers:
        - name: frontend
          image: cr.seqera.io/private/nf-tower-enterprise/frontend:v23.1.0
          ports:
            - containerPort: 80
      restartPolicy: Always
---
# Services
apiVersion: v1
kind: Service
metadata:
  name: backend
  labels:
    app: backend
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: backend
---
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  type: NodePort
  selector:
    app: backend
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: "frontend"
---

Download the tower-svc.yml manifest, then deploy it to your cluster:

kubectl apply -f tower-svc.yml

Tower ingress#

An ingress is used to make Tower publicly accessible, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. The included ingress will create an external IP address and forward HTTP traffic to the Tower frontend.

ingress.aks.yml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: front-ingress
   annotations:
     kubernetes.io/ingress.class: azure/application-gateway
 spec:
   rules:
     - host: YOUR-TOWER-HOST-NAME
       http:
         paths:
            - path: /*
              pathType: ImplementationSpecific
              backend:
                service:
                  name: frontend
                  port:
                    number: 80
ingress.eks.yml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: front-ingress
   annotations:
     kubernetes.io/ingress.class: alb
     alb.ingress.kubernetes.io/scheme: internet-facing
     alb.ingress.kubernetes.io/certificate-arn: YOUR-CERTIFICATE-ARN
     alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
     alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
     alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
     alb.ingress.kubernetes.io/load-balancer-attributes: >
       idle_timeout.timeout_seconds=301,
       routing.http2.enabled=false,
       access_logs.s3.enabled=true,
       access_logs.s3.bucket=YOUR-LOGS-S3-BUCKET,
       access_logs.s3.prefix=YOUR-LOGS-PREFIX
 spec:
   rules:
     - host: YOUR-TOWER-HOST-NAME
       http:
          paths:
            - path: /*
              pathType: ImplementationSpecific
              backend:
                service:
                  name: ssl-redirect
                  port:
                    name: use-annotation
            - path: /*
              pathType: ImplementationSpecific
              backend:
                service:
                  name: frontend
                  port:
                    number: 80
ingress.gke.yml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: front-ingress
   annotations:
     kubernetes.io/ingress.class: "gce"
 spec:
   rules:
     - host: YOUR-TOWER-HOST-NAME
       http:
         paths:
           - path: /*
             pathType: ImplementationSpecific
             backend:
               service:
                 name: frontend
                 port:
                   number: 80

Download the appropriate manifest (ingress.aks.yml, ingress.eks.yml, or ingress.gke.yml), configure it for your infrastructure, then deploy it to your cluster:

kubectl apply -f ingress.*.yml

See the Kubernetes documentation on Ingress for more information. If you don't need to make Tower externally accessible, you can also use a NodePort or a LoadBalancer service to make it accessible within your intranet.
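Once the ingress is provisioned, you can retrieve its external address (a sketch; `front-ingress` is the name used in the manifests above, and depending on the provider the address is reported as an IP or a hostname):

```shell
# Sketch: print the ingress's external IP or hostname, whichever is set.
ingress_address() {
  kubectl get ingress front-ingress \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}'
}
```

Point your DNS record for TOWER_SERVER_URL at the address this returns.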

Additionally, see the relevant cloud provider documentation for configuring an Ingress on your platform.

Check status#

Finally, make sure that all services are up and running:

kubectl get pods
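To narrow the output to pods that have not reached the Running phase (e.g. stuck in ImagePullBackOff because the registry Secret is missing), a sketch:

```shell
# Sketch: list pods whose phase is not Running. Note this also matches
# Succeeded pods, so completed jobs may appear here on a healthy cluster.
unhealthy_pods() {
  kubectl get pods --field-selector=status.phase!=Running
}
```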

Test the application#

To make sure that Tower is properly configured, follow these steps:

  1. Log in to Tower.

  2. Create an organization.

  3. Create a workspace within that organization.

  4. Create a new Compute Environment. Refer to Compute Environments for detailed instructions.

  5. Select Quick Launch from the Launchpad tab in your workspace.

  6. Enter the repository URL for the nf-core/rnaseq pipeline (https://github.com/nf-core/rnaseq).

  7. In the Config profiles dropdown, select the test profile.

  8. In the Pipeline parameters text area, change the output directory to a sensible location based on your Compute Environment:

    # save to S3 bucket
    outdir: s3://<your-bucket>/results
    
    # save to scratch directory (Kubernetes)
    outdir: /scratch/results
    
  9. Select Launch.

    You'll be redirected to the Runs tab for the workflow. After a few minutes, you'll see the progress logs in that workflow's Execution log tab.

Optional addons#

Database console#

dbconsole.yml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: dbconsole
   labels:
     app: dbconsole
 spec:
   selector:
     matchLabels:
       app: dbconsole
   template:
     metadata:
       labels:
         app: dbconsole
     spec:
       containers:
         - image: adminer:4.7.7
           name: dbconsole
           ports:
             - containerPort: 8080
       restartPolicy: Always
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: dbconsole
 spec:
   ports:
     - port: 8080
       targetPort: 8080
       protocol: TCP
   type: NodePort
   selector:
     app: dbconsole

The included dbconsole.yml manifest deploys a simple web frontend to the Tower database. It is not required, but it can be useful for administrative purposes.

  1. Deploy the database console:

    kubectl apply -f dbconsole.yml
    
  2. Port-forward the database console to your local machine:

    kubectl port-forward deployment/dbconsole 8080:8080
    

    The database console will be available in your browser at http://localhost:8080.

High availability#

When configuring Tower for high availability, note the following:

  • The cron service must run as a single instance

  • The backend service can run as multiple replicas

  • The frontend service can also be replicated, though in most scenarios this is unnecessary
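With those constraints in mind, scaling out is a matter of adding backend replicas, for example (a sketch):

```shell
# Sketch: scale the backend deployment; never scale deployment/cron
# beyond a single replica.
scale_backend() {
  kubectl scale deployment/backend --replicas="$1"
}
```

For example, `scale_backend 3` runs three backend replicas behind the backend Service.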
