This is internal documentation. There is a good chance you’re looking for something else. See Disclaimer.

Set up Application/Service on OpenShift

Walkthrough for setting up a service on OpenShift. In this guide the service is first set up manually and then exported to Ansible to ease management and recreation of the service.

Prepare a Docker Image

First, you need a Docker image that can be deployed.

Sometimes upstream images from Docker Hub, or elsewhere, can be used without modification.

However, if an upstream image isn’t an option, check out Get Started in the official Docker documentation, particularly if you’re new to Docker. Many languages and build automation tools have their own set of standards for building a Docker image; if you’re looking for a more advanced guide, you may be better off searching for one specific to your stack. Spring Boot with Docker is an example of such a guide.

When building your own image, ensure all necessary runtime configuration options are made available via environment variables.
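
A minimal sketch of what this can look like in an entrypoint script (the variable name APP_LISTEN_PORT and the binary path are made-up examples):

#!/bin/sh
# Fall back to a sensible default if the variable isn't set.
: "${APP_LISTEN_PORT:=8080}"
exec /app/service --port "${APP_LISTEN_PORT}"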

Note

OpenShift differs from running bare Docker in some regards. You should be aware of the following caveats:

Missing Write Permission

OpenShift runs containers as a non-root user by default. Some available images may not work as non-root, and it may be necessary to adjust permissions on files and directories the application needs to write to. Grant write access by assigning the file or directory to group 0 and granting that group write access. To this end, add the following to the Dockerfile:

RUN chgrp -R 0 /some/directory \
  && chmod -R g+w /some/directory

Username Required

By default, the user a container runs as has neither a username nor a $HOME directory. However, some applications require one. To be able to add one at runtime, grant group 0 write access to /etc/passwd in the Dockerfile:

RUN chgrp 0 /etc/passwd \
  && chmod g+w /etc/passwd

Then, as part of the entrypoint script, add an entry for the user:

USERNAME=user
HOME=/APP
echo "${USERNAME}:x:$(id -u):0::${HOME}:/sbin/nologin" >> /etc/passwd

For Java applications, an alternative approach is to set the user.home property. Setting it to / is often sufficient.
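
For example, assuming the JVM in the image honours the JAVA_TOOL_OPTIONS environment variable (OpenJDK does), the property can be baked into the image via the Dockerfile:

ENV JAVA_TOOL_OPTIONS="-Duser.home=/"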

See Also

OpenShift Container Platform-Specific Guidelines.

Create Project

Usually, it’s best to have an OpenShift/Kubernetes project for any independent service. Services belonging together may also be placed in the same project.

Note

This description uses the Kubernetes deployment rather than the OpenShift deploymentconfig to reduce our dependency on OpenShift. There is a distinct possibility that we’ll move to a non-OpenShift Kubernetes platform some day.

Here is how to create a project:

# if not logged in yet
oc login

oc new-project ${PROJECT_NAME}

Tip

If a test system is desired, the test system should have the same name as production except that it has an additional -test suffix.

It’s recommended that you only create production manually and use Ansible to create the second project, the test system. This way you can be sure the project can be (re-)created via Ansible.

Initial, Manual Deployment

Note

Technical note:

The Docker image may be fetched as a result of scaling, pod evacuation, pod restart or compute node restart. Consequently, the availability of the registry is highly important. To minimize the risk that depending on an external registry entails, all images are stored in OpenShift’s registry.

Also note that Docker Hub, Docker’s default registry, has a strict rate limit which we’d likely hit when a compute node is evicted, fails or is restarted.

Note

${PROJECT_NAME}

Name of the project.

A project is an OpenShift concept and is internally implemented as a Kubernetes namespace. If you see the word namespace, it can usually be treated as a synonym for project.

${SERVICE_NAME}

A name given to the service within the project. If you created a project for just this service, reuse ${PROJECT_NAME} as ${SERVICE_NAME}.

  1. Login to Docker Registry:

    docker login -u any --password-stdin registry.apps.openshift.tocco.ch < <(oc whoami -t)
    
  2. Either a) fetch and upload an existing image (e.g. from Docker Hub):

    docker pull ${IMAGE_FROM_ELSEWHERE}
    docker tag ${IMAGE_FROM_ELSEWHERE} registry.apps.openshift.tocco.ch/${PROJECT_NAME}/${SERVICE_NAME}
    docker push registry.apps.openshift.tocco.ch/${PROJECT_NAME}/${SERVICE_NAME}
    

    or b) build an image locally and upload it:

    cd ${DIRECTORY_WITH_DOCKERFILE}
    docker build -t registry.apps.openshift.tocco.ch/${PROJECT_NAME}/${SERVICE_NAME} .
    docker push registry.apps.openshift.tocco.ch/${PROJECT_NAME}/${SERVICE_NAME}
    
  3. Switch to the project:

    oc project ${PROJECT_NAME}
    
  4. Create application/service on OpenShift:

    kubectl create deployment ${SERVICE_NAME} --image=image-registry.openshift-image-registry.svc:5000/${PROJECT_NAME}/${SERVICE_NAME}
    


    See also Creating an application from an image.

  5. Configure service via environment variables:

    oc set env deployment ${SERVICE_NAME} KEY1=VALUE1 KEY2=VALUE2
    

    Settings for publicly available images are usually documented in the README of their respective repository.

    You can also list the environment variables:

    oc set env deployment ${SERVICE_NAME} --list
    
  6. Add DNS record for service:

    See ${INSTALLATION_NAME}.tocco.ch. Add a record for the test system too.

  7. Expose service to the public:

    kubectl expose deployment ${SERVICE_NAME} --port 80 --target-port ${TARGET_PORT}
    kubectl create ingress ${SERVICE_NAME} --rule "${HOSTNAME}/=${SERVICE_NAME}:80,tls=tls-${SERVICE_NAME}"
    
    • ${HOSTNAME}: FQDN such as api.tocco.ch

    • ${TARGET_PORT}: Port on which service is listening in Docker container

    Hint

    It’s also possible to make a service available at a specific path. For instance, a service can be made available at https://service.tocco.ch/api/v2:

    kubectl create ingress ${SERVICE_NAME} --rule "${HOSTNAME}/api/v2=${SERVICE_NAME}:80,tls=tls-${SERVICE_NAME}"
    
  8. Issue a TLS certificate:

    kubectl annotate ingress/${SERVICE_NAME} cert-manager.io/cluster-issuer=letsencrypt-production \
    cert-manager.io/private-key-rotation-policy=Always
    

    This can take some time. See Troubleshooting if the certificate isn’t issued within 15 minutes.
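
    While waiting, progress can be checked via cert-manager’s resources (a sketch; assumes the cert-manager custom resources are readable with your permissions):

    oc get certificate,certificaterequest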

    Tip

    By default, HTTP requests are redirected to HTTPS. This is recommended for any service where the user is expected to access the service directly (by typing the address in a browser’s address bar). Any other service should refuse requests via HTTP to help detect accidental use.

    Refuse requests via HTTP:

    oc patch route ${ROUTE_NAME} -p '{"spec": {"tls": {"insecureEdgeTerminationPolicy": "None"}}}'
    

    Whenever an ingress is created, a corresponding route is created automatically. The policy needs to be set on that route; find ${ROUTE_NAME} with oc get route.

  9. Tell clients to always use HTTPS:

    kubectl annotate ingress/${SERVICE_NAME} haproxy.router.openshift.io/hsts_header=max-age=62208000
    

    See Strict-Transport-Security and Enabling HTTP strict transport security

  10. Add persistent storage (if required)

    All storage is non-persistent by default. If you need any file or directory to survive an application restart, create a persistent volume:

    oc set volume deployment/${SERVICE_NAME} --add --name ${VOLUME_NAME} --claim-name ${VOLUME_NAME} --claim-size ${N}Gi --mount-path ${PATH}
    

    Tip

    Some images out there require that temporary files can be written to a certain directory but the user doesn’t have write access to it. Remember that the user in the container isn’t root on OpenShift (unlike bare Docker). In this case, you can add an empty, writable directory:

    oc set volume deployment/${SERVICE_NAME} --add --name ${VOLUME_NAME} --type emptyDir --mount-path ${PATH}
    

    Unlike persistent storage, this storage won’t survive pod termination and its content isn’t shared among instances.


  11. Request CPU and Memory

    CPU and memory should always be requested. Set them to the CPU and memory usage expected during normal operation:

    kubectl set resources deployment ${SERVICE_NAME} --requests cpu=${N},memory=${N}Mi
    

    For instance, to request 0.05 CPUs and 256 MiB of memory, pass this:

    --requests cpu=0.05,memory=256Mi
    

    This information is required by Kubernetes to assign resources properly. See Motivation for CPU requests and limits.

    Hint

    Request the right amount of CPU and memory

    You can observe the CPU and memory usage either locally or while running on OpenShift. Locally, you can use top; on OpenShift, kubectl top pods. Make sure you observe CPU and memory use under load.

    The values should approximate the average CPU and memory use during regular workloads.

    --limits

    Use --limits cpu=${N},memory=${N}Mi to limit maximum resource consumption. This shouldn’t usually be required.

    Be warned that any application exceeding the memory limit is terminated immediately.
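
    For example, to cap the service at half a CPU and 512 MiB of memory (values purely illustrative):

    kubectl set resources deployment ${SERVICE_NAME} --limits cpu=0.5,memory=512Mi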

    Missing kubectl

    If kubectl isn’t available, update/reinstall the client as per Setup OpenShift Client.

  12. Set Java options

    Java options, commonly passed as java -OPTION1 -OPTION2, can be set via the JAVA_TOOL_OPTIONS environment variable.

    At least the following options should be set:

    env:
    # ...
    - name: JAVA_TOOL_OPTIONS
      value: >
        -Xms128m
        -Xmx512m
        -XX:+ExitOnOutOfMemoryError
    
        -XX:+UseShenandoahGC
        -XX:+UnlockExperimentalVMOptions
        -XX:ShenandoahUncommitDelay=30000
        -XX:ShenandoahGuaranteedGCInterval=30000
    
        -Dcom.sun.management.jmxremote
        -Dcom.sun.management.jmxremote.authenticate=false
        -Dcom.sun.management.jmxremote.host=127.0.0.1
        -Dcom.sun.management.jmxremote.port=30200
        -Dcom.sun.management.jmxremote.ssl=false
        -Dcom.sun.management.jmxremote.rmi.port=30200
        -Djava.rmi.server.hostname=127.0.0.1
    

    -Xms... and -Xmx...

    Min. and max. heap memory available, respectively, for the application. Adjust as needed.

    Docker images may offer alternative ways to adjust these settings. Check the documentation for the respective image.

    Always set -Xmx. By default, the JVM derives the maximum heap from the memory available on the machine, which is rarely appropriate.

    -XX:+ExitOnOutOfMemoryError

    Without this, when the application runs out of memory, an OutOfMemoryError is thrown in an arbitrary thread. Most applications have not been designed with this in mind and background threads, in particular, are not properly restarted. With this option, the application exits and is properly restarted by Kubernetes.

    -XX:...

    GC settings

    Copy the settings verbatim as shown above.

    ShenandoahGC is used to ensure unused memory is uncommitted (=returned to the OS).

    -D...jmx / -D...rmi...

    Settings required to enable JMX port.

    See Debugging via JMX

    Copy the settings verbatim as shown above.
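
    If you prefer to set these options on the manual deployment from the command line, the same environment variable can be set with oc set env as in the earlier step (shortened example; use the full option list from above):

    oc set env deployment ${SERVICE_NAME} JAVA_TOOL_OPTIONS='-Xms128m -Xmx512m -XX:+ExitOnOutOfMemoryError'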

Replicas

It’s generally a good idea to have multiple instances (=replicas) of a service running for redundancy and to spread load.

By default, production services run three replicas and test systems two.

Set number of replicas:

oc scale --replicas=${N} deployment/${SERVICE_NAME}

Important

Some services do not support multiple replicas by design. These are usually applications that store some sort of database on disk (e.g. Postgres, Solr).

For such services, it’s crucial that the Recreate deployment strategy is used. It ensures the old instance is stopped before a new instance is rolled out.
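
To switch a deployment to the Recreate strategy, a patch along these lines can be used (a sketch; it also clears the rollingUpdate settings, which are not allowed with Recreate):

kubectl patch deployment ${SERVICE_NAME} -p '{"spec": {"strategy": {"type": "Recreate", "rollingUpdate": null}}}'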

Test The Service

At this point the service should be operational. Verify everything works as expected.

Should the service not be available, try to debug the issue:

  • Use https:// explicitly.

    Service is not available via http:// by default.

  • Check project status:

    oc status
    
  • Check logs of any running pod (if any):

    oc logs deployment/${SERVICE_NAME}
    

    Or check log of a specific pod:

    oc get pods
    oc logs ${POD_NAME}
    
  • On permission errors, see warnings in Prepare a Docker Image.

  • List all resources:

    oc get all
    

    Use column NAME as ${RESOURCE} in the following commands.

  • Check resources (pay attention to Events at the bottom):

    oc describe ${RESOURCE}
    
  • Edit resource:

    oc edit ${RESOURCE}
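
  • Check reachability from the command line (quick sketch; assumes curl is installed):

    curl -v https://${HOSTNAME}/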
    

Export Service to Ansible

Services are located in the services/ directory within the Ansible repository and are structured like this:

services/roles/${SERVICE_NAME}/

Ansible role for managing the service.

See Role directory structure in the Ansible documentation.

services/playbook.yml

Playbook for all services.

Call the role from here for test and production.

Use a tag to only configure a specific service:

cd ${ANSIBLE_REPO}/services
ansible-playbook playbook.yml -t ${SERVICE_NAME}

  1. Add an entry (for prod and test) in playbook.yml.

    It should look something like this (for production):

    - name: ${HUMAN_READABLE_SERVICE_NAME} production
      import_role:
        name: ${SERVICE_NAME}
        vars_from: prod
      tags: [ ${SERVICE_NAME}, ${SERVICE_NAME}-prod ]
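
    The corresponding test entry looks the same except for the vars file and tags (sketch derived from the production entry above):

    - name: ${HUMAN_READABLE_SERVICE_NAME} test
      import_role:
        name: ${SERVICE_NAME}
        vars_from: test
      tags: [ ${SERVICE_NAME}, ${SERVICE_NAME}-test ]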
    
  2. Create services/roles/${SERVICE_NAME}/vars/{prod,test}.yml.

    These files contain variables specific to production or test. They should contain at least the following:

    # Name of the OpenShift/Kubernetes project
    k8s_project: ${PROJECT_NAME}
    
    # Hostname at which service is reachable
    #
    # {{ … }} is a Jinja2 template and will be replaced
    # by Ansible at runtime.
    hostname: '{{ k8s_project }}.tocco.ch'
    

    Tip

    Variables shared by production and test should be added to services/roles/${SERVICE_NAME}/defaults/main.yml. Variables defined in the vars/ directory, like the ones above, will override the ones in defaults/.
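
    For illustration, a shared variable in defaults/main.yml could look like this (name and value are made up):

    # Port on which the service listens inside the container,
    # identical for production and test.
    target_port: 8080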

  3. Create resources in services/roles/${SERVICE_NAME}/tasks/main.yml

    See services/roles/image-service/tasks/main.yml for an example.

    Manually add the following:

    1. Create project and grant access:

      - name: create project
        k8s:
          host: '{{ k8s_endpoint }}'
          api_key: '{{ k8s_auth_token }}'
          api_version: project.openshift.io/v1
          name: '{{ k8s_project }}'
          kind: ProjectRequest  # This is what happens in the background when
                                # `oc new-project` is invoked.
        register: result
        # 'Conflict' is seen when the project already exists.
        failed_when: result.failed|default(false) and result.reason != 'Conflict'
        when: not ansible_check_mode

      - name: grant access to group
        k8s:
          host: '{{ k8s_endpoint }}'
          api_key: '{{ k8s_auth_token }}'
          force: true  # =replace
          namespace: '{{ k8s_project }}'
          api_version: rbac.authorization.k8s.io/v1
          kind: RoleBinding
          name: '{{ item.group_name }}'
          resource_definition:
            roleRef:
              apiGroup: rbac.authorization.k8s.io
              kind: ClusterRole
              name: '{{ item.cluster_role_name }}'
            subjects:
            - apiGroup: rbac.authorization.k8s.io
              kind: Group
              name: '{{ item.group_name }}'
        loop:
        - group_name: tocco-admin
          cluster_role_name: admin
        - group_name: tocco-dev
          cluster_role_name: admin
      
    2. Create account for use by GitLab (used to push Docker image):

      # Creating a service account without secrets. Those are created
      # automatically if omitted.
      #
      # See https://docs.openshift.com/container-platform/3.6/dev_guide/service_accounts.html#dev-managing-service-accounts
      - name: create service account for use by GitLab
        k8s:
          host: '{{ k8s_endpoint }}'
          api_key: '{{ secrets2.openshift_ansible_token }}'
          resource_definition: "{{ lookup('template', 'service_account_gitlab.yml') }}"
      
      - name: create rolebinding
        k8s:
          host: '{{ k8s_endpoint }}'
          api_key: '{{ secrets2.openshift_ansible_token }}'
          resource_definition: "{{ lookup('template', 'rolebinding_gitlab.yml') }}"
      
    3. Create rolebinding_gitlab.yml and service_account_gitlab.yml in services/roles/${SERVICE_NAME}/templates/. Copy both files from the image-service, verbatim.

    4. Export the remaining resources.

      At least the following resources (${RESOURCE}) need to be exported:

      • deployment/${SERVICE_NAME}

      • ingress/${SERVICE_NAME}

      • service/${SERVICE_NAME}

      Other resources you created may need to be exported too, e.g. additional ingresses, services, secrets, etc. If you’re unsure, have a look at the list of all resources:

      oc get all
      

      Generated resources like pods and replica sets need not be exported.

      For every resource, add a task to tasks/main.yml. Example for an ingress:

      - name: create ingress
        k8s:
          host: '{{ k8s_endpoint }}'
          api_key: '{{ secrets2.openshift_ansible_token }}'
          namespace: '{{ k8s_project }}'
          resource_definition: "{{ lookup('template', 'ingress.yml') }}"
      

      See services/roles/image-service/tasks/main.yml for more examples.

      Next export the resource (e.g. route, service, deployment):

      oc get ${RESOURCE} -o yaml
      

      Place the resulting definition in services/roles/${SERVICE_NAME}/templates/. Use the same file name as you used in tasks/main.yml (ingress.yml in the example above).

      Postprocess the resulting YAML files:

      • Strip unwanted metadata like the uid, selfLink or resourceVersion.

      • Replace the project name with the Jinja2 template {{ k8s_project }} (see the sketch after this list).

      • Replace any values that need to be different in prod and test with {{ }} templates and ensure the variables are defined in vars/ (see above).

      • Replace any secret (like password) with {{ secrets2.${SOME_NAME} }} and add a secret with name ${SOME_NAME} to secrets2.yml.

      See services/roles/image-service/templates/ for examples.
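
      As a sketch of the postprocessing, a namespace line exported as

        metadata:
          namespace: my-project

      (where my-project stands in for the real project name) becomes

        metadata:
          namespace: '{{ k8s_project }}'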

  4. Run Ansible for production:

    cd ${ANSIBLE_REPO}/services
    ansible-playbook playbook.yml -t ${SERVICE_NAME}-prod
    
  5. Run Ansible for test:

    cd ${ANSIBLE_REPO}/services
    ansible-playbook playbook.yml -t ${SERVICE_NAME}-test
    

    Ensure the test system is available. (Issuing a TLS certificate may take some time.) See also Test the Service.

    Remember to switch to the right project for debugging:

    oc project ${PROJECT_NAME}-test
    

Setup Repository for Deploying Docker Image

  1. Create a repository on GitLab for the application you’re deploying if there is none yet.

  2. Set up build pipeline on GitLab

    Either, using an upstream image:

    If you’re using an upstream image (e.g. from Docker Hub), set up a new repository with a pipeline for deploying production and test. Use .gitlab-ci.yml of image-service as a template.

    or, alternatively, when building your own image:

    If you’re building your own image, be sure anything needed to build it is in a repository: the Dockerfile, resources and possibly the application/service itself. Then use the .gitlab-ci.yml of address-provider as a template for a pipeline to deploy production and test.

  3. Create environment variables on GitLab containing the tokens needed to push the Docker images. In the example .gitlab-ci.yml linked above, those are called OC_TOKEN_PROD and OC_TOKEN_TEST for production and test respectively.

    1. Obtain the token

      Find the token name (first item listed in Tokens):

      $ oc describe serviceaccount gitlab
      Name:                gitlab
      Namespace:           image-service
      Labels:              <none>
      Annotations:         <none>
      Image pull secrets:  gitlab-dockercfg-w8g9l
      Mountable secrets:   gitlab-token-l7krz
                           gitlab-dockercfg-w8g9l
      Tokens:              gitlab-token-l7krz
                           gitlab-token-wlk5f
      Events:              <none>

      Get secret:

      $ oc describe secret gitlab-token-l7krz
      Name:         gitlab-token-l7krz
      Namespace:    image-service
      Labels:       <none>
      Annotations:  kubernetes.io/service-account.name: gitlab
                    kubernetes.io/service-account.uid: 53686829-bf86-11eb-888f-fa163e3ec73a
      
      Type:  kubernetes.io/service-account-token
      
      Data
      ====
      ca.crt:          2137 bytes
      namespace:       18 bytes
      service-ca.crt:  3253 bytes
      tokens:          eyJhbGciOi<yes this really long string is the token you want>nL4JSZmHCg
    2. On GitLab, go to the repository and find Settings → CI/CD → Variables → Add Variable

      Create variables called OC_TOKEN_PROD and OC_TOKEN_TEST with the respective tokens. Be sure to check Protect variable during creation. (A sketch of how the pipeline uses these tokens follows after this list.)
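
      To illustrate how the pipeline uses these variables (a sketch only; the linked .gitlab-ci.yml files are authoritative), it logs in to the registry with the token before pushing the image:

      echo "${OC_TOKEN_PROD}" | docker login -u any --password-stdin registry.apps.openshift.tocco.ch
      docker push registry.apps.openshift.tocco.ch/${PROJECT_NAME}/${SERVICE_NAME}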

Monitoring

Set up monitoring as documented in Configuring Monitoring.

Documentation

  • List service on Infrastructure Overview.

    • Concisely describe what the service is used for.

    • Mention how to deploy it.

    • Mention where to find the definition in Ansible.

    A more detailed, full-page documentation may be appropriate for some services. Add a link to that document here. Also, link any relevant upstream documentation.

  • Add a link to the documentation on Read the Docs in GitLab’s README file.

Updates

Services need to be updated regularly. Even if the application itself doesn’t change, dependencies and the underlying Docker images should be updated.

Define a schedule for updating and make sure the people responsible are informed. The address-provider and image-service are both updated as part of creating a release branch. Consider this as an option and, if appropriate, update the documentation at New Release with instructions.