Basically, it is a cluster of pods on its own DNS subdomain. This means that inside a CTA namespace, `ping frontend` pings the CTA frontend of the current instance; the same goes for `mgm` and the other services defined in the namespace.
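As a quick illustration (a minimal sketch, assuming the `ctacli` pod is running and `ping` is available in its image):
```
# run ping from inside the instance: `frontend` resolves within the namespace's DNS subdomain
kubectl --namespace=<instance_namespace> exec ctacli -- ping -c 2 frontend
```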
## setting up the CTA `kubernetes` infrastructure
All the needed tools are self-contained in the `CTA` repository.
This allows the system tests and all the required tools to be versioned together with the code being tested.
Setting up the system test infrastructure therefore only means checking out the `CTA` repository on a kubernetes cluster: a `ctadev` system, for example.
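For example (the GitLab URL below is an assumption; adjust it to wherever you clone `CTA` from):
```
# check out the CTA repository on a kubernetes-enabled host (e.g. a ctadev machine)
git clone https://gitlab.cern.ch/cta/CTA.git
cd CTA/continuousintegration/orchestration
```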
## List existing test instances
This just means listing the current kubernetes namespaces:
```
[root@ctadevqa03 ~]# kubectl get namespace
NAME          STATUS    AGE
default       Active    18d
kube-system   Active    3d
```
Here we just have the 2 kubernetes *system* namespaces, and therefore no test instance.
## Create a `kubernetes` test instance
For example, to create the `ctatest` CTA test instance, simply launch `./create_instance.sh` from the `CTA/continuousintegration/orchestration` directory with your choice of arguments.
By default it will use a file-based objectstore and an SQLite database, but you can use an Oracle database and/or a Ceph-based objectstore by specifying them on the command line.
Objectstore configmap files and database configmap files are available on `cta/dev/ci` hosts in `/opt/kubernetes/CTA/objectstore/config` and `/opt/kubernetes/CTA/database/config` respectively; those directories are managed by Puppet, and the accounts configured on your machine are yours.
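A hedged example of what an invocation can look like (the `-n`, `-o` and `-d` flags are assumptions; check the script's usage output for the exact options):
```
cd CTA/continuousintegration/orchestration
# hypothetical flags: -n namespace, -o objectstore configmap, -d database configmap
./create_instance.sh -n ctatest \
  -o /opt/kubernetes/CTA/objectstore/config/<your_objectstore_configmap>.yaml \
  -d /opt/kubernetes/CTA/database/config/<your_database_configmap>.yaml
```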
**YOU ARE RESPONSIBLE FOR ENSURING THAT ONLY 1 INSTANCE USES 1 EXCLUSIVE REMOTE RESOURCE.
RUNNING 2 INSTANCES WITH THE SAME REMOTE RESOURCE WILL CREATE CONFLICTS IN THE WORKFLOWS AND IT WILL BE YOUR FAULT**
After all those WARNINGS, let's create a CTA test instance that uses **your** Oracle database and **your** Ceph objectstore.
```
[root@ctadevjulien CTA]# cd continuousintegration/orchestration/
Configuring cta SSS for ctafrontend access from ctaeos.....................OK
Waiting for EOS to be configured........OK
Instance ctatest successfully created:
NAME          READY     STATUS      RESTARTS   AGE
ctacli        1/1       Running     0          1m
ctaeos        1/1       Running     0          1m
ctafrontend   1/1       Running     0          1m
init          0/1       Completed   0          2m
kdc           1/1       Running     0          1m
tpsrv         2/2       Running     0          1m
```
This script starts by creating the `ctatest` namespace. It uses the latest CTA docker image available in the gitlab registry; if there is no image available for the current commit, it will fail. It then creates the services in this namespace, so that when the pods implementing those services are created, the network and DNS names are already defined.
For convenience, we can export `NAMESPACE`, set to `ctatest` in this case, so that we can simply execute `kubectl` commands in our current instance with `kubectl --namespace=${NAMESPACE} ...`.
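For example:
```
export NAMESPACE=ctatest
# all subsequent commands target the ctatest instance
kubectl --namespace=${NAMESPACE} get pods
```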
The last part is the creation of the pods in the namespace, which is performed in 2 steps:
1. run the `init` pod, which creates the db and the objectstore and labels the tapes
2. launch the other pods, which rely on the work of the `init` pod, once its status is `Completed`, meaning that the init script exited correctly (see the check below)
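A quick way to check that step 1 has finished (a minimal sketch, assuming `NAMESPACE` is exported as above):
```
# the instance is usable once the init pod reports the Completed status
kubectl --namespace=${NAMESPACE} get pod init
```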
Now the CTA instance is ready and the test can be launched.
Here are the various pods in our `ctatest` namespace:
```
[root@ctadevjulien orchestration]# kubectl get namespace
NAME          STATUS    AGE
ctatest       Active    4m
default       Active    88d
kube-system   Active    88d
[root@ctadevjulien orchestration]# kubectl get pods -a --namespace ctatest
NAME          READY     STATUS      RESTARTS   AGE
ctacli        1/1       Running     0          3m
ctaeos        1/1       Running     0          3m
ctafrontend   1/1       Running     0          3m
init          0/1       Completed   0          4m
kdc           1/1       Running     0          3m
tpsrv         2/2       Running     0          3m
```
Everything looks fine; you can even check the logs of the `eos` mgm. Once the test has been launched against the instance, a successful run ends with output like this:
```
Waiting for file to be archived to tape: Seconds passed = 0
...
OK: all tests passed
```
If something goes wrong, please check the logs from the various containers running on the pods. The pods and the containers they run, as defined during dev meetings, are listed below (a sketch after the list shows how to list the containers of a given pod):
1. `init`
    * Initialises the system, e.g. labels tapes using `cat`, creates/wipes the catalogue database, creates/wipes the CTA objectstore and makes sure all drives are empty and tapes are back in their home slots.
2. `kdc`
    * Runs the KDC server for authenticating EOS end-users and CTA tape operators.
3. `eoscli`
    * Hosts the command-line tools to be used by an end-user of EOS, namely `xrdcp`, `xrdfs` and `eos`.
4. `ctaeos`
    * One EOS mgm.
    * One EOS fst.
    * One EOS mq.
    * The `cta` command-line tool to be run by the EOS workflow engine to communicate with the CTA front end.
    * The EOS Simple Shared Secret (SSS) to be used by the EOS mgm and EOS fst to authenticate each other.
    * The CTA SSS to be used by the `cta` command-line tool to authenticate itself, and therefore the EOS instance, with the CTA front end.
    * The tape server SSS to be used by the EOS mgm to authenticate file transfer requests from the tape servers.
5. `ctacli`
    * The `cta` command-line tool to be used by tape operators.
6. `ctafrontend`
    * One CTA front-end.
    * The CTA SSS of the EOS instance that will be used by the CTA front end to authenticate the `cta` command-line tool run by the workflow engine of the EOS instance.
7. `tpsrvXXX` (*no two pods in the same namespace can have the same name, hence each `tpsrv` pod is numbered differently*)
    * One `cta-taped` daemon running in the `taped` container of the `tpsrvxxx` pod.
    * One `rmcd` daemon running in the `rmcd` container of the `tpsrvxxx` pod.
    * The tape server SSS to be used by `cta-taped` to authenticate its file transfer requests with the EOS mgm (all tape servers use the same SSS).
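To see which containers a given pod actually runs (their names are needed for `kubectl logs -c`), a minimal sketch:
```
# print the container names of the tpsrv pod in the current instance
kubectl --namespace=${NAMESPACE} get pod tpsrv -o jsonpath='{.spec.containers[*].name}'
```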
# post-mortem analysis
## logs
An interesting feature is the ability to collect the logs from the various processes running in containers on the various pods.
Kubernetes collects `stdout` and `stderr` from any container and can add timestamps to ease post-mortem analysis.
For example, you can retrieve the timestamped logs of the first container of the `init` pod:
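A minimal sketch (the `init` pod has a single container here; add `-c <container>` to select a specific one in multi-container pods):
```
# timestamped logs of the init pod, available even after it has completed
kubectl --namespace=${NAMESPACE} logs init --timestamps
```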
From there, you need to configure the objectstore environment variables by sourcing `/tmp/objectstore-rc.sh`, install the missing `protocolbuffer` tools (such as the `protoc` binary), and then dump all the objects you want:
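A hedged sketch of this workflow; the `cta-objectstore-*` tool names below are assumptions, so substitute whatever objectstore tools ship with your CTA build:
```
# set the objectstore environment variables for this instance
source /tmp/objectstore-rc.sh
# list the objects, then dump the one you are interested in (tool names are assumptions)
cta-objectstore-list
cta-objectstore-dump-object <object_name>
```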
## Delete a `kubernetes` test instance
Deletion removes the infrastructure that was created and populated during the creation procedure: it deletes the database content, drops the schema, and removes all the objects in the objectstore, independently of the type of resources used (Ceph objectstore, file-based objectstore, Oracle database, SQLite database...).
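In practice this is driven by a companion script of `create_instance.sh`; a minimal sketch (the script name and `-n` flag are assumptions mirroring the creation step):
```
cd CTA/continuousintegration/orchestration
# tear down the ctatest instance and clean up its database and objectstore content
./delete_instance.sh -n ctatest
```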
## gitlab runners
Configure the runners for the `cta` project and add some specific tags for tape-library-related jobs; I chose `mhvtl` and `kubernetes` for the `ctadev` runners.
This allows those specific runners to be used for the CTA tape-library-specific tests, while other jobs can use the shared runners.
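A hedged sketch of a non-interactive runner registration with those tags (URL, token, executor and description are placeholders to adapt):
```
# register a runner dedicated to tape-library jobs (placeholders: URL and TOKEN)
gitlab-runner register --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <TOKEN> \
  --executor shell \
  --description "ctadev runner" \
  --tag-list "mhvtl,kubernetes"
```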
A small issue: by default, the `gitlab-runner` service runs as the `gitlab-runner` user, which makes it impossible to run some tests as `root` inside the pods, since this user does not have the privileges to run all the needed commands.
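One possible workaround, sketched here as an assumption rather than the configuration actually used, is to reinstall the service so that it runs as `root`:
```
# re-install the gitlab-runner service to run as root (working directory is an example)
gitlab-runner uninstall
gitlab-runner install --user root --working-directory /root
systemctl restart gitlab-runner
```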