.. ############################################################################
.. Copyright (c) 2019 AT&T, ENEA AB, Nokia and others                        #
..                                                                           #
.. Licensed under the Apache License, Version 2.0 (the "License");           #
.. you may not use this file except in compliance with the License.          #
..                                                                           #
.. You may obtain a copy of the License at                                   #
..     http://www.apache.org/licenses/LICENSE-2.0                            #
..                                                                           #
.. Unless required by applicable law or agreed to in writing, software       #
.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT #
.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.          #
.. See the License for the specific language governing permissions and      #
.. limitations under the License.                                            #
.. ############################################################################
The Makefile in this directory is used to build and push all
the validation containers. The default registry is **akraino** on
Docker Hub, but only CI Jenkins slaves are authorized to push
images to that registry. If you want to push to your own test registry, set
the REGISTRY variable as in the commands below.

To build and push the images:

.. code-block:: console

    make all [ REGISTRY=<dockerhub_registry> ]

To just build the containers, use the command:

.. code-block:: console

    make build-all [ REGISTRY=<dockerhub_registry> ]
The k8s container
=================

Building and pushing the container
----------------------------------

To build just the k8s container, use the command:

.. code-block:: console

    make k8s-build [ REGISTRY=<dockerhub_registry> ]

To both build and push the container, use the command:

.. code-block:: console

    make k8s [ REGISTRY=<dockerhub_registry> ]
Running the container
---------------------

The k8s image is meant to be run from a server that has access to the
Kubernetes cluster (a Jenkins slave, jumpserver, etc.).

Before running the image, copy the folder ~/.kube from your Kubernetes
master node to a local folder (e.g. /home/jenkins/k8s_access).

The container needs to be started with the Kubernetes access folder mounted.
Optionally, the results folder can be mounted as well; this way the logs are
stored on the local server.

.. code-block:: console

    docker run -ti -v /home/jenkins/k8s_access:/root/.kube/ \
       -v /home/jenkins/k8s_results:/opt/akraino/results/ \
       akraino/validation:k8s-latest

By default, the container will run the k8s conformance test. If you want to
enter the container, add */bin/sh* at the end of the command above.
The mariadb container
=====================

Building and pushing the container
----------------------------------

To build just the mariadb container, use the command:

.. code-block:: console

    make mariadb-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make mariadb [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Running the container
---------------------

In order for the container to be easily created, the deploy.sh script has
been developed. This script accepts the following input parameters:

- CONTAINER_NAME, the name of the container, default value is akraino-validation-mariadb
- MARIADB_ROOT_PASSWORD, the desired mariadb root user password, this variable is required
- MARIADB_AKRAINO_PASSWORD, the desired mariadb akraino user password, this variable is required
- UI_ADMIN_PASSWORD, the desired Blueprint Validation UI password for the admin user, this variable is required
- UI_AKRAINO_PASSWORD, the desired Blueprint Validation UI password for the akraino user, this variable is required
- REGISTRY, the registry of the mariadb image, default value is akraino
- NAME, the name of the mariadb image, default value is validation
- TAG_PRE, the first part of the image version, default value is mariadb
- TAG_VER, the last part of the image version, default value is latest
- MARIADB_HOST_PORT, the port on which mariadb is exposed on the host, default value is 3307
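Taken together, REGISTRY, NAME, TAG_PRE and TAG_VER appear to compose the full image reference seen in the run examples elsewhere in this file (e.g. akraino/validation:k8s-latest). A small sketch of that assumed composition:

```shell
# Assumed composition of the image reference from the documented defaults;
# the <registry>/<name>:<tag_pre>-<tag_ver> pattern matches the
# akraino/validation:mariadb-latest style names used in this README.
REGISTRY=akraino
NAME=validation
TAG_PRE=mariadb
TAG_VER=latest

IMAGE="${REGISTRY}/${NAME}:${TAG_PRE}-${TAG_VER}"
echo "$IMAGE"
```

With the defaults above this yields akraino/validation:mariadb-latest; overriding REGISTRY or NAME at build time changes the reference accordingly.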
In order to deploy the container, this script can be executed with the
appropriate parameters.

Example (assuming the default variables have been utilized for building the
image using the make command):

.. code-block:: console

    cd validation/docker/mariadb
    ./deploy.sh MARIADB_ROOT_PASSWORD=root_password MARIADB_AKRAINO_PASSWORD=akraino_password UI_ADMIN_PASSWORD=admin UI_AKRAINO_PASSWORD=akraino
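deploy.sh receives its settings as KEY=VALUE arguments, as in the example above. The repository's actual parsing code is not shown here; the following is a minimal, hypothetical sketch of the usual pattern for consuming such arguments:

```shell
# Hypothetical sketch of KEY=VALUE argument handling; the real deploy.sh
# may differ. Defaults mirror the documented ones.
CONTAINER_NAME=akraino-validation-mariadb
MARIADB_HOST_PORT=3307

# Simulated command line (what ./deploy.sh would receive):
set -- CONTAINER_NAME=my-db MARIADB_ROOT_PASSWORD=secret

for arg in "$@"; do
  key="${arg%%=*}"     # part before the first '='
  value="${arg#*=}"    # part after the first '='
  case "$key" in
    CONTAINER_NAME|MARIADB_HOST_PORT|MARIADB_ROOT_PASSWORD)
      eval "${key}=\"\${value}\"" ;;   # accept only known keys
    *)
      echo "unknown parameter: $key" >&2 ;;
  esac
done

echo "$CONTAINER_NAME"
```

Unrecognized keys are reported rather than silently exported, which keeps a typo in a required password parameter from going unnoticed.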
Also, in order to re-deploy the database (it is assumed that the
corresponding mariadb container has been stopped and deleted) while the
persistent storage already exists (currently, the directory /var/lib/mariadb
of the host is used), a different approach should be used after the image
build process.

To this end, another script has been developed, namely
deploy_with_existing_persistent_storage.sh, which easily deploys the
container. This script accepts the following input parameters:

- CONTAINER_NAME, the name of the container, default value is akraino-validation-mariadb
- REGISTRY, the registry of the mariadb image, default value is akraino
- NAME, the name of the mariadb image, default value is validation
- TAG_PRE, the first part of the image version, default value is mariadb
- TAG_VER, the last part of the image version, default value is latest
- MARIADB_HOST_PORT, the port on which mariadb is exposed on the host, default value is 3307

In order to deploy the container, this script can be executed with the
appropriate parameters.

Example (assuming the default variables have been utilized for building the
image using the make command):

.. code-block:: console

    cd validation/docker/mariadb
    ./deploy_with_existing_persistent_storage.sh

More info can be found in the UI README file.
The UI container
================

Building and pushing the container
----------------------------------

To build just the UI container, use the command:

.. code-block:: console

    make ui-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make ui [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Running the container
---------------------

In order for the container to be easily created, the deploy.sh script has
been developed. This script accepts the following input parameters:

- CONTAINER_NAME, the name of the container, default value is akraino-validation-ui
- DB_IP_PORT, the IP and port of the mariadb instance, this variable is required
- MARIADB_AKRAINO_PASSWORD, the mariadb akraino user password, this variable is required
- REGISTRY, the registry of the UI image, default value is akraino
- NAME, the name of the UI image, default value is validation
- TAG_PRE, the first part of the image version, default value is ui
- TAG_VER, the last part of the image version, default value is latest
- JENKINS_URL, the URL of the Jenkins instance (the http or https scheme must be included), the default value is 'https://jenkins.akraino.org/'
- JENKINS_USERNAME, the Jenkins user name, the default value is 'demo' (in the context of UI full control loop mode, this parameter must be changed to a real Jenkins user)
- JENKINS_USER_PASSWORD, the Jenkins user password, the default value is 'demo' (in the context of UI full control loop mode, this parameter must be changed to a real Jenkins user password)
- JENKINS_JOB_NAME, the name of the Jenkins job capable of executing the blueprint validation tests, the default value is 'validation' (in the context of UI full control loop mode, this parameter must be changed to a real Jenkins job name)
- NEXUS_PROXY, the proxy needed in order for the Nexus server to be reachable, default value is none
- JENKINS_PROXY, the proxy needed in order for the Jenkins server to be reachable, default value is none
- CERTDIR, the directory where the SSL certificates can be found, default value is the working directory, where self-signed certificates exist only for demo purposes
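If you do not want to rely on the bundled demo certificates, CERTDIR can point at a directory holding your own pair. A sketch of generating one with openssl; the cert.pem/key.pem file names are an assumption here, so check what the UI actually expects:

```shell
# Generate a throw-away self-signed certificate pair. The file names
# cert.pem/key.pem are illustrative only, not necessarily the names
# the UI looks for inside CERTDIR.
mkdir -p /tmp/ui_certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout /tmp/ui_certs/key.pem -out /tmp/ui_certs/cert.pem
```

Passing CERTDIR=/tmp/ui_certs to deploy.sh would then make that directory available to the container instead of the demo certificates.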
Note that, for a functional UI, the following prerequisites are needed:

- The mariadb container in an up-and-running state
- A Jenkins instance capable of running the blueprint validation tests (this is optional and is needed only for UI full control loop mode)
- A Nexus repo in which all the test results are stored

More info can be found in the UI README file.

In order to deploy the container, the aforementioned script can be executed
with the appropriate parameters.

Example (assuming the default variables have been utilized for building the
image using the make command):

.. code-block:: console

    cd validation/docker/ui
    ./deploy.sh DB_IP_PORT=172.17.0.3:3306 MARIADB_AKRAINO_PASSWORD=akraino_password
The kube-conformance container
==============================

Building and pushing the container
----------------------------------

To build just the kube-conformance container, use the command:

.. code-block:: console

    make kube-conformance-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make kube-conformance [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Description
-----------

This is a standalone container able to launch Kubernetes end-to-end tests,
for the purposes of conformance testing.

It is a thin wrapper around the `e2e.test` binary in the upstream Kubernetes
distribution, which drops results in a predetermined location for use as a
`Heptio Sonobuoy <https://github.com/heptio/sonobuoy>`_ plugin.

To learn more about conformance testing and its Sonobuoy integration, read the
`conformance guide <https://github.com/heptio/sonobuoy/blob/master/docs/conformance-testing.md>`_.

Running the container
---------------------

.. code-block:: console

    docker run -ti akraino/validation:kube-conformance-v1.15

By default, the container will run the `run_e2e.sh` script. If you want to
enter the container, add */bin/sh* at the end of the command above.

Normally, this container is not used directly, but instead leveraged via
Sonobuoy.
The sonobuoy-plugin-systemd-logs container
==========================================

Building and pushing the container
----------------------------------

To build just the sonobuoy-plugin-systemd-logs container, use the command:

.. code-block:: console

    make sonobuoy-plugin-systemd-logs-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make sonobuoy-plugin-systemd-logs [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Description
-----------

This is a simple standalone container that gathers log information from
systemd, by chrooting into the node's filesystem and running `journalctl`.

This container is used by `Heptio Sonobuoy <https://github.com/heptio/sonobuoy>`_
for gathering host logs in a Kubernetes cluster.

Running the container
---------------------

.. code-block:: console

    docker run -ti akraino/validation:sonobuoy-plugin-systemd-logs-latest

By default, the container will run the `get_systemd_logs.sh` script. If you
want to enter the container, add */bin/sh* at the end of the command above.

Normally, this container is not used directly, but instead leveraged via
Sonobuoy.
The openstack container
=======================

Building and pushing the container
----------------------------------

To build just the openstack container, use the command:

.. code-block:: console

    make openstack-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make openstack [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Running the container
---------------------

The openstack image is meant to be run from a server that has access to the
OpenStack deployment (a Jenkins slave, jumpserver, etc.).

Before running the image, copy the OpenStack deployment environment variables
file (openrc) to a local folder (e.g. /root/openrc).

The container needs to be started with the openrc file mounted. Optionally,
test cases can be excluded from execution via a mounted blacklist file.

The results folder can be mounted as well; this way the logs are
stored on the local server.

.. code-block:: console

    docker run -ti -v /home/jenkins/openrc:/root/openrc \
       -v /home/jenkins/blacklist.txt:/opt/akraino/validation/tests/openstack/tempest/blacklist.txt \
       -v /home/jenkins/openstack_results:/opt/akraino/results/ \
       akraino/validation:openstack-latest
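The blacklist file follows the usual Tempest/stestr skip-list convention: one test-name regular expression per line, with `#` starting a comment. The entries below are illustrative examples only, not a recommended skip list:

```
# Illustrative entries only -- use regexes matching the tests you
# actually want to exclude.
tempest.api.compute.test_quotas
tempest.scenario.test_network_basic_ops.*
```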
The helm container
==================

Building and pushing the container
----------------------------------

To build just the helm container, use the command:

.. code-block:: console

    make helm-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make helm [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Running the container
---------------------

The container needs to be started with the SSH key file mounted. User
credentials can be provided via a mounted variables.yaml file.

The results folder can be mounted as well; this way the logs are
stored on the local server.

.. code-block:: console

    docker run -ti -v /home/jenkins/openrc:/root/openrc \
       -v /home/foobar/.ssh/id_rsa:/root/.ssh/id_rsa \
       -v /home/foobar/variables.yaml:/opt/akraino/validation/tests/variables.yaml \
       -v /home/foobar/helm_results:/opt/akraino/results/ \
       akraino/validation:helm-latest