.. ############################################################################
.. Copyright (c) 2019 AT&T, ENEA AB, Nokia and others                         #
..                                                                            #
.. Licensed under the Apache License, Version 2.0 (the "License");            #
.. you may not use this file except in compliance with the License.           #
.. You may obtain a copy of the License at                                    #
..                                                                            #
..     http://www.apache.org/licenses/LICENSE-2.0                             #
..                                                                            #
.. Unless required by applicable law or agreed to in writing, software        #
.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT  #
.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.           #
.. See the License for the specific language governing permissions and        #
.. limitations under the License.                                             #
.. ############################################################################
The Makefile in this directory is used to build and push all
the validation containers. The default registry is **akraino** on
Docker Hub, but only CI Jenkins slaves are authorized to push
images to that registry. If you want to push to your own test registry,
set the REGISTRY variable as in the commands below.
To build and push the images:

.. code-block:: console

    make all [ REGISTRY=<dockerhub_registry> ]

To just build the containers, use the command:

.. code-block:: console

    make build-all [ REGISTRY=<dockerhub_registry> ]
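For example, to build and push everything to a personal Docker Hub
account (``mydockeruser`` below is a placeholder for your own account),
log in first and override REGISTRY:

.. code-block:: console

    # "mydockeruser" is a placeholder for your own Docker Hub account
    docker login
    make all REGISTRY=mydockeruser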
The k8s container
=================

Building and pushing the container
----------------------------------
To build just the k8s container, use the command:

.. code-block:: console

    make k8s-build [ REGISTRY=<dockerhub_registry> ]

To both build and push the container, use the command:

.. code-block:: console

    make k8s [ REGISTRY=<dockerhub_registry> ]
Using the container
-------------------

The k8s image is meant to be run from a server that has access to the
Kubernetes cluster (a Jenkins slave, jumpserver, etc.).
Before running the image, copy the ~/.kube folder from your Kubernetes
master node to a local folder (e.g. /home/jenkins/k8s_access).
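One way to do this is with scp (a sketch; the hostname and paths below
are placeholders for your own environment):

.. code-block:: console

    # "master" is a placeholder for your Kubernetes master node
    mkdir -p /home/jenkins/k8s_access
    scp -r root@master:/root/.kube/. /home/jenkins/k8s_access/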
The container needs to be started with the Kubernetes access folder
mounted. Optionally, the results folder can be mounted as well; this
way the logs are stored on the local server.
.. code-block:: console

    docker run -ti -v /home/jenkins/k8s_access:/root/.kube/ \
    -v /home/jenkins/k8s_results:/opt/akraino/results/ \
    akraino/validation:k8s-latest
By default, the container runs the k8s conformance test. If you want to
enter the container instead, add */bin/sh* at the end of the command
above.
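For example, to get an interactive shell instead of running the
conformance test:

.. code-block:: console

    docker run -ti -v /home/jenkins/k8s_access:/root/.kube/ \
    akraino/validation:k8s-latest /bin/sh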
The mysql container
===================

Building and pushing the container
----------------------------------
To build just the mysql container, use the command:

.. code-block:: console

    make mysql-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make mysql [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Using the container
-------------------

To make the container easy to create, the deploy.sh script has been
developed. This script accepts the following input parameters:
- CONTAINER_NAME, the name of the container, default value is akraino-validation-mysql
- MYSQL_ROOT_PASSWORD, the desired mysql root user password, this variable is required
- MYSQL_USER, the desired mysql user, the default value is 'akraino'
- MYSQL_PASSWORD, the desired mysql user password, this variable is required
- REGISTRY, the registry of the mysql image, default value is akraino
- NAME, the name of the mysql image, default value is validation
- TAG_PRE, the first part of the image version, default value is mysql
- TAG_VER, the last part of the image version, default value is latest
In order to deploy the container, this script can be executed with the
appropriate parameters.

Example (assuming the default variables were used when building the
image with the make command):

.. code-block:: console

    cd validation/docker/mysql
    ./deploy.sh --MYSQL_ROOT_PASSWORD root_password --MYSQL_PASSWORD akraino_password
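After the script finishes, the deployment can be checked with the
standard docker commands (using the default container name), e.g.:

.. code-block:: console

    docker ps --filter name=akraino-validation-mysql
    docker logs akraino-validation-mysql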
Also, in order to re-deploy the database while the persistent storage
already exists (currently, the 'akraino-validation-mysql' docker volume
is used), a different approach should be used after the image building
process. It is assumed that the corresponding mysql container has been
stopped and deleted.
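Assuming the default container name, the old container can be stopped
and deleted without touching the named volume:

.. code-block:: console

    docker stop akraino-validation-mysql
    docker rm akraino-validation-mysql
    # the persistent storage survives the removal:
    docker volume ls --filter name=akraino-validation-mysql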
To this end, another script has been developed, namely
deploy_with_existing_persistent_storage.sh, which easily deploys the
container. This script accepts the following input parameters:
- CONTAINER_NAME, the name of the container, default value is akraino-validation-mysql
- REGISTRY, the registry of the mysql image, default value is akraino
- NAME, the name of the mysql image, default value is validation
- TAG_PRE, the first part of the image version, default value is mysql
- TAG_VER, the last part of the image version, default value is latest
In order to deploy the container, this script can be executed with the
appropriate parameters.

Example (assuming the default variables were used when building the
image with the make command):

.. code-block:: console

    cd validation/docker/mysql
    ./deploy_with_existing_persistent_storage.sh
More info can be found in the UI README file.
The ui container
================

Building and pushing the container
----------------------------------
To build just the UI container, use the command:

.. code-block:: console

    make ui-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make ui [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Using the container
-------------------

To make the container easy to create, the deploy.sh script has been
developed. This script accepts the following input parameters:
- CONTAINER_NAME, the name of the container, default value is akraino-validation-ui
- DB_IP_PORT, the IP and port of the mysql instance, this variable is required
- MYSQL_USER, the mysql user, the default value is 'akraino'
- MYSQL_PASSWORD, the mysql user password, this variable is required
- REGISTRY, the registry of the ui image, default value is akraino
- NAME, the name of the ui image, default value is validation
- TAG_PRE, the first part of the image version, default value is ui
- TAG_VER, the last part of the image version, default value is latest
- JENKINS_URL, the URL of the Jenkins instance (http or https must be defined), the default value is 'https://jenkins.akraino.org/'
- JENKINS_USERNAME, the Jenkins user name, the default value is 'demo' (in the context of UI full control loop mode, this parameter must be changed to a real Jenkins user)
- JENKINS_USER_PASSWORD, the Jenkins user password, the default value is 'demo' (in the context of UI full control loop mode, this parameter must be changed to a real Jenkins user password)
- JENKINS_JOB_NAME, the name of the Jenkins job capable of executing the blueprint validation tests, the default value is 'validation' (in the context of UI full control loop mode, this parameter must be changed to a real Jenkins job name)
- NEXUS_PROXY, the proxy needed in order for the Nexus server to be reachable, default value is none
- JENKINS_PROXY, the proxy needed in order for the Jenkins server to be reachable, default value is none
- CERTDIR, the directory where the SSL certificates can be found, default value is the working directory, where self-signed certificates exist only for demo purposes
- ENCRYPTION_KEY, the key that should be used by the AES algorithm for encrypting passwords stored in the database, this variable is required
- UI_ADMIN_PASSWORD, the desired Blueprint Validation UI password for the admin user, this variable is required
- TRUST_ALL, the variable that defines whether the UI should trust all certificates or not, default value is false
- USE_NETWORK_HOST, the variable that defines whether the UI container should run in 'network host' mode or not, default value is false
Note that, for a functional UI, the following prerequisites are needed:

- The mysql container in up-and-running state
- A Jenkins instance capable of running the blueprint validation tests (optional, needed only for UI full control loop mode)
- A Nexus repo in which all the test results are stored

More info can be found in the UI README file.
In order to deploy the container, the aforementioned script can be
executed with the appropriate parameters.

Example (assuming the default variables were used when building the
image with the make command):

.. code-block:: console

    cd validation/docker/ui
    ./deploy.sh --DB_IP_PORT 172.17.0.3:3306 --MYSQL_PASSWORD akraino_password --ENCRYPTION_KEY AGADdG4D04BKm2IxIWEr8o== --UI_ADMIN_PASSWORD admin
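After the script finishes, the UI container status and startup logs can
be inspected with the standard docker commands (using the default
container name):

.. code-block:: console

    docker ps --filter name=akraino-validation-ui
    docker logs -f akraino-validation-ui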
The kube-conformance container
==============================

Building and pushing the container
----------------------------------
To build just the kube-conformance container, use the command:

.. code-block:: console

    make kube-conformance-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make kube-conformance [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Using the container
-------------------

This is a standalone container able to launch Kubernetes end-to-end
tests, for the purposes of conformance testing.

It is a thin wrapper around the ``e2e.test`` binary in the upstream
Kubernetes distribution, which drops results in a predetermined location
for use as a `Heptio Sonobuoy <https://github.com/heptio/sonobuoy>`_
plugin.

To learn more about conformance testing and its Sonobuoy integration,
read the `conformance guide
<https://github.com/heptio/sonobuoy/blob/master/docs/conformance-testing.md>`_.
.. code-block:: console

    docker run -ti akraino/validation:kube-conformance-v1.16
By default, the container runs the ``run_e2e.sh`` script. If you want to
enter the container, add */bin/sh* at the end of the command above.

Normally, this container is not used directly, but instead leveraged via
Sonobuoy.
The sonobuoy-plugin-systemd-logs container
==========================================

Building and pushing the container
----------------------------------
To build just the sonobuoy-plugin-systemd-logs container, use the
command:

.. code-block:: console

    make sonobuoy-plugin-systemd-logs-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make sonobuoy-plugin-systemd-logs [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Using the container
-------------------

This is a simple standalone container that gathers log information from
systemd, by chrooting into the node's filesystem and running
``journalctl``.

This container is used by `Heptio Sonobuoy
<https://github.com/heptio/sonobuoy>`_ for gathering host logs in a
Kubernetes cluster.
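Conceptually, the plugin does something like the following (a
simplified sketch; the mount point and journalctl options shown are
assumptions, the real ones are defined by the plugin's own script):

.. code-block:: console

    # /node is an assumed mount point for the host's root filesystem
    chroot /node journalctl --output=short -n 1000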
.. code-block:: console

    docker run -ti akraino/validation:sonobuoy-plugin-systemd-logs-latest
By default, the container runs the ``get_systemd_logs.sh`` script. If
you want to enter the container, add */bin/sh* at the end of the command
above.

Normally, this container is not used directly, but instead leveraged via
Sonobuoy.
The openstack container
=======================

Building and pushing the container
----------------------------------
To build just the openstack container, use the command:

.. code-block:: console

    make openstack-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make openstack [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Using the container
-------------------

The openstack image is meant to be run from a server that has access to
the OpenStack deployment (a Jenkins slave, jumpserver, etc.).

Before running the image, copy the OpenStack deployment environment
variables (openrc) to a local folder (e.g. /root/openrc).

The container needs to be started with the openrc file mounted.
Optionally, test cases can be excluded from execution via a mounted
blacklist file.

The results folder can be mounted as well; this way the logs are stored
on the local server.
.. code-block:: console

    docker run -ti -v /home/jenkins/openrc:/root/openrc \
    -v /home/jenkins/blacklist.txt:/opt/akraino/validation/tests/openstack/tempest/blacklist.txt \
    -v /home/jenkins/openstack_results:/opt/akraino/results/ \
    akraino/validation:openstack-latest
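The blacklist file lists the tempest test cases to skip, one entry per
line. A minimal sketch (the test names below are illustrative only):

.. code-block:: console

    # /home/jenkins/blacklist.txt - example entries (illustrative only)
    tempest.api.compute.servers.test_attach_interfaces
    tempest.scenario.test_network_basic_ops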
The helm container
==================

Building and pushing the container
----------------------------------
To build just the helm container, use the command:

.. code-block:: console

    make helm-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make helm [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Using the container
-------------------

The container needs to be started with the SSH key file mounted. User
credentials can be provided via a mounted variables.yaml file.
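A minimal sketch of such a variables.yaml (the keys shown here are
hypothetical; consult the test suite documentation for the exact names
it expects):

.. code-block:: yaml

    # hypothetical keys - consult the test suite for the real ones
    host: 10.0.0.10
    username: root
    ssh_keyfile: /root/.ssh/id_rsa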
The results folder can be mounted as well; this way the logs are stored
on the local server.
.. code-block:: console

    docker run -ti -v /home/jenkins/openrc:/root/openrc \
    -v /home/foobar/.ssh/id_rsa:/root/.ssh/id_rsa \
    -v /home/foobar/variables.yaml:/opt/akraino/validation/tests/variables.yaml \
    -v /home/foobar/helm_results:/opt/akraino/results/ \
    akraino/validation:helm-latest